http://arxiv.org/abs/2307.00282v1
Published: 2023-07-01 09:32:29
Title: A nontopological soliton in an $\mathcal{N} = 1$ supersymmetric gauge Abelian model
Author: A. Yu. Loginov ([email protected])
Affiliation: Laboratory of Applied Mathematics and Theoretical Physics, Tomsk State University of Control Systems and Radioelectronics, 634050 Tomsk, Russia
Primary category: hep-th; Categories: hep-th
A version of 𝒩 = 1 supersymmetric scalar electrodynamics is
considered here, and it is shown that an electrically charged nontopological
soliton exists in this model.
In addition to the long-range electric field, the soliton also possesses a
long-range scalar field, which leads to a modification of the intersoliton
interaction potential at large distances.
The supersymmetry of the model makes it possible to express fermionic zero
modes of the soliton in terms of bosonic fields.
The properties of the nontopological soliton are investigated using analytical
and numerical methods.
Keywords: nontopological soliton; electric charge; supersymmetry; fermionic zero modes
§ INTRODUCTION
Many models of field theory have solutions that describe spatially localised
and nonspreading field configurations with a finite energy <cit.>.
Nontopological solitons <cit.> represent one of these field
configurations.
A necessary condition for the existence of a nontopological soliton is the
symmetry of the corresponding field model, which may be both global and local.
In addition, the interaction potentials of the model must meet a certain
condition <cit.>.
The symmetry of the model results in the existence of a conserved Noether
charge.
The field configuration of a nontopological soliton is an extremum (minimum
or saddle point) of the energy functional at a fixed value of the Noether
charge, and this basic property largely determines the other properties of a
nontopological soliton; in particular, it leads to the characteristic time
dependence exp( - iω t ) of a soliton field.
Nontopological solitons may be formed during a primordial phase transition,
thus making a contribution to various scenarios of the evolution of the early
Universe <cit.>.
Furthermore, they may play an essential role in baryogenesis via the
Affleck-Dine mechanism <cit.>, and are considered to be
places where dark matter may be concentrated <cit.>.
Some field models with local Abelian symmetry admit the existence of
electrically charged nontopological solitons.
First described in Refs. <cit.>, they have since been
investigated in many other works (see, e.g., Refs. <cit.>).
The properties of electrically charged solitons differ significantly from
those of solitons without an electric charge; in particular, the electric
charge and the energy of a nontopological soliton cannot be arbitrarily large
in the general case <cit.>.
In addition, an electrically charged nontopological soliton can exist only if
the gauge coupling constant does not exceed some maximum value <cit.>.
The main goal of this work is to study a nontopological soliton in a version
of 𝒩 = 1 supersymmetric scalar electrodynamics.
The interaction potential of this model is expressed in terms of a
superpotential, which leads to relations between the nonlinear interaction
constants.
In addition, the superpotential largely determines the form of the
scalar-fermion interaction.
The requirements of renormalisability and gauge invariance impose severe
restrictions on the form of the superpotential, all of which significantly
reduces the number of model parameters compared to the nonsupersymmetric case.
Throughout this paper, we use the natural units c = 1, ħ = 1.
The metric tensor and the Dirac matrices are defined according to
Ref. <cit.>.
§ LAGRANGIAN AND FIELD EQUATIONS OF THE MODEL
The 𝒩 = 1 supersymmetric gauge model under consideration includes
three left-chiral matter superfields Φ_-1, Φ_0, and Φ_+1,
and one Abelian gauge superfield V.
The left-chiral superfield Φ_n contains two components: the complex
scalar field ϕ_n and the left-hand Dirac spinor field ψ_n L.
Written in the Wess-Zumino gauge, the gauge superfield V also contains two
components: the Abelian gauge field A_μ and the Majorana spinor field
λ.
The superfields Φ_n and V also contain auxiliary fields, but these can
be expressed in terms of the above mentioned physical fields.
The Lagrangian of the model takes the form
ℒ = -1/4 F_μν F^μν - ∑_n (D_μϕ_n)^∗ D^μϕ_n - V(ϕ)
  - 1/2 λ̅γ^μ∂_μλ - ∑_n ψ̅_nL γ^μ D_μψ_nL
  - 1/2 ∑_nm { f_nm (ψ_nL^T ϵψ_mL) + f_nm^∗ (ψ_nL^T ϵψ_mL)^∗ }
  + i√(2) ∑_n q_n { ϕ_n (ψ̅_nL λ) - ϕ_n^∗ (λ̅ψ_nL) }.
In Eq. (<ref>), the matrix ϵ = -i γ_0γ_2γ_5,
the Latin indices n and m run over the set [-1, 0, 1], and the covariant
derivatives
D_μϕ_n =∂_μϕ_n-iq_nA_μϕ_n,
D_μψ_n =∂_μψ_n-iq_nA_μψ_n,
where q_n = n e are the Abelian charges of the left-chiral superfield
Φ_n.
To avoid U(1)-U(1)-U(1) and U(1)-graviton-graviton anomalies, the sum
of the U(1) quantum numbers of all left-chiral superfields and the sum of
their cubes should vanish, which is obviously true in our case.
The field-dependent coefficients f_nm and the interaction potential
V(ϕ) are expressed in terms of the superpotential
f(ϕ)=mϕ_-1ϕ_+1+gϕ_-1ϕ_0ϕ_+1,
where m is a mass parameter and g is a coupling constant.
The coefficients f_nm = ∂^2f/ ∂ϕ_n∂ϕ_m,
and the interaction potential
V( ϕ) =∑_n|∂f/∂ϕ_n
|^2+1/2(
∑_nq_nϕ_n^∗ϕ_n)^2
=|m + g ϕ_0 |^2
(|ϕ_-1|^2 +
|ϕ_+1|^2)
+g^2|ϕ_-1|^2|ϕ_+1|^2
+e^2/2( |ϕ_+1|^2-|ϕ_-1|^2)^2.
The field equations of model (<ref>) have the form
∂_μF^μν=j^ν,
D_μD^μϕ_n-∂V/∂ϕ_n^∗-
1/2∑_k' m'f_k' m' n^∗
(ψ_k'L^Tϵψ_m'L) ^∗
-i√(2)q_n( λψ_nL) = 0,
Dψ_nL-∑_m^'f_nm^'^∗ϵ(ψ_m^'L)^T
-i√(2)q_nϕ_nλ_R = 0,
∂λ +i√(2)∑_m^'q_m^'{ϕ _m^'ϵ( ψ
_m^'L) ^T+ϕ _m^'^∗ψ
_m^'L} = 0,
where the coefficients f_k m n = ∂^3f/∂ϕ_k∂ϕ_m∂ϕ_n and the electromagnetic current
j^ν=i∑_nq_nϕ_n^∗⟷D^νϕ_n-i∑_nq_nψ_nLγ^νψ_nL.
Later on, we shall also need the expression for the energy density of an
electrically charged bosonic field configuration of the model
ℰ = 1/2 E_i E_i + ∑_n { (D_tϕ_n)^∗ D_tϕ_n + (D_iϕ_n)^∗ D_iϕ_n } + V(ϕ),
where E_i = F_i 0 are the components of the electric field strength.
§ ANSATZ AND SOME PROPERTIES OF THE NONTOPOLOGICAL SOLITON
The model (<ref>) can be viewed as the Abelian gauge version of a model of
the Wess-Zumino type <cit.>.
In Ref. <cit.>, it was shown that for superpotentials of the type
in Eq. (<ref>), these models admit the existence of nontopological
solitons.
It follows from continuity considerations that nontopological solitons can
also exist in gauge model (<ref>), at least for sufficiently small values
of the gauge coupling constant e.
Let us define the shifted field φ_0(𝐱,t) = mg^-1
+ϕ_0(𝐱,t).
To find a nontopological soliton solution, we shall use the spherically
symmetrical ansatz:
ϕ_+1(𝐱,t) = 2^-1/2 exp(-iω t) f_+1(r),
ϕ_-1(𝐱,t) = 2^-1/2 exp(iω t) f_-1(r),
φ_0(𝐱,t) = 2^-1/2 (χ_1(r) + iχ_2(r)),
A^μ(𝐱,t) = (Φ(r), 0).
The energy density (<ref>), written in terms of the ansatz functions
(<ref>), takes the form
ℰ = 1/2Ω^2(f_-1^2+f_+1^2) +
1/2 Φ^'2
+1/2(f_-1^'2+f_+1^'2+χ_1^'2
+χ_2^'2) + V,
where the interaction potential
V = g^2/4( f_-1^2+f_+1^2)
(χ_1^2+χ_2^2)
+g^2/4f_-1^2f_+1^2+e^2/8
(f_+1^2-f_-1^2)^2,
the function Ω( r ) = ω - eΦ( r ), and
the prime indicates the derivative with respect to r.
The Lagrangian density ℒ differs from the energy density
ℰ only in regard to the sign of the terms in the second line of
Eq. (<ref>).
The electromagnetic current of spherically symmetrical field configuration
(<ref>) is
j^ν=(e Ω(f_-1^2+f_+1^2), 0, 0, 0).
Substituting ansatz (<ref>) into the bosonic parts of field equations
(<ref>) and (<ref>), we obtain a system of nonlinear differential
equations for the ansatz functions:
Ω ^''+2/rΩ ^'-e^2(
f_-1^2+f_+1^2) Ω =0,
f_± 1^''+2/rf_± 1^'+∂ U/∂ f_± 1=0,
χ_1,2^''+2/rχ_1,2^'+∂ U/∂χ _1,2=0,
where the effective potential
U=1/2Ω^2(f_-1^2+f_+1^2)-V.
The regularity of the soliton field configuration and the finiteness of the
soliton energy lead to the following boundary conditions:
f_± 1^'(0) = 0,   f_± 1(r) → 0 as r→∞,
χ_1,2^'(0) = 0,   χ_1,2(r) → χ_1,2 vac as r→∞,
Ω^'(0) = 0,   Ω(r) → ω as r→∞.
The boundary conditions in Eqs. (<ref>) and (<ref>) need some
explanation.
From Eqs. (<ref>) and (<ref>), it follows that the classical vacuum
of model (<ref>) is
F_μν = 0, ϕ_± 1 = 0, ϕ_0 = ϕ_0 vac,
where ϕ _0 vac is an arbitrary complex constant.
From Eq. (<ref>), it follows that model (<ref>) has an infinite
number of vacua at the classical level, as reflected in the boundary condition
in Eq. (<ref>).
All of these vacua are invariant under both the U(1) gauge and 𝒩
= 1 supersymmetry transformations.
According to the non-renormalisation theorems <cit.>, this will also be true when perturbative quantum corrections
are taken into account.
Eqs. (<ref>), (<ref>), and (<ref>) tell us that χ_1
and χ_2 satisfy the same linear homogeneous differential equation,
while Eq. (<ref>) tells us that χ_1 and χ_2 satisfy the
same homogeneous boundary condition at r = 0.
It follows that the ratio χ_2(r)/χ_1(r) does not depend on r, and
is equal to χ_2 vac/χ_1 vac.
The phase of the ansatz function φ_0(r)=2^-1/2(χ_1(r)+iχ_2
(r)) is therefore a constant.
However, from Eqs. (<ref>) and (<ref>), it follows that in this
case, the energy density and the Lagrangian density do not depend on the
phase of φ_0(r).
Without loss of generality, we can set this phase (and hence χ_2(r))
equal to zero.
The field configurations of model (<ref>) are determined up to gauge
transformations.
In particular, the choice of ansatz (<ref>) is equivalent to the choice
of the radial gauge.
However, this gauge does not fix the soliton field configuration completely;
to do this, we need to impose an additional condition Φ(∞) = 0, which
is equivalent to Eq. (<ref>).
The basic property of any nontopological soliton is that it is an extremum of
the energy functional E at a fixed value of some Noether charge Q_N (in
our case E=4π∫_0^∞ℰ(r)r^2dr and Q_N=4
π e^-1∫_0^∞j^0(r)r^2dr).
This property results in the differential relation
dE/dQ_N = Ω_∞,
where Ω_∞≡Ω(∞) = ω - e Φ(∞)=ω.
Note that a similar relation also holds for the electrically charged magnetic
monopoles <cit.>.
Eqs. (<ref>) and (<ref>) tell us that the potentials V and U
are invariant under the permutation f_-1↔ f_+1.
It follows that if (f_-1(r), f_+1(r), χ_1(r), Ω(r)) is a solution of system (<ref>) – (<ref>), then (f_+1(r), f_-1(r), χ_1(r), Ω(r)) is also a solution.
Using qualitative methods for the analysis of differential equations, it can be shown that the solutions f_-1(r) and f_+1(r) coincide when the gauge coupling constant e = 0.
In the following, we define the function δ(r,e^2) = f_+1(r,e^2) - f_-1(r,e^2), where the dependence on the
gauge coupling constant is explicitly indicated and we use the fact that the
potential V depends on e only through e^2.
The function δ(r,e^2) satisfies the nonlinear differential
equation
δ^''+2/rδ^'+[ Ω^2+2^-1g^2(f_-1^2-χ_1^2)-2e^2f_-1^2]
δ
+2^-1(g^2-4e^2)f_-1δ^2
-2^-1e^2δ^3 = 0,
where the dependence of δ, Ω, f_-1, and χ_1 on r and
e^2 is omitted.
From Eq. (<ref>), it follows that δ(r,e^2) satisfies
the boundary conditions
δ^'(0,e^2) = 0,   δ(∞,e^2) = 0.
Our goal is to find the derivatives δ^(n)≡∂^nδ/∂ e^n at e = 0.
To do this, we differentiate Eq. (<ref>) with respect to e, and then
set e = 0.
As a result, we obtain the trivial linear equation
δ^(1)''+2r^-1δ^(1)' = 0.
Its solution must satisfy the boundary conditions in Eq. (<ref>)
differentiated with respect to e, and it is therefore easy to see that the
solution is δ^(1)(r,0) = 0.
Thus, we have established that δ(r, 0) = 0 and δ^(1)
(r,0) = 0.
By continuing to differentiate Eq. (<ref>) with respect to e, setting
e = 0, and taking into account the previous results at each step, it can
be shown that δ^(n)(r, 0) = 0 for any n ≥ 0.
It follows that δ(r, e^2) vanishes, and hence f_+1(
r,e^2) = f_-1(r,e^2) ≡ f(r,e^2).
We now examine the asymptotics of the soliton fields for large r.
Suppose that f(r) tends to zero exponentially as r →∞.
In this case, we can neglect the nonlinear terms in Eqs. (<ref>) and
(<ref>), and obtain the asymptotic forms of Ω(r) and χ_1(r)
as r →∞:
Ω∼ω -e/4πQ/r,
χ _1∼χ _1 vac-1/4πQ_s/r,
where Q = 4 π∫_0^∞j^0(r)r^2dr is the electric
charge of the soliton, and Q_s is the scalar charge defined by analogy with
the large-distance asymptotics Φ∼ Q/(4π r) for the electric potential.
We see that both Ω=ω-eΦ and χ_1 tend rather slowly (∝
r^-1) to their limiting values as r →∞.
It should be noted that nontopological solitons with a long-range scalar field
were studied in Refs. <cit.>.
Furthermore, electrically charged nontopological solitons with a long-range
scalar field were studied in Refs. <cit.>.
By substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>),
retaining the terms linear in f(r), and solving the resulting differential
equation, we obtain the large-distance asymptotics of f(r) as
f( r) ∼f_∞e^-Δr(Δr)^β
×(1-a^2/32π^2Δ^3r
-b/8πΔ^2r),
where f_∞ is a constant,
Δ = ( ω_max^2-ω^2)^1/2,
a = eω_max|Q|-g|ω||Q_s|,
b = e |ω||Q |- g ω_max |Q_s |,
β = -1-b/( 4πΔ),
and the parameter ω_max = 2^-1/2 g |χ_1 vac|.
We see that our assumption about the exponential asymptotics of f(r) turned
out to be correct; we also see that the long-range terms in the asymptotics of
Ω(r) and χ_1(r) modify the pre-exponential factor in the
asymptotics of f(r).
Furthermore, we can conclude that the nontopological soliton cannot exist when
|ω| > ω_max, since in this case asymptotics
(<ref>) shows oscillating behavior, leading to an infinite energy and
charge for the corresponding field configuration.
The presence of two long-range fields in Eqs. (<ref>) and (<ref>)
leads to a modification of the intersoliton interaction potential at large
distances.
It can be shown that in the case of large distances and low velocities, the
leading term of the intersoliton interaction potential is
V_12 = ( Q^(1) Q^(2) - Q_s^(1) Q_s^(2) )/( 4π r_12 ),
where Q^( i ) (Q_s^( i)) is the electric
(scalar) charge of the i-th soliton.
Eq. (<ref>) tells us that the energy of the intersoliton interaction is
the sum of the energies of the Coulomb and scalar interactions.
Depending on the signs of Q^(1) and Q^(2), the Coulomb energy may be
both positive (repulsion) and negative (attraction).
At the same time, it follows from the inhomogeneity of the boundary condition
in Eq. (<ref>) that for the fixed vacuum in Eq. (<ref>), the
scalar charges Q_s^( i ) of the solitons must have
the same sign.
Hence, unlike the Coulomb field, the long-range scalar field always leads to
attraction between solitons.
§ FERMIONIC ZERO MODES
The Lagrangian density (<ref>) is written in the Wess-Zumino gauge,
meaning that the corresponding action S = ∫ℒd^4x is not
invariant under the usual 𝒩 = 1 supersymmetry transformations.
However, it will be invariant under the modified supersymmetry transformations
<cit.>:
δϕ_n = √(2)α_Rψ_nL,
δψ_nL = √(2)γ^μ( D_μϕ_n)
α_R+√(2)ℱ_nα_L,
δA_μ = α̅γ_μλ,
δλ = i 𝒟γ_5α- 1/4 F_μν
[γ^μ,γ^ν] α,
where
α = -i [ ϵ_a; ∑_b e_abϵ_b^∗ ],   [ ϵ_1; ϵ_2 ] = [ ϵ_11+iϵ_12; ϵ_21+iϵ_22 ],
ℱ_n=-(∂ f/∂ϕ_n)^∗,
and
𝒟=e(ϕ_+1^∗ϕ_+1-ϕ_-1^∗ϕ_-1).
In Eq. (<ref>), ϵ_i j are real infinitesimal anticommuting
transformation parameters and e_a b is an antisymmetric 2× 2 matrix
with e_1 2= +1, from which it follows that α in Eq. (<ref>) is
a Majorana spinor.
In Eq. (<ref>), the auxiliary fields ℱ_n are expressed in
terms of superpotential (<ref>), and it is assumed that all the fields
in Eqs. (<ref>)–(<ref>) satisfy field equations
(<ref>)–(<ref>).
Fermionic zero modes are generated by the action of transformations
(<ref>) and (<ref>) on purely bosonic field configuration
(<ref>).
To represent these in a compact form, we introduce a column Ψ consisting
of four fermionic fields included in the Lagrangian (<ref>).
The transposed form of Ψ is
Ψ^T = N(ψ_+1 L^T,ψ_0 L^T,
ψ_-1 L^T,λ^T),
where
ψ _± 1 L=
[ A_± 1f+Cf^'; B_± 1f+Df^'; 0; 0 ]
e^∓ iω t,
ψ _0 L=
[ iϵ _12^-1/2gf^2+Cχ _1^'; iϵ _22^-1/2gf^2+Dχ _1^'; 0; 0 ],
λ =iΦ ^'
[ ϵ _1c+ϵ _2e^-iφs; -ϵ _2c+ϵ _1e^iφs; -ϵ _2^∗c+ϵ _1^∗e^-iφs; -ϵ _1^∗c-ϵ _2^∗e^iφs ],
and N is a normalisation factor.
For brevity, in Eqs. (<ref>)–(<ref>), we use the notation
A_±1 =±iϵ_2^∗Ω+i2^-1/2ϵ_1gχ_1,
B_±1 =∓iϵ_1^∗Ω+i2^-1/2ϵ_2gχ_1,
C = -ϵ_2^∗c+ϵ_1^∗e^-iφs,
D = -ϵ_1^∗c-ϵ_2^∗e^iφs,
where c = cos(θ), s = sin(θ), ϵ_1 = ϵ_11 +
iϵ_12, and ϵ_2 = ϵ_21 + i ϵ_22.
Eqs. (<ref>)–(<ref>) depend linearly on the four anticommuting
parameters ϵ_ij, and hence Eq. (<ref>) can be written as Ψ
= ∑_ijϵ_ijΨ_ij.
It follows that there are four (according to the number of the 𝒩
= 1 supersymmetry generators) independent fermionic zero modes Ψ_ij
expressed in terms of ansatz functions (<ref>).
It can be shown that the components of the fermionic zero modes Ψ_ij
satisfy field equations (<ref>) and (<ref>), provided that
the ansatz functions Ω, f, and χ_1 satisfy
Eqs. (<ref>)–(<ref>).
The fermionic zero modes satisfy the orthonormality condition
∫Ψ_ij^†Ψ_i^'j^'d^3x =
δ_i i^'δ_j j^',
provided that the normalisation factor
N = [ 2π∫_0^∞ [ 4(Φ^'2+f^'2) + 2χ_1^'2 + g^2 f^4 + 2f^2(2Ω^2+g^2χ_1^2) ] r^2 dr ]^-1/2.
From Eq. (<ref>), it follows that the gaugino component λ of the
fermionic zero mode Ψ_ij is proportional to the electric field strength
E_r =-Φ' of the soliton, and therefore decreases rather slowly (∝
r^-2) at large distances.
Furthermore, Eqs. (<ref>) and (<ref>) tell us that at large
distances, the component ψ_0 L∝χ_1' ∼ Q_s/(4 π
r^2).
We see that similarly to the λ component, the ψ_0 L component
of Ψ_i j decreases slowly (∝ r^-2) at large distances.
In contrast, Eqs. (<ref>) and (<ref>) tell us that the two
remaining components ψ_± 1 L of Ψ_i j, which correspond to the
short-range scalar fields ϕ_± 1, decrease exponentially away from the
soliton.
Written in terms of the left-handed fermion fields (including the massless
“neutrino” ψ_0 L), the Lagrangian (<ref>) is not invariant under
the P and C transformations; it is, however, invariant under the combined
CP transformation.
Under the latter transformation, the original soliton solution (f(r)exp(∓ i
ω t), χ_1(r), Φ(r), Ω(r)) of the energy E and
electric charge Q is transformed into an antisoliton solution (f(r)exp(±
i ω t), χ_1(r), -Φ(r), -Ω(r)) of the energy E and
electric charge -Q.
It can be shown that under the CP transformation, the fermionic zero modes
Ψ_ij of the soliton turn into those Ψ̃_ij of the
antisoliton:
[Ψ_11(x)]^CP = -Ψ̃_22(x),
[Ψ_12(x)]^CP = -Ψ̃_21(x),
[Ψ_21(x)]^CP = Ψ̃_12(x),
[Ψ_22(x)]^CP = Ψ̃_11(x).
This is because the CP transformation is a discrete symmetry of the
Lagrangian (<ref>), and hence must convert one fermion-soliton solution
into another.
§ NUMERICAL RESULTS
The system of differential equations (<ref>) – (<ref>) with
boundary conditions (<ref>) represents a mixed boundary value problem
on the semi-infinite interval r∈[0,∞).
To solve this system, we use the numerical methods provided in the Maple
package <cit.>.
Formally, the boundary value problem (<ref>) – (<ref>) depends on
five parameters: ω, m, g, e, and χ_1 vac.
However, it is easily shown that the energy and Noether charge of the soliton
depend nontrivially on only three dimensionless parameters:
E(ω, m, g, e, χ_1 vac) = m g^-2 Ẽ(ω̃, ẽ, χ̃_1 vac),
Q_N(ω, m, g, e, χ_1 vac) = g^-2 Q̃_N(ω̃, ẽ, χ̃_1 vac),
where ω̃ = ω/m, ẽ = e/g, and χ̃_1 vac=χ_1 vac/m.
Hence, without loss of generality, we can set the parameters m and g
equal to unity.
In addition, we set the dimensionless parameter χ̃_1 vac =
2√(2) in these numerical calculations.
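For readers who wish to reproduce the soliton profiles, the reduced boundary value problem (with f_+1 = f_-1 = f and χ_2 = 0) can also be treated with standard open-source tools. The following minimal Python sketch uses scipy.integrate.solve_bvp in place of the Maple routines employed here; the trial frequency ω̃ = 1, the cutoff radius R = 30, and the initial guesses are illustrative assumptions only.

import numpy as np
from scipy.integrate import solve_bvp

# Illustrative sketch (not the Maple code used in the paper): m = g = 1,
# f_{+1} = f_{-1} = f, chi_2 = 0; unknowns y = (f, f', chi_1, chi_1', Omega, Omega').
e, omega, chi_vac, R = 0.1, 1.0, 2.0 * np.sqrt(2.0), 30.0

def rhs(r, y):
    f, fp, chi, chip, Om, Omp = y
    return np.vstack([
        fp,
        -2.0 / r * fp - Om**2 * f + 0.5 * chi**2 * f + 0.5 * f**3,
        chip,
        -2.0 / r * chip + f**2 * chi,
        Omp,
        -2.0 / r * Omp + 2.0 * e**2 * f**2 * Om,
    ])

def bc(ya, yb):
    # regularity at the origin and the vacuum/frequency values at r = R
    return np.array([ya[1], ya[3], ya[5],
                     yb[0], yb[2] - chi_vac, yb[4] - omega])

r = np.linspace(1e-6, R, 500)          # start slightly off r = 0 to avoid the 2/r singularity
y0 = np.zeros((6, r.size))
y0[0] = np.exp(-r)                     # crude initial guess for f(r)
y0[2] = chi_vac * (1.0 - np.exp(-r))   # crude initial guess for chi_1(r)
y0[4] = omega
sol = solve_bvp(rhs, bc, r, y0, tol=1e-8, max_nodes=20000)
print(sol.status, sol.message)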
Figure <ref> shows the dependence of the soliton energy Ẽ on the
phase frequency ω̃ for several values of the gauge coupling
constant ẽ.
We see that for each ẽ, the phase frequency ω̃∈(
ω̃_min(ẽ), ω̃_max], where ω̃_max = 2^-1/2χ̃_1 vac = 2.
As ẽ decreases, the minimum allowable frequency ω̃_min(ẽ) falls monotonically, reaching the limiting value ω̃_min(0) = 0.
Using numerical methods, we can show that as ω̃→ω̃_min(ẽ), the soliton energy
Ẽ(ω̃,ẽ) ∼ a(ẽ)
(ω̃-ω̃_min(ẽ))^-2,
where a(ẽ) is a function of ẽ.
It follows that the soliton energy increases indefinitely as ω̃→ω̃_min(ẽ).
On the other hand, ω̃_min(ẽ) monotonically increases
with ẽ, meaning that there is a limiting value ẽ_max
for which ω̃_min(ẽ_max) = ω̃_max.
It follows that the nontopological soliton can exist only when ẽ∈[0, ẽ_max).
In the subplot in Fig. <ref>, we can see the curves Ẽ(ω̃,ẽ) in the vicinity of the maximum allowable phase frequency
ω̃_max.
All the curves Ẽ(ω̃, ẽ) in the subplot tend
to zero as ω̃→ω̃_max.
It has been found numerically that as ω̃→ω̃_max, the soliton energy
Ẽ(ω̃,ẽ)≈ b(ẽ)(ω̃_max-ω̃)^1/2,
where b(ẽ) is an increasing function of ẽ.
According to Eq. (<ref>), the curves Q̃_N(ω̃,
ẽ) are related to the curves Ẽ(ω̃, ẽ) by
the integral relation Q̃_N( ω̃, ẽ)
=-∫_ω̃^ω̃_maxτ^-1∂_τẼ( τ ,ẽ) dτ.
It follows that the curves Q̃_N(ω̃, ẽ) will
be similar to the curves Ẽ(ω̃, ẽ) shown in
Fig. <ref>; in particular, the behavior of the curves Q̃_N(
ω̃,ẽ) in the neighborhoods of ω̃_min and
ω̃_max is the same as that of the curves Ẽ(ω̃, ẽ).
Figure <ref> shows the dependence of the soliton energy Ẽ on
the Noether charge Q̃_N for several values of the gauge coupling
constant ẽ.
In Fig. <ref>, the black dashed line Ẽ = ω̃_maxQ̃_N corresponds to the energy of a plane-wave configuration with
a given Noether charge Q̃_N.
We see that for all values of ẽ considered here, the energies of the
solitons with a given Q̃_N are lower than the energy of the
corresponding plane-wave configuration.
It follows that these solitons are stable against decay into massive charged
ϕ-mesons.
We have established that the energy Ẽ(ω̃,ẽ) and
the Noether charge Q̃_N(ω̃, ẽ) of the soliton
increase indefinitely as ω̃→ω̃_min
(ẽ).
In view of this, it would be interesting to explore the behavior of the soliton
fields in this limit.
To do this, we define the dimensionless profile functions f̃(r̃)
= m^-1 g f(r), χ̃_1(r̃) = m^-1 g χ_1(r), and
Φ̃(r̃) = m^-1 g Φ(r), where r̃ = m r.
We also define the dimensionless energy density ℰ̃(r̃)
= m^-4 g^2ℰ(r) and the dimensionless Noether charge density
j̃_N^0(r̃) = m^-3 g^2 j_N^0(r).
Figure <ref> shows these dimensionless functions for parameter values
ẽ = 0.1 and ω̃ = 0.32214.
Note that ω̃=0.32214 is the minimum value of the phase frequency that we
were able to reach by numerical methods for ẽ = 0.1.
We see that only f̃(r̃) and j̃^0_N(r̃) are
localised, whereas Φ̃(r̃), χ̃_1(r̃),
and ℰ̃(r̃) are long-range, which is consistent
with the asymptotic forms in Eqs. (<ref>), (<ref>), and
(<ref>).
We also see that χ̃_1(r̃) ≈ 0 in the interior of the
soliton.
The long-range character (∝ r^-4) of the energy density ℰ̃ arises from the gradient of the long-range electric potential
Φ̃ and the gradient of the long-range neutral scalar field
χ̃_1.
According to Eq. (<ref>), the local character of the charge density
j̃_N^0 is due to the local character of the function f̃.
Note that the electrostatic repulsion causes the electric charge density to
increase near the surface of the soliton.
Eq. (<ref>) tells us that the asymptotics of χ̃_1 is
characterised by the scalar charge Q̃_s = g Q_s.
Using numerical methods, we find that similarly to the energy Ẽ and
the Noether charge Q̃_N, the scalar charge
Q̃_s(ω̃,ẽ)
∝(ω̃-ω̃_min(ẽ))^-2
as ω̃→ω̃_min.
However, unlike the Noether (electric) charge Q_N (Q = e Q_N), the
scalar charge Q_s is defined only through the large-distance asymptotics and is
not related to any symmetry of model (<ref>).
§ CONCLUSION
In the present paper, we show that an electrically charged nontopological
soliton exists in a version of 𝒩 = 1 supersymmetric scalar
electrodynamics.
A characteristic feature of this soliton is the presence of two long-range
fields, which slowly (∝ r^-1) tend to limiting values: these are the
electrostatic Coulomb field, and the electrically neutral massless scalar
field.
The presence of these two long-range fields leads to a modification of the
intersoliton interaction in comparison with the purely Coulomb case.
Another feature of the soliton is that its energy and electric charge take
arbitrarily large values when the modulus of the phase frequency tends to the
minimum possible value.
In contrast, the energy and electric charge of the soliton vanish when the
modulus of the phase frequency tends to the maximum possible value.
We note that in the general case, the energy and electric charge of a
nontopological soliton cannot be arbitrarily large due to Coulomb repulsion
<cit.>.
We avoid this restriction because the attraction due to the massless scalar
field compensates for the Coulomb repulsion.
A similar situation also arises in the massless limit of the gauged
Friedberg-Lee-Sirlin model <cit.>.
It is also worth noting that the electric charge and energy of the dyon
(electrically charged magnetic monopole) also cannot be arbitrarily large in
the general non-BPS case <cit.>.
Only in the BPS limit, when the scalar field of the dyon becomes massless, can
the energy and electric charge take arbitrarily large values.
The 𝒩=1 supersymmetry of the model makes it possible to obtain
expressions for the fermionic zero modes in terms of bosonic fields of the
soliton.
The fermionic zero modes are bound states of the fermion-soliton system,
and their components that correspond to the long-range bosonic fields are also
long-range.
In accordance with the number of 𝒩=1 supersymmetry generators, the
number of independent fermionic zero modes of the soliton is four.
The fermionic zero modes of two solitons with opposite electric charges are
related by the CP transformation.
In this work, we have investigated a nontopological soliton of an 𝒩
= 1 supersymmetric Abelian gauge model.
It is known <cit.>, however, that nontopological solitons can
also exist in non-Abelian gauge models.
In particular, it was shown in Ref. <cit.> that an electrically
charged nontopological soliton exists in the Weinberg-Salam model of
electroweak interactions.
This model allows for 𝒩 = 1 supersymmetric extension, and its
fermionic sector contains both massive (e, μ, τ) and massless
(ν_e, ν_μ, ν_τ) fermions.
The bosonic superpartners of the neutrinos (sneutrinos) also have zero masses.
We can assume that, similarly to the nonsupersymmetric case
<cit.>, an electrically charged nontopological soliton also
exists in this model, meaning that some properties of this soliton will be
similar to those studied in this work.
In particular, in addition to the long-range Coulomb field, this soliton will
have long-range fields of massless sneutrinos.
Furthermore, it will be possible to express the fermionic zero modes of this
soliton in terms of its bosonic fields.
§ ACKNOWLEDGEMENTS
This work was supported by the Russian Science Foundation, grant No 23-11-00002.
§ FIGURE CAPTIONS
Fig. 1. Dependence of the soliton energy Ẽ on the phase frequency
ω̃ for several values of the gauge coupling constant ẽ.
Fig. 2. Dependence of the soliton energy Ẽ on the Noether
charge Q̃_N for several values of the gauge coupling constant
ẽ.
Fig. 3. Scaled dimensionless functions 10 ×f̃
(r̃) (solid black), Φ̃(r̃) (solid red), χ̃_1(r̃) (solid blue), 10^4×ℰ̃(r̃)
(dashed orange), and 10^3×j̃_N^0(r̃) (dashed brown).
The functions correspond to the parameters ẽ=0.1 and ω̃
= 0.32214.
http://arxiv.org/abs/2307.00627v1
Published: 2023-07-02 17:52:20
Title: New Bounds for Time-Dependent Scheduling with Uniform Deterioration
Authors: Angelos Gkikas, Dimitrios Letsios, Tomasz Radzik, Kathleen Steinhöfel
Affiliations: King's College London, United Kingdom; Ocado Technology, Hatfield, AL10 9UL, United Kingdom
Primary category: cs.DS; Categories: cs.DS, math.OC
Time-dependent scheduling with linear deterioration involves determining when to execute jobs whose processing times degrade as their beginning is delayed.
Each job i is associated with a release time r_i and a processing time function p_i(s_i)=α_i + β_i· s_i, where α_i,β_i>0 are constants and s_i is the job's start time.
In this setting, the approximability of both single-machine minimum makespan and total completion time problems remains open.
Here, we take a step forward by developing new bounds and approximation results for the interesting special case of the problems with uniform deterioration, i.e. β_i=β, for each i.
The key contribution is an O(1+1/β)-approximation algorithm for the makespan problem and an O(1+1/β^2)-approximation algorithm for the total completion time problem.
Further, we propose greedy constant-factor approximation algorithms for instances with β=O(1/n) and β=Ω(n), where n is the number of jobs.
Our analysis is based on a new approach for comparing computed and optimal schedules via bounding pseudomatchings.
Keywords: Time-Dependent Scheduling; Linear Deterioration; Approximation Algorithms
§ INTRODUCTION
Single-machine scheduling problems involve deciding when to process a set 𝒥={1,…,n} of n jobs arriving over time, i.e. each job i∈𝒥 is associated with a release time r_i∈ℝ^+, using a machine that may execute at most one job per time so as to optimize some objective function f(C⃗), e.g. the makespan max_i∈𝒥{C_i} or the total completion time ∑_i∈𝒥C_i, where C⃗=(C_1,…,C_n) is the vector of job completion times.
Prior literature largely assumes that each job i∈𝒥 has a fixed processing time p_i∈ℝ^+.
However, this assumption can be fairly strong.
In various contexts, e.g. production scheduling with machine degradation and delivery scheduling in road networks with varying traffic, the time at which the execution of a job begins significantly affects its processing time.
Scheduling problems where the processing time of each job i∈𝒥 is a function p_i(s_i) of its start time s_i are typically referred to as time-dependent scheduling problems.
Previous work investigates time-dependent scheduling problems with processing time functions p_i(s_i)=α_i+β_i s_i, where α_i∈ℝ^+ is the fixed part and β_i· s_i is the variable part depending on the deterioration rate β_i∈ℝ^+ and the start time s_i, for each job i∈𝒥.
Such problems are referred to as scheduling with linear deterioration and model settings where delaying the beginning of a job execution by one unit of time results in an increase of the job's processing time by β_i units of time.
In this context, the single-machine problems of minimizing the makespan and the total completion time are open from an algorithmic viewpoint.
Using the standard 3-field scheduling notation, the problems can be denoted as 1|r_i,p_i(s_i)=α_i+β_i· s_i|C_max and 1|r_i, p_i(s_i)=α_i+β_i· s_i|∑ C_i.
When all jobs have equal release times, the makespan problem is polynomially solvable <cit.>, while the complexity of the total completion time problem is unknown and conjectured to be 𝒩𝒫-hard <cit.>.
When the jobs have arbitrary release times, both problems are known to be strongly 𝒩𝒫-hard <cit.>.
The best known algorithms are based on iterative subproblem decomposition, e.g. dynamic programming and branch-and-bound <cit.>, but have exponential running times.
The two problems have attracted attention in the special cases with
(1) proportional linear deterioration, i.e. p_i(s_i)=β_i· s_i (equivalently, α_i=0), for each i∈𝒥, and (2) fixed processing times, i.e. p_i(s_i)=α_i (equivalently, β_i=0), for each i∈𝒥.
In the former case, the makespan problem 1|r_i,p_i(s_i)=β_i· s_i|C_max is optimally solvable in O(nlog n) time <cit.>, while the best known algorithm for the total completion time problem 1|r_i,p_i(s_i)=β_i· s_i|∑ C_i is (1+β_max)-approximate <cit.>.
In the latter case, the makespan problem is polynomially solvable via a greedy algorithm <cit.>, while the total completion time problem is strongly 𝒩𝒫-hard, admitting a Polynomial-Time Approximation Scheme (PTAS) and greedy constant-factor approximation algorithms <cit.>.
In addition to the above, there exist various complexity and approximation results for problem generalizations (e.g. multiprocessor environments <cit.>), relaxations (e.g. preemptive versions <cit.>), and variants (e.g. step and position-dependent processing time functions <cit.> and uncertainty <cit.>).
Surveys relevant to time-dependent scheduling algorithms can be found in <cit.>.
Table 1 summarizes results closely related to this manuscript.
Last but not least, there exist investigations on interesting time-dependent scheduling applications, including production and delivery scheduling <cit.>, defending aerial threats <cit.>, fire fighting <cit.>, and personnel scheduling <cit.>.
Contributions and paper organization
Despite the aforementioned literature, the approximability of the single-machine time-dependent scheduling problems with jobs arriving over time, linear deterioration, the makespan and total completion time objectives remains unsettled.
This manuscript takes a step forward in this direction by focussing on the special case with uniform deterioration, i.e. the problems 1|r_i,p_i(s_i)=α_i+β· s_i|C_max and 1|r_i,p_i(s_i)=α_i+β· s_i|∑ C_i.
To the authors' knowledge, no approximation algorithms are known for those.
Our main contribution is the analysis of greedy algorithms and the derivation of approximation results based on a new approach for bounding time-dependent scheduling problems using pseudomatchings.
The first part of the manuscript (Sections 2-5) is devoted to the makespan problem and the last part (Section 6) covers the total completion time problem.
In more detail, the manuscript proceeds as follows.
Section <ref> formally describes the problem and expresses the makespan of a feasible schedule as a function of the job processing times and idle periods.
Section <ref> introduces our pseudomatching concepts and demonstrates their bounding properties.
Section <ref> analyzes two basic algorithms that we call Non-Idling and Non-Interfering <cit.>.
The former avoids idle periods by always scheduling a pending job when the machine becomes available, while the latter introduces idle periods (i.e. delays pending jobs) to prioritize (i.e. avoid interfering with) jobs that arrive later[A job is pending if it has been released, but has not been processed.].
Non-Idling optimally solves the problems 1|r_j,p_j(s_j)=β_j· s_j|C_max and 1|r_j,p_j(s_j)=α_j|C_max and is similar to other standard scheduling algorithms, e.g. List Scheduling and First-Come First-Served.
Non-Interfering extends the standard Shortest Processing Time First algorithm, which is optimal for 1|p_j(s_j)=α_j+β· s_j|C_max instances with a single release time, to instances with arbitrary release times.
On the positive side, we prove that the two algorithms achieve constant approximation ratios for the special cases of our problem with β≤ 1/n and β≥ n+1, respectively.
On the negative side, we show that both algorithms are Ω((1+β)^n)-approximate in the worst case.
The above negative result demonstrates an interesting degeneracy of the problem: misplacing just a single job may result in severe solution quality degradation.
Despite this pathological finding, Section <ref> shows the existence of a time-dependent priority policy, namely Earliest Completion Time First, that achieves a (3+1/β)-approximation ratio for 1|r_j,p_j(s_j)=α_j+β· s_j|C_max.
Next, we turn our attention to the total completion time objective.
Prior literature derives equivalence relationships between the makespan and the sum of completion times.
For single-machine instances with fixed processing times, an optimal schedule for makespan is 2-approximate for the total completion time and vice versa <cit.>.
We extend this approximation equivalence relationship to the time-dependent scheduling context.
On one hand, we show that any ρ-approximation algorithm for the total completion time problem is (1+ρ)-approximate for minimizing makespan.
On the other hand, we show that any ρ-approximation algorithm for makespan is (1+1/β)ρ-approximate for the total completion time.
This last finding implies the existence of a O(1+1/β^2)-approximation algorithm for the total completion time problem.
§ PRELIMINARIES
Next, we formally define the time-dependent scheduling problem with uniformly deteriorating processing times and express the makespan of a feasible schedule as a weighted sum of the fixed processing times and gap lengths.
§.§ Problem Definition.
An instance of the problem consists of a set 𝒥={1,…,n} of jobs that have to be executed by a single machine that may execute at most one job at a time.
Each job must be executed non-preemptively, i.e. in a continuous time interval without interruptions until it completes.
For each job i∈𝒥, a linearly increasing function p_i(s_i)=α_i+β· s_i
specifies the processing time of i if it begins at time s_i.
We refer to the terms α_i and β· s_i as fixed and variable processing time, respectively, where β>0 is a constant rate at which p_i(s_i) is increased per unit of time that the start time s_i is delayed.
Given two jobs i,j∈𝒥, if α_i<α_j, then we say that i is shorter than j and that j is longer than i.
Job i∈𝒥 is released at time r_i, i.e. may only begin processing at a time s_i≥ r_i.
W.l.o.g. r_min=min_i∈𝒥{r_i}=0.
Let C_i be the completion time of job i, i.e. C_i=α_i+(1+β)s_i.
The objective is to find a feasible schedule such that the makespan, i.e. the time T=max_i∈𝒥{C_i} at which the last job completes, is minimized.
Given a time t, denote by 𝒫(t) the set of pending jobs, i.e. the ones which have been released but have not begun processing before t.
At each time t that the machine becomes available, a feasible schedule specifies the next job to begin from time t and onward.
§.§ Makespan Expression.
Due to release times, optimal schedules may require gaps, i.e. maximal idle time intervals during which no job is processed (Figure <ref>).
Consider a feasible schedule 𝒮 and number the jobs in increasing order s_1<…<s_n of their start times.
Denote the gap between jobs i-1 and i by q_i=s_i-C_i-1, for i∈{1,…,n}, where C_0=0.
If q_i=0, then there is no idle period between jobs i-1 and i.
Lemma <ref> expresses the makespan of a feasible schedule w.r.t. gaps and fixed processing times.
This is an adaptation of standard expressions in the time-dependent scheduling literature <cit.>, but now accounts for gaps caused by release times.
Lemma <ref> derives an alternative expression of the fixed processing time contributions to the makespan.
Consider a feasible schedule 𝒮 and suppose that the jobs are numbered in increasing order of their start times in 𝒮, i.e. s_1≤…≤ s_n.
Then, the makespan of 𝒮 is:
T=∑_i=1^n(1+β)^n-i+1q_i+∑_i=1^n(1+β)^n-iα_i
We show by induction on k∈{1,…,n} that C_k=∑_i=1^k(1+β)^k-i[(1+β)q_i+α_i].
For the induction basis, it clearly holds that C_1=(1+β)q_1+α_1, since job 1 begins at time s_1=q_1.
For the induction step, suppose that the equality is true with index k-1.
Using the fact that s_k=C_k-1+q_k and the induction hypothesis:
C_k = (1+β)[C_k-1+q_k] + α_k
= (1+β)[∑_i=1^k-1(1+β)^(k-1)-i[(1+β)q_i+α_i] + q_k] + α_k
= ∑_i=1^k(1+β)^k-i[(1+β)q_i+ α_i]
Lemma <ref> has the following implications.
First, if all jobs begin at time t and are executed without any gap between them, then they complete at T = (1+β)^nt+∑_i=1^n(1+β)^n-iα_i.
Second, when all jobs have equal release times, there exists always an optimal schedule without gaps and greedily scheduling the jobs in non-decreasing order α_1≤…≤α_n of their fixed processing times is optimal <cit.>.
Third, for any subset 𝒥'={γ(1),…,γ(k)} of jobs sorted in non-decreasing order α_γ(1)≤…≤α_γ(k) of their fixed processing times and executed consecutively without gaps starting at time t, we get the lower bound T≥ (1+β)^kt+ ∑_i=1^k(1+β)^k-iα_γ(i) on the makespan of any feasible schedule.
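As a quick numerical illustration of Lemma <ref> (a hypothetical helper, not part of the paper; the names alpha, q, beta are ours), the closed-form makespan can be checked against a step-by-step simulation of a schedule given as a sequence of fixed processing times and preceding gaps.

def makespan_closed_form(alpha, q, beta):
    # T = sum_i (1+beta)^(n-i+1) q_i + sum_i (1+beta)^(n-i) alpha_i, jobs indexed 1..n by start time
    n = len(alpha)
    return sum((1 + beta) ** (n - i + 1) * q[i - 1] + (1 + beta) ** (n - i) * alpha[i - 1]
               for i in range(1, n + 1))

def makespan_simulated(alpha, q, beta):
    # s_i = C_{i-1} + q_i and C_i = (1+beta) * s_i + alpha_i, with C_0 = 0
    c = 0.0
    for a, gap in zip(alpha, q):
        c = (1 + beta) * (c + gap) + a
    return c

# Example: three jobs, an idle period of length 0.5 before the second job.
alpha, q, beta = [2.0, 1.0, 3.0], [0.0, 0.5, 0.0], 0.25
assert abs(makespan_closed_form(alpha, q, beta) - makespan_simulated(alpha, q, beta)) < 1e-9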
Consider a feasible schedule 𝒮 and number the jobs in increasing order s_1≤…≤ s_n of their start times in 𝒮.
Then 𝒮 has fixed processing time cost ∑_i=1^n(1+β)^n-iα_i=∑_i=1^nα_i+∑_k=2^nβ(1+β)^n-k(∑_i=1^k-1α_i).
Consider a job i∈𝒥.
Because i is scheduled in the i-th position of 𝒮, its execution increases the start time, and therefore the processing time, of every job in the set {i+1,…,n}.
Specifically, the processing time of job i+1 is increased by βα_i, of job i+2 by β(1+β)α_i and so on.
That is, the processing time of job n is increased by β(1+β)^n-i-1α_i.
Hence, the overall contribution of α_i to the makespan is (1+∑_j=1^n-iβ(1+β)^j-1)α_i=(1+β)^n-iα_i.
Using this geometric series sum and Lemma <ref>, the fixed processing time contribution to 𝒮 can be expressed as A=∑_i=1^nα_i+∑_k=1^n-1(∑_i=1^n-kβ(1+β)^n-k-i)α_k.
Next, we rearrange the sum so that fixed processing time terms α_i with the same weight β(1+β)^n-k are grouped together, for i∈𝒥 and k∈{2,…,n}.
That is, A=∑_i=1^nα_i+∑_k=2^nβ(1+β)^n-k(∑_i=1^k-1α_i).
§ BOUNDING PSEUDOMATCHINGS
To analyze the performance of our algorithms for 1|r_j,p_j(t)=α_j+β· t|C_max, we need an approach for upper and lower bounding the (fixed processing time) load completed by a feasible and an optimal schedule, respectively, up to any time t.
To this end, we introduce the ρ-pseudomatching and weak pseudomatching concepts that allow bounding sums and geometric series incorporating the β parameter, respectively.
Definition <ref> defines the so-called bounding graph that is used for comparing schedules computed by algorithms with optimal schedules.
Definition <ref> and Lemma <ref> summarize a core argument used for analyzing the Non-Interfering algorithm (Section 4).
Definition <ref> and Lemma <ref> describe a main argument in the analysis of the Earliest Completion Time First algorithm (Section 5).
The main technical difficulty in deriving approximation bounds (Sections 4-5) is showing the existence of these pseudomatchings for a schedule computed by an algorithm.
Let 𝒜={a_1,…,a_k} and 𝒪={o_1,…,o_k} be two equal-cardinality indexed sets of positive real numbers.
We refer to the complete bipartite graph G=(𝒜∪𝒪,𝒜×𝒪) as the bounding graph of 𝒜 and 𝒪.
Given two equal-cardinality indexed sets 𝒜 and 𝒪 of positive real numbers and their bounding graph G, we say that a subset M⊆𝒜×𝒪 of edges is a ρ-pseudomatching if the following properties hold:
4.1 Each node a_i∈𝒜 appears exactly once as an endpoint of an edge in M.
4.2 Each node o_j∈𝒪 appears at most ρ times as an endpoint of an edge in M.
4.3 For each (a_i,o_j)∈ M, it holds that a_i≤ o_j.
Consider two equal-cardinality indexed sets 𝒜 and 𝒪 of positive real numbers.
If the corresponding bounding graph G admits a ρ-pseudomatching M, then:
∑_a_i∈𝒜a_i≤ρ[∑_o_j∈𝒪o_j].
Denote by 𝒜_j={a_i:(a_i,o_j)∈ M} the subset of 𝒜 elements matched with element o_j∈𝒪 in M.
Because of Property 4.1, each a_i is matched exactly once, thus ∑_a_i∈𝒜a_i=∑_o_j∈𝒪∑_a_i∈𝒜_ja_i.
Due to Properties 4.2-4.3, we have that ∑_a_i∈𝒜_ja_i≤ρ· o_j, for each o_j∈𝒪.
Therefore, we conclude that ∑_a_i∈𝒜a_i≤ρ[∑_o_j∈𝒪o_j].
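For concreteness, the following small helper (an illustration, not part of the paper) checks Properties 4.1-4.3 for a candidate edge set, so that the bound of Lemma <ref> applies; list positions play the role of the indices of 𝒜 and 𝒪.

def is_rho_pseudomatching(A, O, M, rho):
    # A, O: value lists; M: set of index pairs (i, j) encoding the edges (a_i, o_j)
    deg_a, deg_o = [0] * len(A), [0] * len(O)
    for i, j in M:
        if A[i] > O[j]:                 # Property 4.3
            return False
        deg_a[i] += 1
        deg_o[j] += 1
    return all(d == 1 for d in deg_a) and all(d <= rho for d in deg_o)  # Properties 4.1-4.2

# If the check passes, the lemma gives sum(A) <= rho * sum(O).
A, O, M = [1.0, 2.0], [2.0, 3.0], {(0, 1), (1, 1)}
assert is_rho_pseudomatching(A, O, M, rho=2) and sum(A) <= 2 * sum(O)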
Given two equal-cardinality indexed sets 𝒜 and 𝒪 of positive real numbers and their bounding graph G, we say that a subset M⊆𝒜×𝒪 of edges is a weak pseudomatching if the following hold:
6.1 Each node a_i∈𝒜 appears exactly once as an endpoint of an edge in M.
6.2 For each (a_i,o_j)∈ M, it holds that i>j and a_i≤ o_j.
Consider two equal-cardinality indexed sets 𝒜 and 𝒪 of positive real numbers.
If the corresponding bounding graph G admits a weak pseudomatching M, then:
∑_a_i∈𝒜(1+β)^n-ia_i≤(1+1/β)[∑_o_j∈𝒪(1+β)^n-jo_j].
Denote by 𝒜_j={a_i:(a_i,o_j)∈ M} the subset of 𝒜 elements matched with element o_j∈𝒪 in M.
By Property 6.1, it must be the case that ∑_a_i∈𝒜(1+β)^n-ia_i=∑_o_j∈𝒪∑_a_i∈𝒜_j(1+β)^n-ia_i.
Due to Property 6.2, we have that ∑_a_i∈𝒜_j(1+β)^n-ia_i≤∑_i=j+1^n(1+β)^n-imax_a_i∈𝒜_j{a_i}≤∑_i=j+1^n(1+β)^n-io_j≤ (1+1/β)(1+β)^n-jo_j, for each o_j∈𝒪, where the last inequality follows from a standard geometric series sum calculation.
We conclude that
∑_a_i∈𝒜(1+β)^n-ia_i ≤(1+1/β)(∑_o_j∈𝒪(1+β)^n-jo_j).
§ TWO BASIC ALGORITHMS
This section investigates the two greedy non-interfering and non-idling algorithms that have been proposed for special cases and variants of our problem.
We show that the non-interfering algorithm achieves a constant approximation ratio for instances with β≥ n+1, but is Ω((1+β)^n)-approximate for general instances.
Next, we argue that the non-idling algorithm attains a constant factor approximation ratio for instances with β≤1/n, but is Ω((1+β)^n)-approximate for arbitrary instances.
Finally, we prove that returning the best of the two schedules computes a 2-approximate solution for instances with two distinct release times.
§.§ Non-Interfering Algorithm
Given a feasible schedule 𝒮 for an instance 𝒥 of the problem and two jobs i,j∈𝒥, we say that job i interferes with job j at time t in 𝒮 if s_i=t, α_i>α_j and t<r_j< (1+β)t+α_i,
i.e. the situation where a longer job i begins at a time t before the release time r_j of a shorter job j in 𝒮 and i completes after r_j, which could be avoided with an idle period during [t,r_j).
Clearly, jobs i and j have start times s_i<s_j in 𝒮.
In such a case, we say that i is an interfering job in 𝒮.
Algorithm <ref> constructs a schedule without interfering jobs.
[Non-Interfering]
At each time t that the machine becomes available, schedule a pending job i∈arg min_k∈𝒫(t){α_k} with minimal fixed processing time, unless this job is interfering, i.e. there exists a job j such that α_j<α_i and t<r_j<(1+β)t+α_i.
In this case, introduce an idle period during [t,r_j) and proceed with time t=r_j.
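A compact Python sketch of this rule (an illustration under the assumption that the input is a list of (α_i, r_i) pairs; all names are ours) is the following.

def non_interfering(jobs, beta):
    # jobs: list of (alpha_i, r_i); returns (makespan, list of (job index, start time)).
    t, schedule = 0.0, []
    unscheduled = set(range(len(jobs)))
    while unscheduled:
        pending = [i for i in unscheduled if jobs[i][1] <= t]
        if not pending:
            t = min(jobs[i][1] for i in unscheduled)   # nothing pending: wait for next release
            continue
        i = min(pending, key=lambda k: jobs[k][0])     # shortest fixed processing time
        finish = (1 + beta) * t + jobs[i][0]
        # i is interfering if a shorter job would be released while i runs
        releases = [jobs[j][1] for j in unscheduled
                    if jobs[j][0] < jobs[i][0] and t < jobs[j][1] < finish]
        if releases:
            t = min(releases)                          # introduce an idle period until that release
            continue
        schedule.append((i, t))
        unscheduled.remove(i)
        t = finish
    return t, schedule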
Next, we proceed with Lemma <ref> and Observation <ref>, which simplify the proof of Lemma <ref> (as we do not need to account for gaps).
Starting from an instance 𝒥, Lemma <ref> defines another instance 𝒥̃ such that the non-interfering schedules execute the jobs in the same order in the two instances and the non-interfering schedule for 𝒥̃ does not contain gaps.
Consider an arbitrary instance 𝒥={1,…,n} for which the non-interfering algorithm produces a schedule 𝒮 with gaps.
Number the jobs in increasing order s_1<…<s_n of their start times in 𝒮.
Starting from 𝒥, construct a different instance 𝒥̃ with the same number of jobs, i.e. |𝒥̃|=|𝒥|.
Each job k∈𝒥̃ has fixed processing time α̃_k=α_k and release time r̃_k=min{r_k,∑_i=1^k-1(1+β)^(k-1)-iα_i}, where α_k and r_k are the original parameters of 𝒥.
The non-interfering algorithm executes the jobs in the same order in 𝒥 and 𝒥̃, and produces a schedule without gaps, i.e. q_i=0, for each i∈{1,…,n}, for 𝒥̃.
Starting from 𝒮, the new problem instance 𝒥̃ is constructed so that |𝒥̃|=|𝒥|, by rounding release times down.
In particular, we set a new release time r̃_k=min{r_k,∑_i=1^k-1(1+β)^(k-1)-iα_i} and fixed processing time α̃_k=α_k, for each k∈𝒥̃.
Consider the schedule 𝒮̃ for 𝒥̃ obtained by executing the jobs in the same order as in 𝒮, but without gaps, and denote the makespan of 𝒮̃ by T̃.
By construction, job k∈𝒥̃ has start time s̃_k=∑_i=1^k-1(1+β)^(k-1)-iα̃_i≥r̃_k in 𝒮̃, i.e. the new release times are not violated.
Next, consider any pair k,ℓ∈𝒥̃ of jobs such that k<ℓ and α̃_k>α̃_ℓ.
Since 𝒮 is non-interfering, we have that C_k<r_ℓ, which implies that ∑_i=1^k-1(1+β)^(k-1)-iα_i≤ r_ℓ.
Given that k<ℓ, we conclude that C̃_k≤r̃_ℓ, i.e. 𝒮̃ is non-interfering for 𝒥̃.
Consider an arbitrary order γ(·) of the jobs, where γ(k)∈𝒥 is the job in the k-th position of the order, for k∈{1,…,n}.
Suppose that there exists an optimal schedule 𝒮^* executing the jobs according to γ(·), i.e. s_γ(1)^*<…<s_γ(n)^*, where s_i^* is the start time of job i∈𝒥 in 𝒮^*.
W.l.o.g. it holds that s_γ(i)=max{r_γ(i),C_γ(i-1)}, for i∈{1,…,n}, where C_γ(0)=0.
Number the jobs in increasing order s_1<…<s_n of their start times in the schedule 𝒮 produced by the non-interfering algorithm.
Job k∈𝒥 is executed in the k-th position of 𝒮.
Next, consider an optimal schedule 𝒮^* and let γ(k)∈𝒥 be the job executed in the k-th position of 𝒮^*, for k∈{1,…,n}.
We say that k∈𝒥 is a critical job if C_k≤ C_γ(k)^*, where C_i^* is the completion time of job i∈𝒥 in 𝒮^*.
Lemma <ref> upper bounds the fixed processing times of jobs executed after the last critical job in 𝒮 based on pseudomatchings.
Consider a non-interfering schedule 𝒮 and let ℓ=max{k:C_k≤ C_γ(k)^*, k∈𝒥} be the last critical job.
For each k∈{ℓ+1,…,n}, it holds that ∑_i=ℓ+1^kα_i≤2[∑_j=1^kα_γ(j)].
To prove the lemma, we may assume w.l.o.g. that 𝒮 does not contain gaps.
Otherwise, if 𝒮 contains gaps, starting from the original instance 𝒥, we may consider the modified instance 𝒥̃ obtained according to Lemma <ref>.
Using Observation <ref> and the orders of the jobs in the non-interfering schedule 𝒮 and in an optimal schedule 𝒮^* for 𝒥, we can obtain two feasible schedules 𝒮̃ and 𝒮̃^*, respectively, for 𝒥̃.
By Lemma <ref>, 𝒮̃ is the schedule produced by the non-interfering algorithm for 𝒥̃ and does not contain any gaps.
Therefore, proving the lemma with 𝒮̃ and 𝒮̃^* implies that the lemma holds for the original instance 𝒥.
We note that the optimal schedule may change for 𝒥̃, but this does not affect our argument since we compare a non-interfering schedule without gaps with an arbitrary feasible schedule.
In the remainder of the proof, assume that q_i=0, for each i∈𝒥, in 𝒮.
For each k∈{1,…,n}, define the sets 𝒜_k={1,…,k} and 𝒪_k={γ(1),…,γ(k)} of jobs executed in the first k positions of 𝒮 and 𝒮^*, respectively.
Further, for each k>ℓ, denote by 𝒜_k^-={ℓ+1,…,k} the subset of the 𝒜_k jobs executed after the last critical job ℓ in 𝒮.
For simplicity of the presentation, we denote a job in 𝒜_k by its actual index i and a job in 𝒪_k by γ(j) (i.e. using the γ(·) notation), for i,j∈{1,…,k}.
Deriving the lemma is equivalent to showing that ∑_i∈𝒜_k^-α_i≤2[∑_γ(j)∈𝒪_kα_γ(j)].
For each k∈{1,…,n}, consider the bounding (complete bipartite) graph G_k=(𝒜_k∪𝒪_k,𝒜_k×𝒪_k) with 2k nodes: a node for each of the k jobs in 𝒜_k and a node for each of the k jobs in 𝒪_k.
Note that, if there exist i∈𝒜_k and γ(j)∈𝒪_k such that i=γ(j), then we introduce two nodes for job i, i.e. a node in each side of the bipartition.
The graph contains all possible k^2 edges with one endpoint in 𝒜_k and the other in 𝒪_k.
Using standard terminology, a matching in G_k is a subset M_k⊆𝒜_k×𝒪_k of edges without a common endpoint.
If (i,γ(j))∈ M_k, then we say that the nodes i∈𝒜_k and γ(j)∈𝒪_k are matched by M_k.
By relaxing the notion of a matching, we refer to a set M_k of edges in G_k as a ρ-pseudomatching if every node i∈𝒜_k appears at most once as an endpoint of an edge in M_k and every node γ(j)∈𝒪_k appears at most ρ times as an endpoint of an edge in M_k, where ρ∈ℤ^+ is a positive integer.
Let M_k(𝒜_k)={i:(i,γ(j))∈ M_k, i∈𝒜_k, γ(j)∈𝒪_k} be the subset of the 𝒜_k nodes appearing as an endpoint of an edge in M_k.
Next, for each k∈{ℓ+1,…,n}, we show the existence of a 2-pseudomatching M_k in G_k with the following properties:
* M_k(𝒜_k)=𝒜_k^-, i.e. each job i∈𝒜_k^- appears exactly once as the endpoint of an edge in M_k and no other 𝒜_k node is matched.
* For every job i∈𝒜_k^- such that there exists a job γ(j)∈𝒪_k with i=γ(j), we have that (i,γ(j))∈ M_k. That is, every job i which is executed in the {ℓ+1,…,k} positions of 𝒮 and the first k positions of 𝒮^* must be matched with itself in M_k.
* Every job i∈𝒜_k^- which is not executed in the first k positions of 𝒮^*, i.e. γ(j)=i for some γ(j)∉𝒪_k, must be matched with a job γ(j)∈𝒪_k∖𝒜_k in M_k.
* Each job γ(j)∈𝒪_k∖𝒜_k is matched with at most one job in 𝒜_k∖𝒪_k.
* If (i,γ(j))∈ M_k, for some pair of jobs i∈𝒜_k and γ(j)∈𝒪_k, then α_i≤α_γ(j).
We refer to a pseudomatching satisfying the above properties as a 2-pseudomatching.
If such a pseudomatching exists, then it clearly holds that ∑_i∈𝒜_k^-α_i≤2[∑_γ(j)∈𝒪_kα_γ(j)]: each 𝒜_k^- job is matched exactly once with an 𝒪_k job and each 𝒪_k job is matched with at most two 𝒜_k^- jobs.
We will show its existence by induction on k∈{ℓ+1,…,n}.
For the induction basis, consider the case k=ℓ+1.
If ℓ+1∈𝒪_k, then γ(j)=ℓ+1, for some j∈{1,…,ℓ+1}.
Clearly, ℳ_ℓ+1={(ℓ+1,γ(j))} is a 2-pseudomatching, given that α_ℓ+1=α_γ(j).
If ℓ+1∉𝒪_k, then, by using a simple pigeonhole principle argument[A similar, but more elaborate, pigeonhole argument is rigorously presented in the proof of Theorem <ref>.], there exists a job γ(j)>ℓ+1 such that j∈{1,…,ℓ+1}.
Since ℓ+1 is not critical, we have that α_ℓ+1≤α_γ(j).
Otherwise, if α_ℓ+1>α_γ(j), by the way the non-interfering algorithm works and the fact that ℓ+1<γ(j), we would have C_ℓ+1≤ r_γ(j)<C_γ(ℓ+1)^*, which would contradict that ℓ+1 is not critical.
We conclude that M_ℓ+1={(ℓ+1,γ(j))} is a 2-pseudomatching.
For the induction step, assume that G_k admits a 2-pseudomatching M_k.
We will convert M_k into a 2-pseudomatching M_k+1 for G_k+1.
This update involves the following steps:
* Initially, we adapt M_k based on the job γ(k+1) executed in the (k+1)-th position of 𝒮^* to obtain an intermediate 2-pseudomatching M_k+1.
Suppose that γ(k+1)=i.
If i∉𝒜_k^-, then we set M_k+1=M_k.
Otherwise, if i∈𝒜_k^-, i.e. 𝒮 completes job i in the positions {ℓ+1,…,k}, then we need to update M_k so as to satisfy Properties 1-2 in the resulting 2-pseudomatching M_k+1.
Since γ(k+1)∉𝒪_k, by the induction hypothesis, job i is matched with exactly one job γ(j)∈𝒪_k∖𝒜_k in M_k.
We set M_k+1=(M_k∪{(i,γ(k+1))})∖{(i,γ(j))}, that is we remove (i,γ(j)) from M_k and add (i,γ(k+1)) to obtain M_k+1.
In this way, i∈𝒜_k^- is now matched with job γ(k+1), i.e. itself, in the 𝒪_k side of G_k.
* Next, we adapt M_k+1 so as to have job k+1 matched with some 𝒪_k+1 job in M_k+1. We distinguish two cases based on whether (k+1)∈𝒪_k+1, or (k+1)∉𝒪_k+1.
In the former case, there exists j∈{1,…,k+1} s.t. γ(j)=k+1. Based on Property 2, we set M_k+1=M_k+1∪{(k+1,γ(j))}, i.e. job (k+1)∈𝒜_k+1^- is matched with itself in the 𝒪_k+1 side.
In the latter case, it holds that k+1∈𝒜_k+1∖𝒪_k+1.
Let x=|𝒜_k+1∖𝒪_k+1| be the number of jobs executed in the first k+1 positions of 𝒮 and after the first k+1 positions of 𝒮^*.
A simple set theoretic argument implies that |𝒪_k+1∖𝒜_k+1|=x.
By the induction hypothesis and Properties 3-4, we conclude that each 𝒪_k∖𝒜_k job is matched with at most one 𝒜_k∖𝒪_k job.
Therefore, given that k+1∈𝒜_k+1∖𝒪_k+1, there exists a job γ(j)∈𝒪_k+1∖𝒜_k+1 which is not matched with any 𝒜_k job in M_k+1.
We set M_k+1=M_k+1∪{(k+1,γ(j))} and guarantee that Properties 1-4 are satisfied.
Next, we claim that α_k+1≤α_γ(j). Otherwise, if α_k+1>α_γ(j), by the way the non-interfering algorithm works and the fact that k+1<γ(j) (γ(j)∈𝒪_k+1∖𝒜_k+1), we would have C_k+1≤ r_γ(j)<C_γ(k+1)^*, which would contradict that k+1 is not critical.
Lemma <ref> lower bounds the optimal makespan using release times.
Assume that the jobs are numbered so that r_1≤…≤ r_n. Any optimal schedule 𝒮^* has makespan T^*≥∑_i=1^nβ^n-ir_i.
Denote by C_i and T the completion time of job i∈𝒥 and the makespan, respectively, in an optimal schedule 𝒮^*.
We prove by induction that C_k≥∑_i=1^kβ^k-ir_i, for each k∈{1,…,n}.
For k=1, it clearly holds that C_1≥ s_1^*≥ r_1.
Suppose that lemma is true for some k∈{1,…,n-1}.
By the fact that s_k+1≥max{r_k+1,C_k} and the induction hypothesis:
C_k+1= (1+β)s_k+1+α_k+1≥ r_k+1+β C_k≥ r_k+1+β[∑_i=1^kβ^k-ir_i]
= ∑_i=1^k+1β^(k+1)-ir_i.
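A one-line sketch of this bound (hypothetical helper; names are ours) that can be used to sanity-check computed schedules:

def release_time_lower_bound(releases, beta):
    # T >= sum_i beta^(n-i) r_i with r_1 <= ... <= r_n (the lemma above)
    r, n = sorted(releases), len(releases)
    return sum(beta ** (n - i) * r[i - 1] for i in range(1, n + 1))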
Theorem <ref> presents bounds on the approximation ratio of the non-interfering algorithm.
Algorithm <ref> is (3+e)-approximate for instances with β≥ n+1 and Ω((1+β)^n)-approximate for general instances.
Recall that the jobs are numbered in increasing order s_1≤…≤ s_n of their start times in the schedule 𝒮 produced by the non-interfering algorithm
and γ(k)∈𝒥 is the job executed in the k-th position of an optimal schedule 𝒮^*, for k∈{1,…,n}.
Let ℓ=max{k:C_k≤ C_γ(k)^*
} be the last critical position.
By Lemma <ref>, we have that T = (1+β)^n-ℓC_ℓ+∑_i=ℓ+1^n(1+β)^n-i+1q_i+∑_i=ℓ+1^n (1+β)^n-iα_i.
Using Lemma <ref>, i.e. expanding the last sum of this expression with geometric series, we get that:
T = (1+β)^n-ℓC_ℓ + ∑_i=ℓ+1^n(1+β)^n-i+1q_i
+∑_i=ℓ+1^nα_i+∑_k=ℓ+2^nβ(1+β)^n-k(∑_i=ℓ+1^k-1α_i)
Consider job i∈𝒥.
If q_i>0, then job i begins at its release time r_i.
That is, the gap of length q_i immediately preceding job i occurs exactly during the time interval [r_i-q_i,r_i).
Hence, q_i≤ r_i.
Based on this observation, the obvious fact that ∑_i=ℓ+1^nα_i≤∑_i=1^nα_γ(i) and Lemma <ref>, Equation (<ref>) implies that:
T ≤ (1+β)^n-ℓC_γ(ℓ)^* +
∑_i=ℓ+1^n(1+1/β)^n-i+1β^n-i+1r_i
+ 2[∑_i=1^nα_γ(i) + ∑_k=ℓ+2^nβ(1+β)^n-k(∑_i=1^k-1α_γ(i)) ]
By the definition of γ(·), Lemma <ref> and Lemma <ref>, we get that
T^*≥max{(1+β)^n-ℓC_γ(ℓ)^*, ∑_j=1^nα_γ(j)+∑_k=2^nβ(1+β)^n-k(∑_i=1^k-1α_γ(i)), ∑_i=1^nβ^n-i+1r_i}
For β≥ n+1, we have that (1+1/β)^n+1≤ e.
Therefore, Equations (<ref>)-(<ref>) imply that T≤ (3+e)T^*.
For the lower bound, consider an instance with n jobs, where r_min=B, for some large constant B=ω(n).
Job j∈{1,…,n} has α_j=B+n-j and r_j=∑_i=1^j(1+β)^i-1B.
We show by induction on j that no job begins before r_j in the algorithm's schedule 𝒮.
Given that r_1<…< r_n, our claim trivially holds for j=1 because no job can be executed before r_1=min_i∈𝒥{r_i}.
For the induction hypothesis, assume that our claim is true for some j≥ 1, i.e. no job begins before r_j in 𝒮.
Since α_1≥…≥α_j, any job beginning at time r_j would have completion time at least:
(1+β)r_j+α_j=∑_i=1^j+1(1+β)^i-1B + n-j > r_j+1.
So, the algorithm will not schedule any job during [r_j,r_j+1), because otherwise this job would be interfering.
Our claim implies that the algorithm schedules all jobs according to shortest fixed processing time first starting from r_n and has makespan:
T=(1+β)^nr_n+∑_j=1^n(1+β)^n-jB+∑_j=1^n(1+β)^n-j(j-1) = Ω((1+β)^2nB).
In an optimal schedule 𝒮^*, the jobs are executed consecutively, without any idle period between them, according to earliest release time first.
The makespan of 𝒮^* is:
T^*=∑_j=1^n(1+β)^n-jB+∑_j=1^n(1+β)^n-j(n-j) = O((1+β)^n).
Hence, T/T^*=Ω((1+β)^n).
§.§ Non-Idling Algorithm
Algorithm <ref> constructs a feasible schedule by executing the shortest pending job whenever the machine becomes available.
[Non-Idling]
Greedily schedule jobs over time, by initiating a pending job with minimal fixed processing time, i.e. argmin_i∈𝒫(t){α_i}, at each time t that the machine becomes available.
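A minimal Python sketch of this rule follows; the function name and data layout are ours and only illustrate the greedy choice, they are not part of the paper's pseudocode.

def non_idling_schedule(r, alpha, beta):
    # Whenever the machine is free, start the released-but-unscheduled job
    # with the smallest fixed processing time alpha_i; otherwise jump to the
    # next release time.  Completion follows C = (1 + beta) * s + alpha_i.
    remaining = set(range(len(r)))
    t, starts = 0.0, {}
    while remaining:
        pending = [i for i in remaining if r[i] <= t]
        if not pending:
            t = min(r[i] for i in remaining)   # idle until the next release
            continue
        i = min(pending, key=lambda j: alpha[j])
        starts[i] = t
        t = (1 + beta) * t + alpha[i]
        remaining.remove(i)
    return starts, t                           # start times and makespan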
Algorithm <ref> is (1+e)-approximate for instances with β≤ 1/n and Ω((1+β)^n)-approximate for general instances.
On the positive side, consider a schedule 𝒮 produced by the non-idling algorithm and number the jobs in increasing order s_1<…<s_n of their start times in 𝒮.
Let Q=∑_i=1^n(1+β)^n-i+1q_i and A=∑_i=1^n(1+β)^n-iα_i be the gap-dependent and fixed processing time costs of 𝒮, respectively.
Next, we will show that Q≤ T^* and A≤ e T^*, where T^* is the optimal makespan.
By Lemma <ref>, we get that T=Q+A≤(1+e)T^*.
To bound the gap-dependent cost of the algorithm, we show the existence of an optimal schedule 𝒮^* satisfying the property that, for each idle time interval [t,u) in 𝒮, the interval [t,u) is also idle in 𝒮^*.
For simplicity, we prove the claim for the case where 𝒮 contains a single maximal idle time interval, i.e. a single gap q_j>0, but the argument naturally extends to an arbitrary number of gaps.
We may partition 𝒥 into the sets 𝒥_A={i∈𝒥:s_i≥ t} and 𝒥_B={i∈𝒥:s_i<t} of jobs beginning after and before, respectively, time t in 𝒮.
Using this definition and the fact that [t,u) is idle in the algorithm's schedule, we conclude that r_i≥ u for each i∈𝒥_A and r_i<t for every i∈𝒥_B.
Let 𝒮_A^* and 𝒮_B^* be optimal schedules for 𝒥_A and 𝒥_B of makespans T_A^* and T_B^*, respectively.
Clearly, the schedule 𝒮^* obtained by merging 𝒮_A^* and 𝒮_B is feasible and optimal for 𝒥 given that T^*=T_A^*.
Thus, Q≤ T^*.
To bound the algorithm's fixed processing time cost, by using the standard Euler constant inequality (1+1/k)^k≤ e, for each constant k≥ 1, we get that:
A=∑_i=1^n(1+β)^n-iα_i≤
(1+β)^n[∑_i=1^nα_i]≤
e[∑_i=1^nα_i]≤ e T^*.
On the negative side, consider an instance with n=k+1 jobs, namely a job of fixed processing time α_1=B>1, release time r_1=0, and k jobs with α_i=0 and r_i=1, for i∈{2,…,k+1}.
The non-idling schedule executes the jobs in increasing order of their indices, i.e. the first job completes at C_1=B and all remaining jobs are consecutively executed starting at C_1.
By Lemma <ref>, 𝒮 has makespan T=(1+β)^kB.
In an optimal schedule 𝒮^*, all short jobs are consecutively executed during [1,(1+β)^k], the long job begins right after and completes at T^*=(1+β)^k+1+B.
If B=(1+β)^k+1,
T/T^*=(1+β)^kB/(1+β)^k+1+B=Ω((1+β)^k).
§.§ Best-of-Two Algorithm
The best-of-two algorithm returns the best among the non-idling and non-interfering schedules.
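As a sketch (Python), the rule reduces to taking the minimum of the two makespans; non_interfering_schedule below is a hypothetical helper standing in for Algorithm <ref>, assumed to return a (starts, makespan) pair like the non-idling sketch above.

def best_of_two(r, alpha, beta):
    # Run both candidate algorithms and keep the schedule with the smaller makespan.
    starts_a, t_a = non_idling_schedule(r, alpha, beta)
    starts_b, t_b = non_interfering_schedule(r, alpha, beta)   # assumed helper
    return (starts_a, t_a) if t_a <= t_b else (starts_b, t_b)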
Theorem <ref> shows that this algorithm achieves a 2-approximation ratio when r_i∈{0,r} for each job i∈𝒥.
The best-of-two algorithm is 2-approximate for instances with two distinct release times.
Consider an optimal schedule 𝒮^* of makespan T^*, in which job i∈𝒥 begins at s_i^* and completes at C_i^*.
Further, denote by k^*=|{j∈𝒥:s_j^*<r}| the number of jobs beginning before r and let γ be the order s_γ(1)^*<…<s_γ(n)^* of jobs in increasing start times, i.e. job γ(i) is executed in the i-th position of 𝒮^*.
For a given subset 𝒥'={π(1),…,π(k)} of k jobs which are numbered so that α_π(1)≤…≤α_π(k), denote by F(𝒥')=∑_i=1^k(1+β)^k-iα_π(i) their fixed processing time cost if they are continuously scheduled without gaps and other intermediate jobs in non-decreasing order of their fixed processing times.
We distinguish two cases based on whether C_γ(k^*)^*≤ r or C_γ(k^*)^*>r.
In the former case, since n-k^* jobs begin after r in 𝒮^*, Lemma <ref> implies that T^*≥max{(1+β)^n-k^*r,F(𝒥)}.
Assume that the algorithm's non-interfering schedule 𝒮 has makespan T and suppose that it associates a start time s_j and completion time C_j to each job j∈𝒥.
Also, let k=|{j∈𝒥:s_j<r}|.
We claim that k≥ k^*.
Assume for contradiction that k<k^*.
W.l.o.g. we may assume that α_γ(1)≤…≤α_γ(k^*), i.e. 𝒮^* schedules jobs in non-decreasing order of fixed processing times before r.
Because 𝒮 schedules the pending job with the shortest fixed processing time at each time that the machine becomes available, it must be the case that α_i≤α_γ(i), for 1≤ i≤ k.
If C_k<C_γ(k^*)^*, then there exists a job j∈𝒥 such that s_j^*<r≤ s_j which can be feasibly executed during [C_k,r] in 𝒮, contradicting the definition of k.
If C_k≥ C_γ(k^*)^*, then there exist jobs i,j∈𝒥 such that α_i>α_j, s_i<r≤ s_j and
s_i^*≥ r>s_j^*,
which contradicts the fact that the algorithm always schedules a pending job with a minimal processing time.
Hence, our claim is true.
By Lemma <ref>, if 𝒥'={j∈𝒥:s_j≥ r}, then
T = (1+β)^n-kr+F(𝒥')
≤ (1+β)^n-k^*r+F(𝒥)
≤ 2T^*.
In the latter case, consider the non-idling schedule 𝒮 of the algorithm, denote its makespan by T and the execution interval of each job j∈𝒥 by [s_j,C_j].
Let t^*=max_j∈𝒥{C_j^*:s_j^*<r} and t=max_j∈𝒥{C_j:s_j<r} be the completion time of the interfering job in 𝒮^* and 𝒮, respectively.
As before, T^*≥max{(1+β)^n-k^*t^*,F(𝒥)}.
Given that 𝒮 executes the jobs with the minimal fixed processing times until it encounters job k with C_k>r, we have that k≥ k^*.
If t≤ t^*, then
T = (1+β)^n-kt+F(𝒥') ≤ (1+β)^n-k^*t^*+F(𝒥)
≤ 2T^*, where 𝒥'={j∈𝒥:s_j≥ t}.
If t>t^*, then k≥ k^*-1, i.e. T^*≥(1+β)^n-k-1r.
Therefore, T≤ (1+β)^n-k-1r+F(𝒥'∪{k})≤ 2T^*.
§ EARLIEST COMPLETION-TIME FIRST
Next, we consider the Earliest Completion Time First (ECTF) algorithm and show that it is O(1+1/β)-approximate.
ECTF produces a schedule satisfying Observation <ref>.
That is, if jobs are numbered in increasing order s_1<…<s_n of their start times in 𝒮, then job i∈𝒥 has start time s_i=max{r_i,C_i-1}.
At every time t, let Γ_i(t)=(1+β)max{t,r_i}+α_i be the completion time of job i∈𝒥 if i is the next job to be executed at or after time t.
In addition, denote by ℱ(t)={i:i∈𝒥,C_i≤ t} the set of completed jobs at time t in 𝒮.
[ECTF]
At each time t that the machine becomes available, schedule the uncompleted job min_i∈𝒥∖ℱ(t){Γ_i(t)} with the earliest completion time.
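A direct transcription of this rule into Python (helper name ours) keeps the set of uncompleted jobs and repeatedly starts the job minimizing Γ_i(t).

def ectf_schedule(r, alpha, beta):
    # At each decision time t, start the uncompleted job i minimizing
    # Gamma_i(t) = (1 + beta) * max(t, r_i) + alpha_i.
    remaining = set(range(len(r)))
    t, order = 0.0, []
    while remaining:
        i = min(remaining, key=lambda j: (1 + beta) * max(t, r[j]) + alpha[j])
        order.append(i)
        t = (1 + beta) * max(t, r[i]) + alpha[i]
        remaining.remove(i)
    return order, t                 # execution order and makespan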
Algorithm <ref> achieves an approximation ratio ρ∈[2,3+1/β].
We first prove the upper bound.
Denote the ECTF schedule and an optimal schedule by 𝒮 and 𝒮^*, respectively.
Number the jobs in increasing order s_1<…<s_n of their start times in 𝒮.
That is, job i∈𝒥 is executed in the i-th position of 𝒮.
Let π(i)∈{1,…,n} be the position at which job i∈𝒥 is executed in 𝒮^*.
Analogously, denote by γ(i)∈𝒥 the job executed in the i-th position of 𝒮, for i∈{1,…,n}.
We partition the set 𝒥 of jobs into the subset 𝒲={i:i≥π(i)} of well-ordered jobs whose position in 𝒮 is greater than or equal to their position in 𝒮^* and the subset ℐ={i:i<π(i)} of inverted jobs executed at a strictly smaller position in 𝒮 compared to their position in 𝒮^*.
Consider an arbitrary inverted job i∈ℐ executed in a subsequent position π(i)∈{i+1,…,n} in 𝒮^*.
By a simple pigeonhole principle argument, a key observation is that there exists a job j executed after i in 𝒮 and not later than the i-th position in 𝒮^*, i.e. π(j)≤ i<j.
Clearly, job j is well-ordered, i.e. j∈𝒲.
Consider the start times s_i and s_i^* of job i∈𝒥 and the immediately preceding gaps q_i and q_i^* in 𝒮 and 𝒮^*, respectively.
Based on the previous observation, define the set 𝒦_I={i:i∈ℐ,∃ j∈𝒲 s.t. π(j)≤ i<j, r_j>s_i-q_i} of critical inverted jobs.
That is, for each job k∈𝒦_I, there exists a well-ordered job ℓ such that π(ℓ)≤ k<ℓ and ℓ is released after s_k-q_k.
Given that ECTF executes k before ℓ, it must be the case that Γ_k(s_k-q_k)≤Γ_ℓ(s_k-q_k), i.e. C_k≤ (1+β)r_ℓ+α_ℓ≤ C_ℓ^*.
Thus, we get that ∑_i=1^k(1+β)^k-i+1q_i+∑_i=1^k(1+β)^k-iα_i≤∑_i=1^π(ℓ)(1+β)^π(ℓ)-i+1q_γ(i)^*+∑_i=1^π(ℓ)(1+β)^π(ℓ)-iα_γ(i).
By taking into account that π(ℓ)≤ k and multiplying both sides with (1+β)^n-k:
∑_i=1^k(1+β)^n-i+1q_i+∑_i=1^k(1+β)^n-iα_i≤∑_i=1^k(1+β)^n-i+1q_γ(i)^*+∑_i=1^k(1+β)^n-iα_γ(i).
Next, define the set 𝒦_W={i:i∈𝒲,q_i>0} of critical well-ordered jobs.
Consider a job k∈𝒦_W.
Given that q_k>0, job k begins at its release time in 𝒮, i.e. s_k=r_k.
That is, C_k=(1+β)r_k+α_k≤ C_k^*, or equivalently ∑_i=1^k(1+β)^k-i+1q_i+∑_i=1^k(1+β)^k-iα_i≤∑_i=1^π(k)(1+β)^π(k)-i+1q_γ(i)^*+∑_i=1^π(k)(1+β)^π(k)-iα_γ(i).
By taking into account the fact that k is well-ordered, i.e. π(k)≤ k, and multiplying both sides of the inequality with (1+β)^n-k, we conclude that Eq. (<ref>) holds for each job k∈ K_W as well.
Let 𝒦=𝒦_I∪𝒦_W be the set of all critical jobs and consider the maximum index critical job k=max{i:i∈𝒦}.
Next, denote by 𝒲_k={i:i>k,i∈𝒲} and ℐ_k={i:i>k,i∈ℐ} the well-ordered and inverted jobs, respectively, of index strictly greater than the one of the maximum index critical job k.
For each job i∈𝒥 with i>k, either i∈𝒲_k, or i∈ℐ_k.
In the former case, since i∈𝒲_k, we have that i≥π(i).
Because i>k, it also holds that q_i=0.
Therefore, (1+β)^n-i+1q_i+(1+β)^n-iα_i≤(1+β)^n-π(i)α_i.
By summing over all jobs in 𝒲_k, we get that
∑_i∈𝒲_k(1+β)^n-i+1q_i+∑_i∈𝒲_k(1+β)^n-iα_i ≤∑_i∈𝒲_k(1+β)^n-π(i)α_i.
In the latter case, i.e. i∈ℐ_k, because i<π(i), the pigeonhole principle argument mentioned earlier implies that there exists a well-ordered job j∈𝒲 such that π(j)≤ i<j.
Let t=s_i-q_i.
Because i>k, i.e. i is non-critical, it must be the case that r_j≤ t.
Due to the ECTF policy and given that job i is executed before job j in 𝒮, we have that
Γ_i(t)≤Γ_j(t) ⇒ (1+β)max{t,s_i} + α_i ≤ (1+β)max{t,r_j} + α_j
⇒ (1+β)q_i + α_i ≤α_j.
Taking also into account that π(j)≤ i, (1+β)^n-i+1q_i+(1+β)^n-iα_i≤ (1+β)^n-π(j)α_j.
We pick such a well-ordered job j arbitrarily, match it with i, and denote it by μ(i)=j.
Let ℳ_j={i:i∈ℐ,μ(i)=j} be the set of inverted jobs matched with a job j∈𝒲.
If i∈ℳ_j, then it clearly holds that π(j)≤ i.
Thus, based on the weak pseudomatching bound (Lemma 7),
∑_i∈ℳ_j[(1+β)^n-i+1q_i+(1+β)^n-iα_i] ≤∑_i=π(j)^n(1+β)^n-iα_j
=[(1+β)^n-π(j)+1-1/(1+β)-1]α_j
≤(1+1/β)(1+β)^n-π(j)α_j
That is, we get that:
∑_i∈ℐ_k[(1+β)^n-i+1q_i+(1+β)^n-iα_i]
= ∑_j∈𝒲∑_i∈ℳ_j[(1+β)^n-i+1q_i+(1+β)^n-iα_i]
≤(1+1/β)[∑_j∈𝒲 (1+β)^n-π(j)α_j]
The algorithm achieves makespan:
T =∑_i=1^k[(1+β)^n-i+1q_i+(1+β)^n-iα_i] + ∑_i∈𝒲_k(1+β)^n-iα_i
+ ∑_i∈ℐ_k[(1+β)^n-i+1q_i+(1+β)^n-iα_i]
For the optimal makespan, it clearly holds that:
T^*≥max{∑_i=1^n(1+β)^n-i+1q_γ(i)^*+∑_i=1^n(1+β)^n-iα_γ(i),∑_i∈𝒲(1+β)^n-π(i)α_i}
By Eq. (<ref>)-(<ref>), we conclude that T≤(3+1/β)T^*.
Lower Bound
Next, we show the lower bound.
We consider an instance with n=2k jobs: k short jobs and k long jobs.
The j-th long job has r_j^L=0 and α_j^L=(1+β)B, for j∈{1,…,k}.
The j-th short job has release time r_j^S=∑_i=1^j(1+β)^i-1B and α_j^S=0, for j∈{1,…,k}.
Let 𝒮 be a schedule produced by ECTF.
We show by induction that all small jobs are executed before all long jobs in 𝒮, i.e. the j-th short job is executed during [∑_i=1^j(1+β)^i-1B, ∑_i=1^j(1+β)^iB).
For j=1, the small job completes at (1+β)B and any long job has completion time ≥(1+β)B in any feasible schedule.
Next, assume that our claim is true for some j∈{1,…,k-1}.
Since the j-th short job has completion time C_j=∑_i=1^j(1+β)^iB, the (j+1)-th job begins at its release time r_j+1>C_j and completes at ∑_i=1^j+1(1+β)^iB, while any long job would complete at ≥ C_j+1 if it began at C_j.
Therefore,
T=∑_i=1^2k(1+β)^iB=(1+β)B/β[(1+β)^2k-1].
On the other hand, the optimal solution executes all long jobs before all short jobs and has makespan:
T^*=∑_i=k+1^2k(1+β)^iB=(1+β)^k+1B/β[(1+β)^k-1].
Therefore, we conclude that
β T = (1+β)^2k+1B-(1+β)B
=[(1+β)^2k+1B-(1+β)^k+1B] + [(1+β)^k+1B-(1+β)B]
≤[1+1/(1+β)^k]β T^*
Hence, T/T^*≥(1+1/(1+β)^k).
In particular, for β=o(1/k), this ratio tends to 2, showing that the approximation ratio of ECTF is at least 2.
§ TOTAL COMPLETION TIME OBJECTIVE
This section explores relationships between the problems of minimizing the makespan max_i∈𝒥{C_i} and the sum of completion times ∑_i∈𝒥C_i.
Theorem <ref> shows that any O(1)-approximate schedule for the latter is also O(1)-approximate for the former.
Any ρ-approximation algorithm for minimizing the sum of completion times is (1+ρ)-approximate for minimizing the makespan.
Suppose that 𝒮 and 𝒮^* are a ρ-approximate schedule for minimizing the sum of completion times and an optimal schedule for minimizing the makespan, respectively, and let T and T^* be the makespans of the two schedules.
Assuming that s_i, C_i, and q_i are the start time, completion time, and gap associated with job i∈𝒥 in 𝒮, we similarly define s_i^*, C_i^*, and q_i^* for 𝒮^*.
Given that 𝒮 is ρ-approximate for minimizing ∑_i=1^nC_i, the job start times in the two schedules can be related as follows:
∑_i=1^nC_i≤ρ[∑_i=1^nC_i^*]⇒ ∑_i=1^n[(1+β)s_i+α_i] ≤ρ[∑_i=1^n[(1+β)s_i^*+α_i]]
⇒ ∑_i=1^ns_i ≤ρ[∑_i=1^ns_i^*] + ρ-1/1+β[∑_i=1^nα_i]
Observe that ∑_i=1^nq_i≤ r_max, where r_max=max_i=1^n{r_i} is the maximum release time, because gaps may only occur before release times in a canonical schedule.
To upper bound the makespan of 𝒮:
T = ∑_i=1^n[q_i+p_i(s_i)]
= ∑_i=1^nq_i + ∑_i=1^nα_i + β[∑_i=1^ns_i]
≤ r_max + 1+ρβ/1+β[∑_i=1^nα_i] + ρβ[∑_i=1^ns_i^*]
≤ r_max + ρ[∑_i=1^n(β s_i^*+α_i)]
The last inequality follows from the fact that ρ≥ 1.
Given that T^*≥max{∑_i=1^n(β s_i^*+α_i),r_max}, we conclude that T≤(1+ρ)T^*.
Theorem <ref> shows that any O(1)-approximate schedule for max_i∈𝒥{C_i} is O(1+1/β)-approximate for ∑_i∈𝒥C_i.
This result and Theorem <ref> directly imply that there exists an O(1)-approximation algorithm for minimizing the sum of completion times when β=Ω(1).
Any ρ-approximation algorithm for minimizing the makespan is (1+1/β)ρ-approximate for minimizing the sum of completion times.
Suppose that 𝒮 and 𝒮^* are a ρ-approximate schedule for minimizing the makespan and an optimal schedule for minimizing the sum of completion times, respectively.
Assuming that s_i, C_i, and q_i are the start time, completion time, and gap associated with job i∈𝒥 in 𝒮, we similarly define s_i^*, C_i^*, and q_i^* for 𝒮^*.
Given that 𝒮 is ρ-approximate for the makespan objective, we have T≤ρ T^*, where T^* now denotes the makespan of 𝒮^*. Moreover, T=∑_i=1^nq_i+∑_i=1^n[β s_i+α_i]≥∑_i=1^n[β s_i+α_i]. Since (1+β)s_i+α_i≤(1+1/β)[β s_i+α_i] for every job i∈𝒥, we can upper bound the sum of completion times of 𝒮 as follows:
∑_i=1^nC_i = ∑_i=1^n[(1+β)s_i+α_i] ≤(1+1/β)[∑_i=1^n[β s_i+α_i]] ≤(1+1/β)T ≤(1+1/β)ρ T^*.
Because the makespan of a schedule never exceeds its sum of completion times, T^*≤∑_i=1^n C_i^*, and we conclude that ∑_i=1^nC_i≤(1+1/β)ρ[∑_i=1^n C_i^*].
Algorithm <ref> is O(1+1/β^2)-approximate for minimizing ∑_i∈𝒥C_i.
§ CONCLUDING REMARKS
We obtain new approximation results for time-dependent scheduling with uniformly deteriorating processing time functions p_i(s_i)=α_i+β· s_i, under the makespan and the total completion time objectives.
The approximability of the more general problems with arbitrary linear deterioration remains an intriguing open question.
We expect our bounding framework based on pseudomatchings to be useful for follow-up work.
The key technical difficulty would be extending the proposed bounds to account for the different deteriorating rates.
We leave this as a future direction.
|
http://arxiv.org/abs/2307.02290v1
|
20230705134157
|
On a variant of Viterbo's conjecture
|
[
"Wenmin Gong"
] |
math.SG
|
[
"math.SG",
"math.DG",
"math.DS",
"math.MG",
"53D40 (Primary) 57R17, 53D12(Secondary)"
] |
We consider an analogue of Viterbo's conjecture: whether the spectral metric on the orbit space of a fiber in the disk cotangent bundle of a closed manifold, under the action of the compactly supported Hamiltonian diffeomorphism group, is bounded. We use wrapped Floer cohomology to define the spectral invariant of an admissible Lagrangian submanifold in a Weinstein domain, and show that the pseudo-metric given by this spectral invariant is a genuine Ham-invariant metric. We show that the spectral metric on the orbit space of an admissible Lagrangian is bounded if and only if the wrapped Floer cohomology vanishes. In particular this gives a negative answer to the corresponding variant of Viterbo's problem. As a consequence, we show that the Lagrangian Hofer diameter of the orbit space of any fiber in the disk cotangent bundle of a closed manifold is infinite.
§ INTRODUCTION
§.§ Viterbo's conjecture and results
A well-known conjecture of Viterbo from 2007 <cit.> states that the spectral norm γ(o_M,L) of any exact Lagrangian L Hamiltonian isotopic to the zero section o_M in the unit disk cotangent bundle D_g^*M of the n-dimensional torus M=𝕋^n admits a uniform bound with respect to a Riemannian metric g. Recently, Shelukhin <cit.> proved this conjecture, and beyond this case he also proved that Viterbo's conjecture holds true for a wide range of closed manifolds M, for instance, the compact rank one symmetric spaces S^n, ℝP^n, ℂP^n, ℍP^n <cit.> and string point-invertible manifolds <cit.>. Biran and Cornea <cit.> proved that Viterbo's conjecture is equivalent to the statement that the boundary depth of the Floer complex of the pair (L,T_q^*M) is uniformly bounded over Lagrangians L that are exact isotopic to the zero section within T^*M of a closed manifold M. Yet despite all that, this conjecture remains open for an arbitrary closed smooth manifold.
On the contact side, in <cit.> Dimitroglou Rizell showed that the spectral norm of Legendrians inside the contactisation D^*S^1×ℝ in the one-jet space J^1S^1 which are Legendrian isotopic to the zero section does not satisfy a uniform bound. As a consequence of this result, we see that Viterbo's conjecture cannot be directly generalized to the Legendrian case. In the same paper, the author also constructed a Hamiltonian isotopy of a closed exact Lagrangian inside the 2-torus with an open ball removed (a Liouville domain) for which the spectral norm becomes arbitrarily large, see <cit.> for more examples of this phenomenon.
Viterbo's conjecture motivates us to consider the following
Question A. Does the spectral norm γ(F_q,L) of every exact Lagrangian L (with boundary) in the co-disk bundle
D_g^*M of a closed manifold M, which is isotopic to a fixed fiber F_q⊂ D_g^*M by a Hamiltonian isotopy with support in int(D_g^*M), admit a uniform bound?
Before answering this question, the first problem with which we are confronted is how to define the spectral norm of a Hamiltonian deformation L of a fiber under a Hamiltonian flow with support inside the unit disk cotangent bundle D_g^*M⊂ T^*M of a closed manifold M with respect to a Riemannian metric g. Historically, the spectral norm γ(o_M,L) of an exact Lagrangian L⊂ T^*M that is Hamiltonian isotopic to the zero section o_M⊂ T^*M was given by Viterbo <cit.> by the difference of two homological minimax values of a generating function. In <cit.>, Oh used Lagrangian spectral invariants to obtain an equivalent definition from a modern perspective, see <cit.> for a thorough introduction to this subject. The most natural framework including (D_g^*M,F_q) of the unit disk cotangent bundle and its fiber for us to work with is a Liouville domain (M,dθ) with an exact Lagrangian submanifold L intersecting the boundary ∂ M transversally. At the time of writing of this paper, it is somewhat surprising to the author that the spectral invariant and the associated spectral norm for such L have yet to be explored, even though the spectral norm for the Hamiltonian deformation of a closed embedded weakly exact Lagrangian in a compact or convex at infinity symplectic manifold has already been given by Leclercq <cit.> using Lagrangian Floer homology for over a decade, let alone the previous work on this subject by Viterbo <cit.> and Oh <cit.> and the later work by Monzner, Vichery and Zapolsky <cit.>, Leclercq and Zapolsky <cit.>, Katić, Milinković and Nikolić <cit.> and Fukaya, Oh, Ohta and Ono <cit.>.
Let (M^2n,ω=dθ) be a Weinstein domain and L^n⊂ M an admissible Lagrangian (see Definition <ref>).
Denote by ℋ_c(M) the set of Hamiltonians H∈ C^∞([0,1]× M) such that dH_t is supported in [0,1]× M∖∂ M.
Let φ_H^t be the Hamiltonian flow of H∈ℋ_c(M), which is given by integrating the time-dependent vector field X_H_t, where H_t=H(t,·) and X_H_t is determined by -dH_t=ω(X_H_t,·). Let Ham_c(M,ω) denote the group of time-one maps of the flows φ_H^t for Hamiltonians H∈ℋ_c(M).
Using wrapped Floer cohomology, we define the spectral invariant ℓ(H,α) for each H∈ℋ_c(M) and each non-zero class α∈ H^*(L) by
ℓ(H,α)=sup{a∈ℝ∖Spec(L,H)|π_a∘ψ_pss^H(α)=0},
see (<ref>) or (<ref>) for an equivalent definition. We establish the basic properties of this spectral invariant, see Proposition <ref>.
The spectral pseudo-norm is given for H∈ℋ_c(M) by
γ(L,H)=-ℓ(H,1_L)-ℓ(H,1_L)
where 1_L∈ H^0(L) is the fundamental class. We denote by
ℒ(L)={φ(L)|φ∈Ham_c(M,ω)}
the orbit space of L under the group Ham_c(M,ω).
The Lagrangian spectral pseudo-metric is given by
γ(L_1,L_2)=inf_H∈ℋ_c(M){γ(L,H)|φ_H^1(L_1)=L_2}, ∀ L_1,L_2∈ℒ(L).
Based on the properties of the spectral invariant ℓ, we show that the spectral pseudo-metric γ satisfies the following properties.
The pseudo-metric γ on ℒ(L) satisfies
(a) γ(L_1,L_2)≥0, and γ(L_1,L_2)=0 if and only if L_1=L_2;
(b) γ(L_1,L_2)=γ(L_2,L_1);
(c) γ(L_1,L_2)≤γ(L_1,L_3)+γ(L_2,L_3);
(d) γ(φ(L_1),φ(L_2))=γ(L_1,L_2) for all φ∈Ham_c(M,dθ);
(e) γ(L_1,L_2)≤δ_H(L_1,L_2) (see (<ref>) for the definition of δ_H);
(f) γ(L_1,L_2)=γ'(ϕ(L_1),ϕ(L_2)) for all symplectomorphisms ϕ with support in M∖∂ M, where γ':ℒ(ϕ(L))×ℒ(ϕ(L))→ [0,∞) is the corresponding pseudo-metric for ϕ(L).
From this theorem we see that the pseudo-metric γ on ℒ(L) is a genuine Ham-invariant metric. Ham-invariant metrics on Ham_c(M,ω) that satisfy these properties have appeared in the work of Viterbo <cit.> for the standard symplectic vector space (ℝ^2n,ω_0) and cotangent bundles of closed manifolds, Schwarz <cit.> for closed weakly exact manifolds, Frauenfelder and Schlenk <cit.> for weakly exact convex symplectic manifolds and Oh <cit.> in the general case.
Our main result is the following
Let (M^2n,dθ) be a Weinstein domain and L^n⊂ M an admissible Lagrangian submanifold. Then the metric space (ℒ(L),γ) is bounded if and only if the wrapped Floer cohomology of L vanishes.
§.§ Lagrangian Hofer geometry
Apart from Viterbo's conjecture,
another motivation in this paper comes from Hofer geometry, for which we direct the reader to the book <cit.> by Polterovich for a fascinating introduction. For a symplectic manifold (M,ω), there is a pseudo-norm on Ham_c(M,ω) which is defined by
‖φ‖=inf{∫^1_0(sup_x∈ MH(t,x)-inf_x∈ MH(t,x))dt|φ=φ_H^1}.
A deep result of Hofer <cit.> states that the pseudo-norm ‖·‖ on Ham_c(ℝ^2n,ω_0) with ω_0=∑_idx_i∧ dy_i is a norm and it was shown to be the case for general symplectic manifolds in <cit.>. The Hofer norm ‖·‖ gives rise to a bi-invariant metric d_H on Ham_c(M,ω) via the formula d_H(φ,ψ)=‖φ∘ψ^-1‖.
A Lagrangian version of the Hofer metric d_H was found by Chekanov in <cit.> where the Lagrangian Hofer pseudo-metric δ_H on the orbit space ℒ(L) of a fixed Lagrangian L⊂ M is defined as
δ_H(L_1,L_2)=inf{‖φ‖ | φ(L_1)=L_2, φ∈Ham_c(M,ω)}, ∀ L_1,L_2∈ℒ(L).
Let L be a closed Lagrangian submanifold of a tame symplectic manifold (M,ω). Then ((L),δ_H) is a metric space.
In <cit.>, Sugimoto obtained a similar result by replacing the closedness condition by the weakly exact condition for the Lagrangian L with boundary and its completion L. In particular, <cit.> implies the following
Let L be an admissible Lagrangian submanifold of a Liouville domain (M,dθ). Then ((L),δ_H) is a metric space.
The nontrivial part of the proof of Theorem <ref> is to show the non-degeneracy of the map δ_H:ℒ(L)×ℒ(L)→[0,∞), which can also be viewed as a simple application of properties (a) and (e) in Theorem <ref>.
Let B⊂ℝ^2n be the open unit ball which is endowed with the symplectic structure ω=∑_i=1^n1/πdx_i∧ dy_i. Denote by L_0={(x_i,y_i)∈ B| y_i=0} the standard Lagrangian in the unit ball. In <cit.>, Khanevsky proved that the metric space (ℒ(L_0),δ_H) is unbounded in the two-dimensional case, and later Seyfaddini <cit.> proved the unboundedness of (ℒ(L_0),δ_H) in full generality. So far, in the field of Lagrangian Hofer geometry, substantial progress has been made, see for instance <cit.>. Despite this, a basic question that seems to remain open is the following
Question B. How can the aforementioned Khanevsky-Seyfaddini result be generalized to other Lagrangian submanifolds (with boundary)?
In what follows we will see that Theorem <ref> has applications to both Question A and Question B.
In <cit.>, it was shown that for any closed M, the wrapped Floer cohomologies HW^*(F_q) and HW^*(L_K) recover the Morse homologies of the based loop space Ω_qM and the space 𝒫_K(M) of paths in M with end points in K over ℤ/2 respectively, i.e.,
HW^*(F_q)≅ H_-*(Ω_qM), HW^*(L_K)≅ H_-*(𝒫_K(M))
where F_q is any fiber in the unit disk cotangent bundle D^*M of a closed manifold M, and L_K⊂ D^*M is the disk conormal bundle of a compact submanifold K⊂ M (see Example <ref>). Consequently, the wrapped Floer cohomologies HW^*(F_q) and HW^*(L_K) never vanish. This, together with Theorem <ref>, implies the following
The metric spaces (ℒ(F_q),γ) and (ℒ(L_K),γ) are unbounded.
In particular, the above result gives a negative answer to Question A. Moreover, from the inequality
between the spectral and Hofer metrics
γ(L_1,L_2)≤δ_H(L_1,L_2), ∀ L_1,L_2∈ℒ(L)
and Theorem <ref>, we get
Let L be an admissible Lagrangian submanifold in the Weinstein domain (M,dθ). If HW^*(L)≠0, then the metric space (ℒ(L),δ_H) is unbounded.
The above result gives a partial solution to Question B. In particular, we see that
The metric spaces (ℒ(F_q),δ_H) and (ℒ(L_K),δ_H) are unbounded.
Here we notice that Usher <cit.> obtained a similar result to Corollary <ref>[Strictly speaking, concerning the unboundedness of the Lagrangian Hofer metric on ℒ(F_q) or ℒ(L_K), the result of Usher implied by <cit.> is a very special case of our result, since some geometric and topological conditions are required to be satisfied there.
].
On the other hand, the classical result that the symplectic cohomology of the closed unit ball D=B̄⊂ℝ^2n vanishes, i.e. SH^*(D)=0, implies that the wrapped Floer cohomology of the standard Lagrangian L_0 also vanishes, see Theorem 10.6 in <cit.>. Hence, a simple application of Theorem <ref> gives the following
The spectral diameter of the metric space (ℒ(L_0),γ) is finite.
Comparing this assertion with the above Khanevsky-Seyfaddini's result, we find that the spectral metric γ and the Lagrangian Hofer metric δ_H are not equivalent on the orbit space of an admissible Lagrangian in general.
§.§ An overview of the proof of the main result
Our main result (Theorem <ref>) is a consequence of the following two theorems, which we will prove separately.
Let (M^2n,dθ) be a Weinstein domain and L^n⊂ M an admissible Lagrangian submanifold. If the wrapped Floer cohomology of L vanishes, then the diameter of the metric space (ℒ(L),γ) has the upper bound 2c_HW(L).
In the above theorem c_HW(L) is the wrapped Floer capacity of an admissible Lagrangian L (see Section <ref>) which is defined by
c_HW(L)=inf{a>0|ι_-a^L∘ψ^f_pss(1_L)=0}
where 1_L∈ H^0(L) is the fundamental class, ι_a^L is the inclusion map (see Section <ref>), and ψ^f_pss is the PSS-map (see Section <ref>). This capacity as defined by Borman and McLean in <cit.> is an analogue of the symplectic cohomology capacity which is known as Floer-Hofer-Wysocki capacity (see <cit.>).
The strategy to prove Theorem <ref> is reminiscent of the one to compute the Biran-Polterovich-Salamon capacities (as defined in <cit.>) by Weber in <cit.>. The latter method (Weber's) was generalized to the Finsler setting <cit.>, which seems to be the first attempt to relate relative symplectic capacities (BPS) to Finsler/convex geometry via Floer theory.
Let (M^2n,dθ) be a Weinstein domain, and let L^n⊂ M be an admissible Lagrangian submanifold. If the wrapped Floer cohomology of L does not vanish, then the metric space (ℒ(L),γ) is unbounded.
The crucial technical result in the proof of Theorem <ref> is the following
Let (M^2n,dθ) be a Weinstein domain and L^n⊂ M an admissible Lagrangian submanifold. For any Hamiltonian H∈ C^∞([0,1]× M) with support in [0,1]× int(M), it holds that ℓ(H,1_L)≤ 0.
The proof of Proposition <ref> is based on Ganor-Tanny's “barricade" technique <cit.>, see Section <ref>. In particular, this proposition can be viewed as a relative version of Lemma 4.1 in <cit.>. In a recent work <cit.>, Mailhot has used this technique to obtain a similar result for Hamiltonian spectral invariant (see <cit.>) which leads to an unboundness for the Hamiltonian spectral diameter defined on a Liouville domain M provided that SH^*(M)≠0.
We expect that the results and techniques in the present paper could be generalized to solve Question B for a broader range of Lagrangian submanifolds beyond the exact case in a Liouville domain, and to study other related problems in Hofer geometry.
§ ACKNOWLEDGEMENTS
The author would like to thank Yaniv Ganor and Shira Tanny for useful discussions.
§ WRAPPED FLOER COHOMOLOGY
In this section, following <cit.> we briefly recall the construction of wrapped Floer cohomology for admissible Lagrangian submanifolds.
§.§ Liouville domain and admissible Lagrangians
Let (M^2n,ω=dθ) be a Liouville domain with Liouville vector field V_θ positively transverse to ∂ M which is defined by ι_V_θω=θ. Then α=θ|_∂ M is a contact form on ∂ M. Denote by φ_V_θ^t the flow of V_θ. We extend M to a complete manifold by setting
M=M⋃_t≥ 0φ_V_θ^t(∂ M).
Using the Liouville flow φ_V_θ^log(r) one can identify M∖ M with (1,∞)×∂ M on which we extend the one-form θ to M by letting θ=ρ for ρ∈ [1,∞). Then (M,dθ) is a complete exact symplectic manifold. Throughout this paper for r∈(0,∞) we denote
M_r=M∖((r,∞)×∂ M).
Let L^n⊂ (M,dθ) be a connected, orientable, exact Lagrangian submanifold with Legendrian boundary ∂ L=L∩∂ M such that the Liouville vector field is tangent to TL along the boundary. We call such L an admissible Lagrangian if furthermore θ|_L=dk_L for some function k_L∈ C^∞(L,) and k_L vanishes on a neighborhood of the boundary ∂ L.
We extend an admissible Lagrangian L to a non-compact exact one by setting
L=L⋃_t≥ 0φ_V_θ^t(∂ L)
with θ|_L=dk_L for the compactly supported function k_L (by abuse of notion). Clearly, in the coordinates (1,∞)×∂ M⊂M the non-compact Lagrangian L∖ L has the form (1,∞)×∂ L.
All Lagrangian planes through 0 are admissible Lagrangians in a starshaped domain (D^2n,dθ) in ^2n with θ=1/2∑_i=1^n(y_idx_i-x_idy_i).
For a fiberwise starshaped hypersurface W in the cotangent bundle T^*M of a closed manifold M, each cotangent fiber W∩ T_q^*M is an admissible Lagrangian of the Liouville domain (W,dθ) where θ is the restriction of the canonical one-form pdq of T^*M to W.
Let D^*M be the disk cotangent bundle of a closed manifold M. The disk conormal bundle
L_K={(q,p)∈ D^*M|_K|⟨ p,v⟩=0 ∀ v∈ T_qK}
of a closed submanifold of K⊂ M is an admissible Lagrangian with respect to the one-form pdq|_D^*M.
Let J_t with t∈[0,1] be the t-dependent smooth dθ-compatible almost complex structures on M, meaning that ⟨·,·⟩=dθ(·,J_t·) is a family of Riemannian metrics on M and J^2=-Id in End(TM). An almost complex structure J is said to be of contact type on [ρ_0,∞)×∂ M for some ρ_0>0 if dρ∘ J=-θ for ρ≥ρ_0. Denote by 𝒥_θ the set of smooth families (J_t)_t∈[0,1] of compatible almost complex structures of contact type and time-independent on [1,∞)×∂ M.
To define wrapped Floer cohomology on (M,dθ) we will need a C^0-bound for solutions u:× [0,1]→M to the s-dependent Floer equation
∂_s u+J_t^s(∂_t u-X_H_t^s(u))=0
with Lagrangian boundary conditions
u(s,0), u(s,1)∈L ∀ s∈.
The following technical lemma is a consequence of a maximum principle for Floer strips, its proof is
standard, see for instance <cit.>.
Assume that for ρ≥ρ_0, J_t^s is of contact type and independent of parameters s,t, and H^s_t=h_s(ρ) with ∂ _sh_s'(ρ)≤ 0. Then the function ρ∘ u cannot have local maxima unless it is constant. Hence, any solution of (<ref>) and (<ref>) with asymptotics x_-,x_+ must lie in the region ρ≤max{ρ(x_±),ρ_0}.
§.§ Admissible Hamiltonians and Hamiltonian chords
Recall that the Reeb vector field R of a contact form α on ∂ M is determined by
dα(R,·)=0 and α(R)=1.
Fix an admissible Lagrangian L⊂ M. A Reeb chord of period T is a map c:[0,T]→∂ M satisfying
ċ(t)=R(c(t)) and c(0),c(T)∈∂ L.
The set of all positive periods of Reeb chords is denoted by ℛ(∂ L,θ) which is known to be a closed nowhere dense set in (0,+∞). We define τ∈(0,∞) the minimal positive period of all Reeb chords, i.e. τ=minℛ(∂ L,θ), and set τ=∞ if there is no Reeb chord with ends on ∂ L.
A Hamiltonian chord of a smooth Hamiltonian function H∈ C^∞([0,1]×M) is defined by
ẋ=X_H(x(t)) and x(0),x(1)∈L,
where X_H is the Hamiltonian vector field given by dθ(X_H,·)=-dH_t with H_t:=H(t,·). Clearly, the Hamiltonian chords correspond to the intersections φ^1_H(L)∩L where φ_H^1 is the time one of X_H. Moreover,
for Hamiltonians H=h(ρ) on (0,∞)×∂ M, any Hamiltonian chord x of H in this region has constant H(x(t))=h(ρ) and corresponds to the Reeb chord (t)=x(t/T) of period T=|h'(ρ)|.
We call H∈ C^∞([0,1]×M) an admissible Hamiltonian
if there is constants ρ_0>0 and a∈ such that H has the form
H(t,ρ,x)=μ_Hρ+a for ρ≥ρ_0 on (0,∞)×∂ M,
here μ_H∉ℛ(∂ L,θ) is some non-negative number, which is called the slope of H. Throughout the paper we denote by ℋ the set of admissible Hamiltonians, and ℋ_<τ⊂ℋ the set of admissible Hamiltonians H which are linear for ρ≥ 1 with 0≤μ_H<τ.
§.§ The action and index of Hamiltonian chords
Given an admissible Lagrangian L⊂ M and an admissible Hamiltonian H∈ℋ, we denote by 𝒞(L,H) the set of contractible Hamiltonian chords, meaning that [x]=0 in π_1(M,L). For a chord x∈𝒞(L,H) we define the action of x as
𝒜_L,H(x)=-∫ x^*θ+∫^1_0 H(x(t))dt+k_L(x(1))-k_L(x(0)).
It is easy to verify that the critical points of the action functional 𝒜_L,H are exactly the Hamiltonian chords with ends on L. We denote by Spec(L,H) the set of all actions 𝒜_L,H(x), x∈𝒞(L,H), which is called the action spectrum for (L,H). It is well known that the set Spec(L,H) is a closed nowhere dense subset of ℝ.
For a Hamiltonian H satisfying (<ref>), if the chord x∈ lies on (ρ_0,∞)×∂ M, we have
𝒜_L,H(x)=k_L(x(1))-k_L(x(0))-ρ h'(ρ)+h(ρ).
In particular, if both x(0) and x(1) lie in the neighborhood of L∖ L that k_L vanishes, then the action of x is the y-intercept of the tangent line of the function y=h(ρ) at ρ.
A chord x∈ is said to be non-degenerate if the vector spaces T_x(1)L and dφ_H^1(T_x(0)L) are transverse. We call an admissible Hamiltonian H∈ non-degenerate with respect to L if all chords x∈ are non-degenerate. The set of such Hamiltonians is denoted by ^reg⊂, and we denote _<τ^reg:=_<τ∩^reg.
Associated to each chord x∈ we assign a capping disk v:𝔻→ M such that v(e^iπ t)=x(t),∀ t∈[0,1] and v(e^iπ t)∈L,∀ t∈ [-1,0], where 𝔻={z∈:|z|≤ 1}. We sometimes use x to denote the pair (x,v) for convenience whenever the capping disk v is clear from the context.
By Stokes' theorem one can rewrite the action 𝒜_L,H(x) as
𝒜_L,H(x)=-∫ v^*dθ+∫^1_0 H(x(t))dt
which is independent of the choices of capping disks due to the exactness of θ|_L.
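For instance, if x(t)≡ p is a constant chord at a point p∈ L with dH_t(p)=0 for all t, then ∫ x^*θ=0 and the terms k_L(x(1))-k_L(x(0)) cancel, so 𝒜_L,H(x)=∫^1_0 H(t,p)dt; in particular, constant chords of a Hamiltonian that is negative on M have negative action.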
For each non-degenerate contractible chord x∈, there is a well-defined ℤ/2-grading
which we describe as follows. Following <cit.>, for a fixed Lagrangian V in the Lagrangian Grassmannian ℒ_n for (^2n,∑_idx_i∧ y_i) and a path Λ:[a,b]→ℒ_n, we denote by μ_RS(Λ;V) the Robbin-Salamon Maslov index. Set V_0={0}×^n. We normalize μ_RS by letting
μ_RS({e^2π iktV_0}_t∈[0,1];V_0)=2k
for V_0={0}× in (^2,dx∧ dy) with k∈.
We pick a symplectic trivialization
Φ_(x,v):𝔻×^2n→ v^*TM
such that Φ_(x,v)(-1)V_0=T_x(1)L, where Φ_(x,v)(z)X=Φ_(x,v)(z,X) for (z,X)∈𝔻×^2n.
Let Λ_(x,v):[-1,1]→ℒ_n be a path given by the concatenation
Φ_(x,v)(e^iπ t)Λ_(x,v)(t):={T_v(e^iπ t)L}_t∈[-1,0]♯{dφ_H^t(T_x(0)L)}_t∈[0,1].
We define the Robbin-Salamon index of the pair (x,v) as
μ_RS(x,v):=μ_RS(Λ_(x,v),V_0).
To grade wrapped Floer cohomology, for a non-degenerate contractible chord x with a capping disk v we define the -value index by
μ(x,v)=-μ_RS(x,v)+n/2.
Notice that for two capping disks v_1,v_2 of the chord x, we have μ(x,v_1)-μ(x,v_2)=μ_L(v_1♯v_2)∈ 2 where μ_L is the Maslov index of v_1♯v_2∈π_2(M,L) obtained by gluing v_1 and v_2 along x. If L is orientable, then we have a /2-grading on x∈ by setting
|x|≡μ(x,v) (mod 2)
which is independent of the capping disk v.
§.§ The wrapped Floer complex
Given J∈𝒥_θ and H∈^reg, consider the solutions u:×[0,1]→ M to the Floer equation
∂_s u+J_t(∂_t u-X_H_t(u))=0
with the boundary conditions u(,0), u(,1)∈ L. The energy of a solution u to (<ref>) is defined as
E_J(u)=1/2∫_ℝ∫^1_0(|∂_su|_J^2+|∂_tu-X_H(u)|_J^2)dtds
where |·|_J is the norm with respect to the metric ω(·,J·) and we use the notation E(u) for short whenever the almost complex structure is clear from the context.
Denote by ℳ_H,J(x,y) the space of the above solutions u of finite energy such that
lim_s→-∞u(s,t)=x, lim_s→+∞u(s,t)=y
There is a natural ℝ-translation on ℳ_H,J(x,y) in s-direction, and its quotient space is denoted by ℳ_H,J(x,y). Solutions of (<ref>) can be thought as negative gradient flow lines for in an L^2-metric on . For each u∈ℳ_H,J(x,y) we linearize (<ref>) and obtain a Fredholm operator D_H,J,u in suitable Sobolev spaces. Since H∈ and J∈𝒥_θ, it follows from Lemma <ref> that all u∈ℳ_H,J(x,y) have a uniform C^0-bound in -component of the coordinates [1,∞)×∂ M.
Besides, in our situation that the symplectic form dθ is exact and L⊂M is an exact submanifold, there are no bubbling pseudo-holomorphic disks or spheres in the limits of families of solutions of (<ref>). Hence, there is a dense subspace 𝒥^reg⊂𝒥_θ of almost complex structures such that for each J∈𝒥^reg, D_H,J,u are onto for all u∈ℳ_H,J(x,y) (see <cit.>). In this situation we call (H,J) Floer-regular or regular. As a consequence, the space
ℳ_H,J(x,y), as well as ℳ_H,J(x,y), is a smooth manifold which near u has dimension
_uℳ_H,J(x,y)=μ(x)-μ(y),
where x is any capping disk of the chord x and y=x♯ u is the induced capped chord by gluing x and the strip u along x.
The wrapped Floer complex CW^*(L,H) is the vector space over ℤ/2 generated by chords x∈.
When μ(x)=μ(y♯ u)+1, one can define the Floer differential
d_H,J:CW^*(L,H)→ CW^*(L,H)
by counting isolated points in ℳ_H,J(x,y) mod 2,
d_H,J(y)=∑_x♯_2ℳ_H,J(x,y)· x.
This map has square zero, i.e. d_H,J^2=0, and hence CW^*(L,H) is a cochain complex over the coefficient ℤ/2. The wrapped Floer cohomology for (L,H) is defined to be the quotient space Ker(d_H,J)/Im(d_H,J) which is denoted by HW^*(L;H). It can be shown that the cohomology HW^*(L;H) is independent of J∈^reg up to canonical isomorphisms, see <cit.>, and therefore we suppress it from the notation.
For a∈∪{±∞}, we define
CW^*_(a,+∞)(L,H)={x∈|(x)>a}.
Since the action does not increase along its negative gradient flows, the differential d_H,J increases the action , and hence the vector space CW^*_(a,+∞)(L,H) is a subcomplex of the wrapped Floer complex CW^*(L,H).
For b>a, the cohomology of the quotient
CW^*_(a,b](L,H)=CW^*_(a,+∞)(L,H)/CW^*_(b,+∞)(L,H)
is called the filtered wrapped Floer cohomology of (L,H) with action window (a,b] which is denoted by HW^*_(a,b](L,H).
If a<b<c, the inclusion map CW^*_(b,c](L,H)→ CW^*_(a,c](L,H) induces a map
ι^a,c_b,c:HW^*_(b,c](L,H)⟶ HW^*_(a,c](L,H),
and the quotient map CW^*_(a,c](L,H)→ CW^*_(a,b](L,H) induces a map
π^a,c_a,b:HW^*_(a,c](L,H)⟶ HW^*_(a,b](L,H).
These maps are called action window maps.
In particular, we have the natural maps induced by the inclusion and quotient maps
ι_a: HW^*_(a,∞)(L,H)⟶ HW^*(L,H),
π_a:HW^*(L,H)⟶ HW^*_(-∞,a](L,H).
For an admissible Hamiltonian H∈ℋ (not necessarily non-degenerate), when a,b∈(ℝ∪{±∞})∖Spec(L,H) one can define the wrapped Floer cohomology for the pair (L,H) by setting
HW^*_(a,b](L,H):=HW^*_(a,b](L,H̃),
where H̃∈ℋ^reg is a C^2-small perturbation of H with the same slope as H at infinity. This definition does not depend on the choice of the perturbation H̃, see <cit.>.
§.§ Energy estimates and continuation maps
Let s↦ (H_t^s,J_t^s) be a path of admissible Hamiltonians and almost complex structures which is constant at the ends and satisfies
(H^±∞,J^±∞)=(H^±,J^±)∈^reg×^reg.
Consider the solutions u:×[0,1]→ M to the s-dependent Floer equation
∂_s u+J_t^s(∂_t u-X_H_t^s(u))=0
with the boundary conditions u(,0), u(,1)∈ L. For non-degenerate chords x_±∈𝒞(L,H^±) we denote
ℳ_H^s,J^s(x_-,x_+) the set of finite energy solutions to (<ref>). As before, the spaces ℳ_H^s,J^s(x_-,x_+) are finite dimensional manifolds of local dimension given by (<ref>) provided that the path J^s is generic. There is another way to achieve regularity, i.e. by fixing a path of almost complex structures J^s∈𝒥_θ,s∈, we perturb the homotopy H^s in the function space of homotopies connecting H^- to H^+
such that the linearized operators D_H,J,u for the resulting homotopy H are onto. In this case, we call the pair (H,J) Floer-regular or regular. The energy of any solution to (<ref>) satisfies the identity
E(u)=𝒜_L,H^-(x_-)-𝒜_L,H^+(x_+)+∫_× [0,1](∂_sH^s_t)(u(s,t))dsdt.
For x_+∈𝒞(L,H^+) we define the map
Φ_H^+H^-:(CW^*(L,H^+),d_H^+,J^+)⟶(CW^*(L,H^-),d_H^-,J^-),
Φ_H^+H^-(x_+)=∑_x_-∈𝒞(L,H^-)♯__2ℳ^0_H^s,J^s(x_-,x_+)x_-
where ℳ^0_H^s,J^s(x_-,x_+) denotes the zero dimensional component of ℳ_H^s,J^s(x_-,x_+). From the energy identity (<ref>) one can see that if
∫_× [0,1](∂_sH^s_t)(u(s,t))dsdt≤ 0
then Φ_H^+H^- preserves the action filtration, and hence is a chain map which induces a map
Φ_H^+H^-:HW^*_(a,b](L,H^+)⟶ HW^*_(a,b](L,H^-).
We call Φ_H^+H^- the continuation map from HW^*_(a,b](L,H^+) to HW^*_(a,b](L,H^-).
In particular, if H^+≤ H^- and the homotopy H^s is monotone, i.e. ∂_sH^s_t≤ 0, then the map
Φ_H^+H^- is well-defined and we call it a monotone homomorphism. It can be shown that monotone homomorphism are independent of the choices of monotone homotopies (H^s,J^s) used to define them, see <cit.>. Moreover, these maps are natural, meaning that Φ_HH=Id and
Φ_GH∘Φ_FG=Φ_FH
for admissible Hamiltonians F≤ G≤ H.
For a∈∪{-∞}
the inclusion and quotient maps commute with the monotone homomorphisms
HW^*_(a,∞)(L,H)
[r]^ ι_a[d]^Φ_HK
HW^*(L,H)
[d]^Φ_HK[r]^π_a
HW^*_(-∞,a](L,H)
[d]^Φ_HK
HW^*_(a,∞)(L,K)
[r]^ ι_a
HW^*(L,K)
[r]^π_a
HW^*_(-∞,a](L,K)
Using Lemma <ref> one can prove the following.
Assume that H,K∈ are two admissible Hamiltonians with slopes μ_H and μ_K at infinity. If
μ_H=μ_K, then the continuation map Φ_HK is isomorphic. When (μ_H,μ_K)∩ℛ(∂ L,θ)=∅, we still have the isomorphism Φ_HK:HW^*(L,H)≅→ HW^*(L,K).
The proof of Lemma <ref> is parallel to <cit.>, see also <cit.>.
The following lemma is similar to the ones in the Hamiltonian Floer homology case, see for instance <cit.>.
If (K_s)_s∈[0,1] is a monotone homotopy from H to K such that
a,b∉(L,K_s) for every s∈[0,1], then Φ_HK is an isomorphism.
Let X⊂ M be a compact subset of M and set
ℋ^X={H∈ℋ|H|_[0,1]× X<0 and H satisfies (<ref>) for some ρ_0≥ 1}.
Now we introduce the partial-order relation ≼ on the set ℋ^X by
H≼ K⟺ H≤ K.
Then the monotone homomorphisms Φ_HK yield the partially
ordered system (HW,χ) of _2-vector space over ^X, that is, HW assigns to every H∈^X the _2-vector space HW^*_(a,b](L,H) and χ assigns to any two Hamiltonians H,K∈^X with H≼ K the monotone continuation map Φ_HK. Since (^X,≼) is a directed system, we have
For a<b the wrapped Floer cohomology relative to X for an admissible Lagrangian L⊂ (M,dθ) is defined by
HW^*_(a,b](L,X)=lim_⟶
H∈^XHW^*_(a,b](L,H).
When X=M, the cohomology defined above is the usual wrapped Floer cohomology which is denoted by
HW^*_(a,b](L). If H∈ is negative on M, we have the natural homomorphism
σ_H:HW^*_(a,b](L,H)⟶ HW^*_(a,b](L).
Clearly, for any H,K∈^M with H≼ K we have the commutative diagram:
HW^*_(a,b](L,H)
[dr]_σ_H[rr]^Φ_HK
HW^*_(a,b](L,K)
[dl]^σ_K
HM^*_(a,b](L)
Hence, taking the direct limits in (<ref>) yields
HW^*_(a,∞)(L,H)
[r]^ ι_a[d]^σ_H
HW^*(L,H)
[d]^σ_H[r]^π_a
HW^*_(-∞,a](L,H)
[d]^σ_H
HW^*_(a,∞)(L)
[r]^ ι_a^L
HW^*(L)
[r]^π_a^L
HW^*_(-∞,a](L)
Since there exists a cofinal family (H_k)_k∈ of Hamiltonians in ^M such that the Hamiltonian chords of every H_k lying in M are constant ones (whose actions are negative obviously), for any a>0 it holds that
HW^*(L):=HW^*_(-∞,∞)(L)=HW^*_(-∞,a](L).
We emphasize here that the wrapped Floer cohomology HW^*(L) defined in this paper is equivalent to the definition of the wrapped Floer cohomology in <cit.> where admissible Hamiltonians are required to be positive on M. This is because adding a constant to any Hamiltonian function does not change the Hamiltonian flow nor Floer equations, but only shifts the action of a Hamiltonian chord by the constant.
§.§ The product structure of wrapped Floer cohomology
Following <cit.> we outline the construction of the product structure of wrapped Floer homology HW^*(L).
Given regular pairs (H^i,J^i),i=0,1,2 in the definition of HW^*(L,H^i), we shall define a product
*_F:HW^*(L,H^0)⊗ HW^*(L,H^1)⟶ HW^*(L,H^2).
Consider a 2-dimensional disk 𝒟 with three boundary points z^i∈∂𝒟,i=0,1,2 removed. Let j be a complex structure on 𝒟. Near every boundary puncture we equip a strip-like end which can be biholomorphically mapped onto the semi-infinite strips
Z_±=_±×[0,1]
with the standard complex structure, i.e. j∂_s=∂_t. More precisely, for i=0,1 we consider a positive strip-like end near z_i which is a holomorphic embedding
κ_i:_+×[0,1]⟶𝒟
satisfying
κ_i^-1(∂𝒟)=_+×{0,1} and lim_s→+∞κ_i(s,·)=z_i.
Near z_2 we equip a negative strip-like end in a similar way. Moreover, we require that these strip-like ends (the images of κ_i) are pairwise disjoint.
To define the moduli space of the product structure, we need to choose a compatible perturbation data. Let α∈Ω^1(𝒟) be a 1-form whose restriction to the boundary ∂𝒟 is zero and which satisfies κ_i^*α=dt and dα≤ 0 (such form α exists, see <cit.>). Let J^𝒟 be a family of almost complex structures parameterized by 𝒟 which satisfies J^𝒟_κ_i(s,t)=J^i_t,i=0,1,2, and being of contact type at the cylindrical ends of M. Let H^𝒟 be a family of Hamiltonians (H^𝒟_z)_z∈𝒟 parameterized by 𝒟 so that for every z∈𝒟, H_z is an admissible Hamiltonian with H_z^𝒟=μρ+a outside M for two constants μ>0 and a≤0, and that H^𝒟_κ_i(s,t)=H^i_t,i=0,1,2. Besides, for each x∈M we impose the non-positive condition on the 2-form β(x):=d(H^𝒟_z(x)α):
β(x)(V,jV)≤ 0
for every tangent vector V along 𝒟. Such 2-form β exists by a suitable choice of H^𝒟, see for instance <cit.>.
Consider the inhomogeneous ∂-equation
u:𝒟⟶M, u(∂𝒟)⊂L,
lim_s→±∞u(κ_i(s,·))=z_i, i=0,1,2,
(du_z-X_H^𝒟(u(z))⊗α_z)∘ j-J^𝒟_z(u(z))∘(du_z-X_H^𝒟(u(z))⊗α_z)=0.
where X_H^𝒟 is the Hamiltonian vector field generated by H^𝒟_z and each z_i is a chord of the Hamiltonian H_i with respect to L.
The energy of each solution u to (<ref>) is defined by
E(u)=1/2∫_𝒟|du-X⊗α|^2 Vol_𝒟
where |·| is a norm induced by the symplectic form dλ, the complex structures J^𝒟 and j, and Vol_𝒟 is the volume form on 𝒟.
The following no escape lemma follows from <cit.>, see also <cit.> or <cit.>, we include
the proof for completeness.
Under the above assumptions, u(𝒟) lie in M for all solutions u to (<ref>).
It suffices to prove that if the solution u lands outside M then u(Σ)⊂∂ M where Σ:=u^-1(M∖ M^∘).
We notice that Σ is a compact surface with corners which divide the boundary ∂Σ into two pieces: the piece landing in the boundary ∂ M and the one landing in L. Write ∂Σ=∂_b Σ∪∂_l Σ according to these two pieces.
We denote by j the restriction of the complex structure from 𝒟 to Σ.
By Stokes' theorem and the non-positive condition on β, we have the energy estimate:
E(u|_Σ) = 1/2∫_Σ|du-X_H^𝒟⊗α|^2Vol_Σ
= ∫_Σ u^*dθ-u^*(dH^𝒟)∧α
= ∫_Σ d(u^*θ-(u^*H^𝒟)α)+β(u)
≤ ∫_Σ d(u^*θ-(u^*H^𝒟)α)
= ∫_∂Σ u^*θ-(u^*H^𝒟)α
Since |_∂𝒟=0, for any connected component ϖ of ∂_lΣ we have (u^*H^𝒟)|_ϖ=0. And since u^*θ|_L=u^*dk_L, by Stokes' theorem we get ∫_ϖ u^*θ=0 for circles ϖ, while for intervals ϖ, ∫_ϖ u^*θ=k_L(p)-k_L(q) for corners p,q∈∂_bΣ∩∂_lΣ and hence this integral also vanishes by the assumption that the pull back θ|_L∖ L=0.
It therefore follows from (<ref>) and the contact condition dρ∘ J=-θ outside M that
E(u|_Σ) ≤ ∫_∂_bΣ u^*θ-(u^*H^𝒟)α
≤ ∫_∂_bΣθ∘(du-X_H^𝒟(u)⊗α)-aα
= ∫_∂_bΣθ∘(du-X_H^𝒟(u)⊗α)-a∫_Σ
dα
≤ ∫_∂_bΣ (θ∘ J)∘(du-X_H^𝒟(u)⊗α)∘(-j)
= ∫_∂_bΣ dρ∘(du-X_H^𝒟(u)⊗α)∘(-j)
= ∫_∂_bΣ dρ∘ du∘(-j)
where we have used θ(X_H^𝒟)+a=H^𝒟 outside M in the second inequality.
Let ξ be a tangent vector to ∂_bΣ which gives rise to the boundary orientation. Then jξ points into Σ, and thus du(jξ) does not point outwards along ∂ M, so dρ∘ du(jξ)≥ 0. Integrating that it follows from (<ref>) that E(u|_Σ)=0. This implies that each connected component of u|_Σ is contained in a single orbit of X_H^𝒟. If ∂_b Σ≠∅, then since X_H^𝒟 is tangent to ∂ M, this orbit must be contained in ∂ M.
Now we define the moduli space ℳ(H^𝒟,J^𝒟,α;z_0,z_1,z_2) to be the space of the solutions u to (<ref>). We call the perturbation datum (H^𝒟,J^𝒟) regular if for all choices of chords z_i and solutions u∈ℳ(H^𝒟,J^𝒟,α;z_0,z_1,z_2) the linearized operators D_u are surjective. It can be shown that for a residual set of the compatible perturbation data (H^𝒟,J^𝒟) is regular. In this case, ℳ(H^𝒟,J^𝒟,α;z_0,z_1,z_2) is a smooth manifold, and its dimension equals μ(z_2)-μ(z_1)-μ(z_0) where the capping disk z_2 is given by gluing capping disks z_0 and z_1 with u along the chords z_0 and z_1. When this dimension is zero, this manifold is
compact and hence consists of a finite number of points. Therefore we have a chain map
*:CW^*(L,H_0)⊗ CW^*(L,H_1)⟶ CW^*(L,H_2),
z_0*z_1=∑_μ(z_2)=μ(z_0)+μ(z_1)♯__2ℳ(H^𝒟,J^𝒟,α;z_0,z_1,z_2)z_2.
This can be proved by the usual transversality and gluing arguments, combined with a C^0-estimate for Floer trajectories (see Lemma <ref>). Moreover, by the standard cobordism arguments, one can show that the induced map
on homology is independent of the conformal structure on 𝒟 and of the choice of
perturbation datum (H^𝒟,J^𝒟). So we have a well-defined bilinear
map on homology
*_F:HW^*(L,H_0)⊗ HW^*(L,H_1)⟶ HW^*(L,H_2).
In a local holomorphic coordinate s+it for (𝒟,j), we have
1/2|du-X⊗α| Vol_𝒟=u^*dθ-u^*dH^𝒟_z∧ dα=u^*(θ-H^𝒟dα)+β(u(z))
from which, applying the boundary condition u(∂𝒟)⊂L and Stokes' theorem, we get
0≤ E(u) = 𝒜_L,H^2(z_2)-𝒜_L,H^0(z_0)-𝒜_L,H^1(z_1)+∫_𝒟β(u(z))
≤ 𝒜_L,H^2(z_2)-𝒜_L,H^0(z_0)-𝒜_L,H^1(z_1)
where in the second equality we have used the non-positive condition (<ref>).
So the product *_F induces a map on the filtered wrapped Floer homology
*_F:HW^*_(a_0,b_0](L,H)⊗ HW^*_(a_1,b_1](L,K)⟶ HW^*_(a_2,b_2](L,H* K).
for any a_i<b_i,i=0,1 with a_2=a_0+a_1 and b_2=max{a_0+b_1,a_1+b_0}.
For our purpose we consider the open subset
Σ=×[-1,1]∖[0,∞)×{0}
and equip it with the conformal structure such that one can map Σ holomorphically onto the interior of 𝒟. Besides, we require that under this biholomorphism the negative (positive) ends of Σ correspond to the negative (positive) punctures
of 𝒟, see Figure <ref>.
Given H,K∈^reg which are linear outside M and satisfy the following condition
∂^rH_t/∂ t^r|_t=1=∂^r K_t/∂ t^r|_t=0, for all r∈,
we define the concatenation H*K as
(H*K)_t=
H_t+1, t∈[-1,0],
K_t t∈[0,1].
Now we choose the following perturbation Hamiltonian
H^Σ=
H_t+1, (s,t)∈ [0,∞)×(-1,0),
K_t (s,t)∈ [0,∞)×(0,1),
(H*K)_t (s,t)∈ (-∞,0]×(0,1),
and the family of complex structures (J^Σ_(s,t))_(s,t)∈×[-1,1] which is of contact type outside M and
J^Σ_(s,t)=J_t+1^0, (s,t)∈ [1,∞)×(-1,0),
J_t^1 (s,t)∈ [1,∞)×(0,1),
J_t^2 (s,t)∈ (-∞,-1]×(0,1)
where J^i∈𝒥_θ,i=0,1,2 are complex structures such that HW^*(L,H), HW^*(L,K) and HW^*(L,H*K) are well-defined respectively. The aforementioned biholomorphism ϕ:Σ→𝒟^∘ gives rise to the corresponding perturbation datum on the interior 𝒟^∘ and hence one on the whole disk by continuity which we denoted by (H^𝒟,J^𝒟). In this case, if we take α=ϕ_*dt then the non-positive condition (<ref>) holds obviously.
Moreover, for generic choice of J^𝒟 the datum (H^𝒟,J^𝒟) is regular. Hence, by (<ref>) we obtain the product
*_F:HW_(a,∞)^*(L,H)⊗ HW_(b,∞)^*(L,K)⟶ HW_(a+b,∞)^*(L,H*K).
for any a,b∈∪{-∞}.
§ SPECTRAL INVARIANTS FROM WRAPPED FLOER THEORY
§.§ Morse cohomology
Fix a Riemannian metric g on L.
A function f∈ C^∞(L) is said to be adapted to L if the following conditions hold:
(1) no critical points of f occur in L∖ L≅∂ L×(1,∞);
(2) the gradient vector field ∇ f of f with respect to g points outward along ∂ L;
(3) f|_L is a C^2-small Morse function and (f,g) is a Morse-Smale pair.
Fix an adapted function f and denote by Crit_k(f) the set of critical points of f with Morse index k. We define the free ℤ/2-modules CM^k(L,f,g)=⟨Crit_k(f)⟩_ℤ/2 and a differential δ by counting isolated negative gradient flow lines of ∇ f, i.e.
δ:CM^k(L,f,g)⟶ CM^k+1(L,f,g),
δ q=∑_m_f(p)=k+1♯_2ℳ_p,q(f,g)· p,
where the moduli space ℳ_p,q(f,g) is given by
ℳ_p,q(f,g):={γ∈ C^∞(ℝ,L)|γ̇=-∇ f(γ(t)); γ(-∞)=p, γ(∞)=q}/ℝ.
The cohomology of the complex (CM^k(L,f,g),δ) is called the Morse cohomology for the pair (f,g) which we denote by HM^*(L,f,g). It can be shown that HM^*(L,f,g) is isomorphic to the singular cohomology of H^*(L) over /2,
and it does not depend on the Morse-Smale pair (f,g), see for instance <cit.>
§.§ Lagrangian PSS morphism
The Piunikhin-Salamon-Schwarz (PSS) homomorphism was firstly introduced to compare Hamiltonian Floer homology with singular homology, see <cit.>. After that, it was adapted to the Lagrangian setting for the case of cotangent bundles by Katić and Milinković <cit.>, and later in more generality by Barraud and Cornea <cit.>, Albers <cit.>, Leclercq and Zapolsky <cit.>.
The Lagrangian PSS morphism is useful for us in the present paper since it respects the product structures between Morse cohomology and wrapped Floer cohomology, and thus plays an important role in the properties of spectral invariants, see Proposition <ref>.
Let f be a function adapted to L and H∈ a non-degenerate admissible Hamiltonian that is linear outside M.
Following <cit.>, we now briefly recall the construction of the Piunikhin-Salamon-Schwarz homomorphism from HM^*(L,f,g) to HW^*(L,H) by counting isolated spiked Floer strips, see Figure <ref>.
Let χ:→ [0,1] be a smooth cutoff function defined by χ(s)=1 if s≤ 0, χ(s)=0 if s≥ 1 and χ'(s)≤ 0 for all s.
Let p∈ L be a critical point of f and let x∈𝒞(L,H). Let (J^s)_s∈⊂_θ be a family of complex structures so that J^s_t is constant outside s∈[0,1] and is independent of s,t for s≥ 1. We
consider the moduli space (p,x;H,J,χ,f,g) of the pairs of maps
z:[0,∞)⟶ L, u:× [0,1]⟶M
which satisfy
dz/dt=-∇ f(z(t)),
∂_s u+J_t(∂_t u-χ(s)X_H(u))=0 with E(u)<∞,
z(∞)=p, u(-∞,t)=x(t),
u(s,0), u(s,1), u(0,t)∈ L,
z(0)= u(+∞).
Since both the critical points of f and Hamiltonian chords of H lie in the compact set M, and out of it H is linear and ∂_s(χ(s)H)≤ 0, and -∇ f points inward along ∂ L, by Lemma <ref> every pair (z,u) in the above space is entirely contained in M.
Up to generic choices of (f,g,J), the space (p,x;H,J,χ,f,g) is a smooth manifold with dimension
μ(x)-m_f(p), where m_f(p) denotes the Morse index of f at p, x is the capping disk of the chord x given by the half disk u[The Floer strip u is holomorphic for s≥ 1 because of the cut-off function χ. The finite energy condition for u implies u can be extended continuously to the point u(+∞) in L, meaning that topologically u is a half disk.].
We define the morphism ψ^H_f:CM^k(L,f,g)→ CW^k(L,H) on generators by
ψ^H_f(p)=∑_μ(x)=m_f(p)♯__2(p,x;J,H,χ,f,g)· x.
and extend this map by linearity over _2. This extended map is a chain map and induces the PSS-type homomorphism
ψ_f^H:HM^k(L,f,g)⟶ HW^k(L,H).
Since H^*(L)≅ HM^*(L,f,g), and for any H∈ℋ by definition HW^*(L,H)=HW^*(L,H̃) with H̃∈ℋ^reg a regular perturbation of H, for simplicity we denote the corresponding homomorphism from H^*(L) to HW^*(L,H) by ψ_pss^H.
If H,K∈ℋ with slopes τ_H,τ_K such that τ_H≤τ_K, then the continuation map Φ_HK:HW^*(L,H)⟶ HW^*(L,K) is compatible with the PSS morphisms, i.e.
Φ_HK∘ψ_pss^H=ψ_pss^K.
Moreover, by the standard compactness and gluing arguments <cit.>, one can show that the PSS morphisms intertwine the cup product on H^*(L) with the product *_F, i.e.
ψ_pss^G(α∪β)=ψ_pss^H(α)*_Fψ_pss^K(β),  α,β∈ H^*(L),
where H,K,G∈ℋ are admissible Hamiltonians such that the product as in (<ref>), with target HW^*(L,G), is well-defined.
Moreover, if H∈_<τ^reg is a C^2-small Morse function in M, then all intersections φ_H(L)∩L lie in M∖∂ M, and φ_H(L) lies in a Weinstein neighborhood that is symplectically diffeomorphic to T^*L. Under this identification φ_H(L) is the graph of a function ζ∈ C^∞(L), and
the generators of the complex CW^*(L,H) correspond to the critical points of ζ on L and the Floer strips u for H correspond to the negative gradient flow lines of ζ by u→ u(s,0). In this case HW^*(L,H) computes H^*(L) and ψ^H_f is indeed an isomorphism, see <cit.>. It follows from Lemma <ref> and Lemma <ref> that for any H∈_<τ^reg we have the PSS-type isomorphism
ψ^H_f:HM^*(L,f,g)≅⟶ HW^*(L,H).
§.§ The wrapped Floer cohomology for compactly supported Hamiltonians
Recall that the set _c(M) consists of Hamiltonians H∈ C^∞([0,1]× M) which satisfy supp(dH_t)⊂ [0,1]× M∖∂ M. Without loss of generality we can assume that H_t=H(t,·) vanishes near t=0,1. Indeed, one can replace H by H'(t,·)=H(χ(t),·) where χ:[0,1]→[0,1] is a monotone map with χ(t)=0 near t=0 and χ(t)=1 near t=1, then the Hamiltonian flow of H' is a reparametrization of that of H, and in particular φ_H'^1=φ_H^1.
For each H∈ℋ_c(M) we take an admissible Hamiltonian H̃∈ℋ_<τ^reg such that H̃|_M is a C^2-small perturbation of H and then define the wrapped Floer cohomology of H as
HW^*(L,H):=HW^*(L,H̃).
By Lemma <ref> the wrapped Floer cohomology HW^*(L,H̃) only depends on the slope of H̃ at infinity, so the above definition of wrapped Floer cohomology for (L,H) is well-posed, i.e. different choices of Hamiltonians H̃ yield isomorphic cohomologies HW^*(L,H).
In the above definition of the wrapped Floer cohomology for a Hamiltonian H on M with supp(dH_t)⊂ [0,1]×(M∖∂ M), we specify the region on which the auxiliary Hamiltonian H̃ is linear. Without this restriction there is no problem in defining HW^*(L,H) by using an admissible Hamiltonian with slope less than τ at infinity, but this point will be crucial for us to show the continuity of the Lagrangian spectral invariant (to be defined later) for H with respect to the Hofer metric, see Lemma <ref>.
§.§ The wrapped Floer capacity
Let ℛ^-(∂ L,θ) be the symmetric set of ℛ(∂ L,θ) with respect to zero
ℛ^-(∂ L,θ):={-a|a∈ℛ(∂ L,θ)}.
Note that HW^*_(a,b](L) changes only when the action window crosses ℛ^-(∂ L,θ), meaning that
HW^*_(a,b](L)≅ HW^*_(c,d](L) if (a,b]∩ℛ^-(∂ L,θ)=(c,d]∩ℛ^-(∂ L,θ).
For δ∈(0,τ), we have (-δ,∞)∩ℛ^-(∂ L,θ)=∅, and
ψ^f_pss:H^*(L)≅⟶ HW^*(L,f)≅ HW^*_(-δ,∞)(L).
where f∈_<τ^reg.
The wrapped Floer capacity for L is defined by
c_HW(L)=inf{a>0|ι_-a^L∘ψ^f_pss(1_L)=0}
where 1_L∈ H^0(L) is the fundamental class. And by convention we set
c_HW(L)=∞ if ι_-a^L∘ψ^f_pss(1_L)≠0 for all a>0.
Since HW^*(L) admits a ring structure and the PSS-map preserves the ring structures of H^*(L) and HW^*(L,f), and hence so does ι_-a^L∘ψ^f_pss, the capacity c_HW(L) is finite if and only if HW^*(L)=0.
§.§ Spectral invariants from wrapped Floer cohomology
We pick a non-degenerate admissible Hamiltonian H∈_<τ^reg. For a∈, we consider the following short exact sequence
0⟶ CW^*_(a,∞)(L,H)ι_a⟶ CW^*(L,H)π_a⟶ CW^*_(-∞,a](L,H)⟶ 0.
This induces the long exact sequence
⟶ HW^*_(a,∞)(L,H)ι_a⟶ HW^*(L,H)π_a⟶ HW^*_(-∞,a](L,H)⟶
Let 0≠α∈ H^*(L). Its associated Lagrangian spectral invariant for H is defined by
ℓ(H,α):=sup{a∈|π_a∘ψ_pss^H(α)=0}.
Clearly, the above definition is equivalent to the following definition
ℓ(H,α):=sup{a∈|ψ_pss^H(α)∈(ι_a)}.
If H is a non-degenerate admissible Hamiltonian with slope μ_H∉ℛ(∂ L,θ), one can define the corresponding Lagrangian spectral invariant for H in a similar way; we will not pursue this in this paper.
The following lemma implies that the spectral invariant ℓ depends continuously on the restrictions to M of Hamiltonians in ℋ_<τ^reg, with respect to the C^0-topology.
For H,K∈_<τ^reg and non-zero class α∈ H^*(L),
∫^1_0min_M(H_t-K_t)dt≤ℓ(H,α)-ℓ(K,α)≤∫^1_0max_M(H_t-K_t)dt.
For an arbitrary ϵ>0 we will prove the following inequality
ℓ(H,α)-ℓ(K,α)≥∫^1_0min_M(H_t-K_t)dt-ϵ
Once this inequality is proved, by exchanging the role of H and K we get the other inequality.
To prove (<ref>) we proceed in three steps:
Step 1.
Assume that H^0,H^1∈_<τ^reg are the same on M_r for some r∈(0,1) and satisfy
H^i(t,ρ,x)=h^i(ρ) on [0,1]×[r,∞)×∂ M, i=0,1
for two smooth functions h^0,h^1∈ C^∞((0,∞),) with 0≤(h^0)'(ρ)≤(h^1)'(ρ)<τ. We claim that ℓ(H^0,α)=ℓ(H^1,α).
Let s↦β(s) be a smooth cutoff function on such that β(s)=0 for s≤ 0, β(s)=1 for s≥ 1, and β'(s)≥ 0.
Consider the homotopy
H^s=β(s)H^0+(1-β(s))H^1
which satisfies
∂^2 H^s/∂ s∂ρ(t,ρ,x)=β'(s)((h^0)'(ρ)-(h^1)'(ρ))≤ 0 on [0,1]×[r,∞)×∂ M.
It follows from Lemma <ref> that solutions to the Floer equation (<ref>) connecting x_-∈𝒞(L,H^1) to x_+∈𝒞(L,H^0) can not escape from M_r. Since the continuation map Φ_H^0H^1 is independent of the choices of homotopies (H^s,J^s) used to define it and since Hamiltonian chords of H^0 and H^1 are the same and lie in M_r⊂ M, one can choose generic families of almost complex structures J^s such that the continuation homomorphism Φ_H^0H^1 induced by (H^s,J^s) is the identity map on the chain level of the cohomology HW^*(L,H^0). Therefore, we have ℓ(H^0,α)=ℓ(H^1,α).
Step 2. When H^0,H^1∈_<τ^reg have the same slope, we will prove that (<ref>) holds, i.e.
ℓ(H^1,α)-ℓ(H^0,α)≥∫^1_0min_M(H^1_t-H^0_t)dt-ϵ
To this end we only need to prove that for each ϵ>0 one can find a regular homotopy (H^s,J^s) connecting (H^0,J^0) to (H^1,J^1) such that for all a∈ the corresponding continuation morphism
Φ_H^0H^1:(CW^*(L,H^0),d_H^0,J^0)⟶(CW^*(L,H^1),d_H^1,J^1)
maps CW^*_(a,∞)(L,H^0) into CW^*_(a+b,∞)(L,H^1) for b=∫^1_0min_M(H^1_t-H^0_t)dt-ϵ. Indeed, such a continuation morphism satisfies Φ_H^0H^1∘ψ_pss^H^0=ψ_pss^H^1 and descends to the action quotients, so that
π_a+b∘Φ_H^0H^1=Φ_H^0H^1∘π_a on HW^*(L,H^0).
Hence, if π_a∘ψ_pss^H^0(α)=0, then also π_a+b∘ψ_pss^H^1(α)=0, which implies (<ref>).
As before we first consider the special homotopy of Hamiltonians
H^s=β(s)H^0+(1-β(s))H^1.
Although in this case H^s in general is not regular, one can pick a regular homotopy of Floer data (K,J) such that K^s,H^s are the same on [0,1]×M∖ M for all s, K^s is independent of s out of the interval [0,1], and
max_(s,t,z)∈ [0,1]×[0,1]× M|∂_s K^s_t(z)-∂_s H^s_t(z)|≤ϵ,
see Proposition <ref> for a proof of this fact. Here we may require that J^s≡ J∈_θ which is of contact type and time-independent outside M.
Let u be a solution to the Floer equation (<ref>) for K^s connecting x∈(L,H^1) to y∈(L,H^0). Since H^0,H^1 are linear functions in -coordinate outside M with the same slope, it follows from Lemma <ref> that u is contained in M. By the energy identity (<ref>), we have
𝒜_L,H^0(y)-𝒜_L,H^1(x)=-E(u)+∫_× [0,1](∂_sK^s_t)(u(s,t))dsdt
≤ ∫_× [0,1](∂_sH^s_t)(u(s,t))dsdt+
∫_× [0,1](∂_s K^s_t-∂_s H^s_t)(u(s,t))dsdt
≤ ∫^1_0∫^∞_-∞β'(s)max_z∈ M(H^0_t(z)-H^1_t(z))dsdt+ϵ
= ∫^1_0max_z∈ M(H^0_t(z)-H^1_t(z))dt+ϵ
which implies
𝒜_L,H^1(x)≥𝒜_L,H^0(y)+∫^1_0min_z∈ M(H^1_t(z)-H^0_t(z))-ϵ.
Step 3. If H,K∈_<τ^reg, we modify H near ∂ M into
a regular Hamiltonian H'∈_<τ^reg which satisfies
* H,H' are the same on M_r for some r∈(0,1) and satisfy (<ref>);
* H', K have the same slope on [1,∞)×∂ M
* and the estimate
max_(t,z)∈[0,1]× M|H'_t(z)-H_t(z)|<ϵ/2.
We refer to Figure <ref> for a schematic graph of these Hamiltonians.
By step 1,
ℓ(H,α)=ℓ(H',α).
By step 2,
∫^1_0min_M(H'_t-K_t)dt-ϵ/2≤ℓ(H',α)-ℓ(K,α).
Combining (<ref>), (<ref>) and (<ref>) yields (<ref>). This completes the proof.
Thanks to Lemma <ref>, one can now extend ℓ continuously to a map ℓ:ℋ_<τ× H^*(L)∖{0}→ℝ, because ℋ_<τ^reg is C^∞-dense in ℋ_<τ, and then restrict ℓ to a map ℋ_c(M)× H^*(L)∖{0}→ℝ which is still denoted by ℓ for simplicity. Explicitly, for H∈ℋ_c(M) we take a sequence of Hamiltonians (H^k)_k∈ℕ⊂ℋ_<τ^reg such that H^k|_M→ H in the C^∞-topology as k→∞, and define
ℓ(H,α)=lim_k→∞ℓ(H^k,α).
Notice that there is another way to define the spectral invariant for H∈ℋ_c(M). Indeed, we have already defined the wrapped Floer cohomology HW^*(L,H) for all H∈ℋ_c(M), see (<ref>), and for a,b∉Spec(L,H) the filtered wrapped Floer cohomology HW^*_(a,b](L,H) can be defined in a similar vein. Moreover, as in the regular case we have the long exact sequence
⟶ HW^*_(a,∞)(L,H)ι_a⟶ HW^*(L,H)π_a⟶ HW^*_(-∞,a](L,H)⟶
Hence, we may define
ℓ(H,α)=sup{a∈ℝ∖Spec(L,H)|π_a∘ψ_pss^H(α)=0}.
It is not hard to verify that these two definitions of the spectral invariants for H∈ℋ_c(M) coincide.
Clearly, for all H,K∈ℋ_c(M) and non-zero class α∈ H^*(L) we have
∫^1_0min_M(H_t-K_t)dt≤ℓ(H,α)-ℓ(K,α)≤∫^1_0max_M(H_t-K_t)dt.
Since the wrapped Floer cohomology HW^*(L,H) for H∈ℋ_c(M) corresponds precisely to the Lagrangian Floer cohomology HF^*(φ^1_H(L),L), and since the generators of HW^*(L,H) for H∈ℋ_<τ are not wrapped chords, as in <cit.> we call the map
ℓ:ℋ_c(M)× H^*(L)∖{0}→ℝ
the Lagrangian spectral invariant for the pair (L,H). By convention we set ℓ(H,0)=∞ for the zero class 0∈ H^*(L).
Recall that for H,K∈ℋ_c(M), the composite map φ_H^t∘φ_K^t and the inverse map (φ_H^t)^-1 are generated by
H_t♯ K_t=H_t+K_t∘ (φ_H^t)^-1,   H̄_t=-H_t∘φ_H^t
respectively. A smooth Hamiltonian H_t on a symplectic manifold (M,ω) is called normalized if ∫_MH_tω^n=0 for all t.
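For later use we record an elementary identity, which follows directly from the formulas above: since H̄_t=-H_t∘φ_H^t and K̄_t=-K_t∘φ_K^t,
(H̄♯K̄)_t=H̄_t+K̄_t∘(φ_H̄^t)^-1=-H_t∘φ_H^t-K_t∘φ_K^t∘φ_H^t=-(K♯ H)_t∘φ_K♯ H^t,
so H̄♯K̄ is exactly the Hamiltonian generating the inverse path ((φ_K♯ H^t)^-1)_t, i.e. the inverse of K♯ H in the sense above.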
In the following we list some basic properties of ℓ that can be proved by adapting the methods used by <cit.> with minor changes.
The map ℓ: ℋ_c(M)× H^*(L)∖{0}→ℝ has the following properties:
(1) Continuity: ℓ(H,α) is Lipschitz in H in the C^0-topology.
(2) Spectrality: ℓ(H,α)∈Spec(L,H).
(3) Normalization: If c is a function of time then
ℓ(H+c,α)=ℓ(H,α)+∫^1_0c(t)dt.
We define ℓ(0,α)=0 for all α∈ H^*(L)∖{0}.
(4) Monotonicity: ℓ(H,α)≥ℓ(K,α) for any 0≠α∈ H^*(L) provided that H≥ K.
(5) Homotopy invariance: ℓ(H,α)=ℓ(K,α) when φ_H=φ_K in the universal covering of the group of Hamiltonian diffeomorphisms with compact supports in M∖∂ M, where H and K are normalized.
(6) Anti-triangle inequality: ℓ(H♯ K,α∪β)≥ℓ(H,α)+ℓ(K,β).
(7) Non-positivity: ℓ(H,1_L)+ℓ(H̄,1_L)≤0.
(8) Lagrangian control: For all H∈ℋ_c(M) we have
∫^1_0min_LH_tdt≤ℓ(H,α)≤∫^1_0max_LH_tdt.
(9) Symplectic invariance: ℓ(H,ϕ^*(α))=ℓ'(H∘ϕ^-1,α) for any symplectomorphism ϕ with compact support in M∖∂ M satisfying L'=ϕ(L) and any 0≠α∈ H^*(L'), where
ℓ':ℋ_c(M)× H^*(L')∖{0}→ℝ
is the corresponding spectral invariant.
Property (1) follows from (<ref>) immediately.
To prove property (2), arguing by contradiction we assume that ℓ_α:=ℓ(H,α)∉Spec(L,H). Since Spec(L,H) is a nowhere dense and closed subset of ℝ, there exists δ>0 such that
(ℓ_α-δ,ℓ_α+δ)∩Spec(L,H)=∅.
Hence, the quotient map
HW^*_(-∞,ℓ_α+δ](L,H)⟶ HW^*_(-∞,ℓ_α-δ](L,H)
is an isomorphism. This implies π_ℓ_α+δ∘ψ_pss^H(α)=0, since by definition of ℓ(H,α) we have π_ℓ_α-δ∘ψ_pss^H(α)=0. Therefore, by definition ℓ(H,α)≥ℓ_α+δ, which is absurd.
Property (3) follows from properties (1) and (2). Indeed, for the homotopy H^s=H+sc,s∈[0,1] the Hamiltonian chords are the same for all s, hence
Spec(L,H+sc)={a+s∫^1_0c(t)dt | a∈Spec(L,H)}.
Then by property (2) we have ℓ(H+sc,α)-s∫^1_0c(t)dt∈Spec(L,H). Since the action spectrum Spec(L,H) is a closed nowhere dense set in ℝ and ℓ(H+sc,α) is continuous with respect to s by property (1), ℓ(H+sc,α)-s∫^1_0c(t)dt must be constant. In particular, for s=0,1 we have
ℓ(H+c,α)-∫^1_0c(t)dt=ℓ(H,α)
which implies property (3). Property (4) follows from the compatibility of the monotone continuation map Φ_KH with the PSS morphisms and the quotient maps: since Φ_KH∘ψ_pss^K=ψ_pss^H and π_a∘Φ_KH=Φ_KH∘π_a (using H≥ K), the vanishing of π_a∘ψ_pss^K(α) implies the vanishing of π_a∘ψ_pss^H(α), and hence ℓ(H,α)≥ℓ(K,α).
Property (5) can be deduced from the following lemma:
If there exists a homotopy (H^s)_s∈[0,1] between two normalized Hamiltonians H and K such that φ_H^s^1=φ_H^1 and H^s∈ℋ_c(M) are normalized for all s, then Spec(L,H)=Spec(L,K).
We now show property (5) assuming Lemma <ref>. Let (H^s)_s∈[0,1] be a homotopy as in Lemma <ref>. By property (2) and Lemma <ref> we have
ℓ(H^s,α)∈Spec(L,H^s)=Spec(L,H) ∀ s∈[0,1].
It follows from property (1) and the fact that Spec(L,H) is a closed nowhere dense subset of ℝ that ℓ(H^s,α) is independent of s. So we have ℓ(H,α)=ℓ(K,α).
Proof of Lemma <ref>. Since φ_H^s^1=φ_H^1 for all s∈[0,1], if p∈(φ_H^1)^-1(L)∩ L, then x^s(t)=φ_H^s^t(p) is a Hamiltonian chord of H^s for each s, ie x^s∈(𝒜_L,H^s). Hence,
𝒜_L,H^1(x^1)-𝒜_L,H^0(x^0) = ∫^1_0d/ds𝒜_L,H^s(x^s)ds
= ∫^1_0d𝒜_L,H^s(x^s)[∂_sx^s]ds+∫^1_0∫^1_0∂_sH^s_t(x^s(t))dsdt
= ∫^1_0∫^1_0∂_sH^s_t(x^s(t))dsdt≜ I.
Since φ_H^1=φ_K^1, the map
Crit(𝒜_L,H) ⟶Crit(𝒜_L,K)
φ_H^t(p) ⟼φ_K^t(p) ∀ p∈(φ_H^1)^-1(L)∩ L
is bijective. To finish the proof it suffices to show the last expression in (<ref>) vanishes.
Let λ:[0,1]→ [0,1] be a smooth function such that λ(t)=0 if 0≤ t≤ 1/8 and λ(t)=1 if t≥ 1/4.
Consider the homotopy G^s given by the concatenation of H^s and H with respect to the time variable
G^s(t,x)=
λ'(t)H^s(λ(t),x), t∈[0,1/2],
-λ'(1-t)H(λ(1-t),x), t∈[1/2,1].
For each s∈[0,1], G^s generates the flow
φ_G^s^t=φ_H^s^λ(t), t∈[0,1/2],
φ_H^λ(1-t), t∈[1/2,1],
which is a loop in _c(M,dθ). Thus, fixing s∈[0,1], for every p∈ M, y^s_p(t)=φ_G^s^t(p) is a critical point of the functional
𝒜_G^s(x)=∫^1_0G^s(x(t))dt-∫ x^*θ.
Clearly, 𝒜_G^0(y^0_p)=0. So we have
𝒜_G^1(y^1_p) = 𝒜_G^1(y^1_p)-𝒜_G^0(y^0_p)
= ∫^1_0d/ds𝒜_G^s(y^s_p(t))ds
= ∫^1_0d𝒜_G^s(y^s_p)[∂_s y^s_p(t)]dt+∫^1_0∫^1_0∂_sG^s_t(y^s_p(t))dsdt
= ∫^1_0∫^1_0∂_sH^s_t(φ_H^s^t(p))dsdt
where in the last equality we have used y^s_p∈(𝒜_G^s). Since the functional 𝒜_G^1 must be constant along every connected critical submanifold and since the map p↦ y_p^1 is a smooth embedding from M to (𝒜_G^1), the last term in (<ref>) is independent of p∈ M.
Now we show the last expression in (<ref>) vanishes. Indeed, putting ω=dθ we have
I·∫_Mω^n = ∫^1_0∫^1_0∂_sH^s_t(φ_H^s^t(p))dsdt·∫_Mω^n
= ∫^1_0∫^1_0(∫_M∂_s H^s_t(φ_H^s^t(p))ω^n_p)dsdt
= ∫^1_0∫^1_0(∫_M∂_s H^s_t(p)ω^n_p)dsdt
= ∫^1_0dt∫_M(K_t(p)-H_t(p))ω_p^n=0
where in the third equality we used the fact that φ_H^s^t preserve the symplectic form ω for all s,t, and in the last equality we used the condition that H,K are normalized Hamiltonians. This completes the proof of Lemma <ref>.
To prove property (6), we notice that (<ref>) implies the compatibility
ψ_pss^H*K(α∪β)=ψ_pss^H(α)*_Fψ_pss^K(β),  α,β∈ H^*(L).
It follows from (<ref>)
that ℓ(H* K,α∪β)≥ℓ(H,α)+ℓ(K,β), where H_t,K_t are assumed to vanish near t=0,1 after reparametrizing in time. Since φ^t_H♯ K and φ_H*K^t are homotopic relative to the ends at t=0,1 and ∫_M(H♯ K)_t(dθ)^n=∫_M(H*K)_t(dθ)^n for all t, it follows from property (5) that the desired inequality holds.
Property (7) follows from properties (3) and (6). Indeed,
ℓ(H,1_L)+ℓ(H̄,1_L)≤ℓ(H♯H̄,1_L)=ℓ(0,1_L)=0.
To prove property (8), we first consider the case where the restriction of H to L is a function of time, i.e. H_t|_L=c(t). Let H^s=sH, s∈[0,1]. Then H^s|_L=sc(t), hence the Hamiltonian chords with ends in L are constant ones in L. So we have
Spec(L,H^s)={s∫^1_0c(t)dt}, ∀ s∈[0,1].
By property (2), ℓ(H,α)=∫^1_0c(t)dt. For a general H∈ℋ_c(M), we set
c(t)=max_x∈ LH(t,x)
and pick a function K∈ℋ_c(M) such that K_t|_L=c(t) and H≤ K. Then ℓ(K,α)=∫^1_0c(t)dt.
It follows from property (4) that
ℓ(H,α)≤ℓ(K,α)=∫^1_0max_x∈ LH(t,x)dt.
The proof of the other direction of two inequalities in property (8) is similar.
Before giving a sketch of the proof of property (9), we notice that whenever ϕ is a symplectomorphism with compact support in M∖∂ M, L'=ϕ(L) is an admissible Lagrangian submanifold of (M,dθ) provided that it is so for L. Set H'=H∘ϕ^-1. By property (1), we only need to prove property (9) for the case that H is a restriction of a regular Hamiltonian H∈^reg_<τ to M.
It is easy to see that ϕ induces a canonical isomorphism between the wrapped Floer cohomologies
ϕ^*:HW^*(L',H')⟶ HW^*(L,H).
Since ϕ preserves the actions of the critical points of the respective action functionals, the standard cobordism argument yields
the compatibility of the induced map ϕ^* with the PSS morphisms and with the action quotient maps, namely
ϕ^*∘ψ_pss^H'=ψ_pss^H∘ϕ^* and ϕ^*∘π_a=π_a∘ϕ^* on HW^*(L',H'),
which implies property (9).
§.§ The spectral metric
Recall that for any symplectic manifold (M,ω) Hofer's bi-invariant metric on the Hamiltonian diffeomorphism group Ham_c(M,ω) is defined by
d(φ,id)=inf{‖H‖ | φ=φ_H^1},   d(φ,ψ)=d(ψ^-1∘φ,id),
where
‖H‖=∫^1_0(sup_x∈ MH(t,x)-inf_x∈ MH(t,x))dt.
For our purpose, for a Lagrangian L⊆ (M,ω) we also use the following norm
‖H‖_L=∫^1_0(sup_x∈ LH(t,x)-inf_x∈ LH(t,x))dt.
Let L be a Lagrangian submanifold of (M,ω). Recall that
ℒ(L)={φ(L)|φ∈Ham_c(M,ω)}, and that a pseudo-metric on ℒ(L) is given by
δ_H(L_1,L_2)=inf{‖H‖ | φ^1_H(L_1)=L_2, H∈ C_c^∞([0,1]× M)}.
Following <cit.>, for an admissible Lagrangian submanifold L^n⊂ (M^2n,ω), we define
γ(L,H)=-ℓ(H,1_L)-ℓ(H̄,1_L), ∀ H∈ℋ_c(M).
By property (7) in Proposition <ref>, γ(L,H) is a non-negative function of H∈ℋ_c(M).
This gives rise to a function γ:ℒ(L)×ℒ(L)→ [0,∞), which is called the Lagrangian spectral pseudo-metric, by setting
γ(L_1,L_2)=inf_H∈ℋ_c(M){γ(L,H)|φ_H^1(L_1)=L_2}, ∀ L_1,L_2∈ℒ(L).
Let f:L→ be a smooth C^2-small function with compact support in L∖∂ L, and let H∈_c(M) be an autonomous Hamiltonian which coincides with the lift of f to a Weinstein neighborhood of L in M. More precisely, identifying T^*L with some Weinstein neighborhood of L in M we let H=f∘π on a co-disk bundle D_RT^*L⊂ T^*L of radius R>0 containing L^f:={(q,∂_q f(a))∈ T^*L|q∈ L}, and H=0 outside T^*_R+1L in M, where π:T^*L→ L is the natural projection map.
Then γ(L,H)=‖H‖_L.
It suffices to prove the above assertion for a C^2-small Morse function f:L→ℝ adapted to L and the corresponding lift H∈ℋ_τ^reg. Clearly, the PSS-map
ψ_pss^H:H^*(L)≅ HM^*(L,f,g)⟶ HW^*(L,H)
is a chain-level isomorphism sending critical points x∈Crit(f) to the corresponding constant chords x∈𝒞(L,H).
Here g is a metric on L such that (f,g) is a Morse-Smale pair. Under the above map, 1_L∈ H^*(L) corresponds to
[∑^k_i=1x_i]∈ HW^*(L,H)
where x_i are critical points of f=H|_L with Morse index zero. So we have
ℓ(H,1_L)=min{H(x_i)|i=1,…,k}=min_x∈ L H(x).
Similarly,
ℓ(H̄,1_L)=min_x∈ LH̄(x)=-max_x∈ L H(x).
Therefore, γ(L,H)=‖H‖_L and hence the continuity property of ℓ concludes the desired result.
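In particular, if the Morse function f has a unique minimum point q and a unique maximum point p on L, the formulas above read ℓ(H,1_L)=f(q)=min_L f and ℓ(H̄,1_L)=-f(p)=-max_L f, so that
γ(L,H)=-ℓ(H,1_L)-ℓ(H̄,1_L)=max_L f-min_L f=‖H‖_L.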
The pseudo-metric γ on ℒ(L) satisfies
(a) γ(L_1,L_2)=0 if and only if L_1=L_2;
(b) γ(L_1,L_2)=γ(L_2,L_1);
(c) γ(L_1,L_2)≤γ(L_1,L_3)+γ(L_2,L_3);
(d) γ(φ(L_1),φ(L_2))=γ(L_1,L_2) for all φ∈Ham_c(M,dθ);
(e) γ(L_1,L_2)≤δ_H(L_1,L_2);
(f) γ(L_1,L_2)=γ'(ϕ(L_1),ϕ(L_2)) for all ϕ∈Symp_c(M∖∂ M,dθ), where γ':ℒ(ϕ(L))×ℒ(ϕ(L))→ [0,∞) is the corresponding pseudo-metric for ϕ(L).
For proving property (a) in Theorem <ref> (which is Theorem <ref> of the introduction), we need a Lagrangian version of Schwarz's result <cit.>.
Let K∈_c(M) and let U⊆ M∖∂ M be a subset such that ∪_t∈[0,1] supp(K_t)⊆ U. Suppose that the Hamiltonian time-one map of H∈_c(M) displaces L from U, i.e. φ_H(L)∩ U=∅.
Then we have γ(L,K)≤ 2γ(L,H).
We shall prove Theorem <ref> assuming Lemma <ref>.
By definition, property (a) is equivalent to the following statement:
γ(L,φ_H(L))>0⟺ L≠φ_H(L) for H∈ℋ_c(M).
Assume that φ_H(L)≠ L; then one can find an open subset U of M∖∂ M such that U∩ L≠∅ and φ_H(L)∩ U=∅. Pick a C^2-small function f∈ C^∞(L) which is supported in U∩ L, and let
K∈ℋ_c(M) be the corresponding lift of f, so that ‖K‖_L>0. It follows from Lemma <ref> and Lemma <ref> that 2γ(L,φ_H(L))≥γ(L,K)=‖K‖_L>0. This completes the proof of property (a).
The triangle inequality (c) follows from property (6) in Proposition <ref>. The symmetry property (b) and the Hamiltonian invariance property (d) follow from the definition of γ. The inequality (e) is implied by property (8) in Proposition <ref> (or by (<ref>)). The symplectic invariance property (f) follows from property (9) in Proposition <ref>.
By assumption φ_K^t(U)=U and φ_K^t=id outside U. Since φ_H displaces L from U,
we have
φ_K∘φ_H(L)∩ L=φ_H(L)∩ L⊆ M∖ U.
As a consequence, there is a bijection between the chords of H and those of K♯ H, sending x∈𝒞(L,H) to y∈𝒞(L,K♯ H) with y(t)=φ_K^t(x(t)). Let x̂ be a capping disk of x.
We put u(s,t)=φ_K^st(x(t)) where (s,t)∈ [0,1]×[0,1]. Then x̂♯ u is a capping disk of y, denoted by ŷ. Using (<ref>), a direct calculation shows that
𝒜_L,K♯ H(ŷ)=𝒜_L,H(x̂), see <cit.> for the completely analogous computation.
Hence, Spec(L,K♯ H)=Spec(L,H). Now we pick a parameter ϵ≥0 and consider ϵ K in place of K.
By a similar argument as above we obtain Spec(L,ϵ K♯ H)=Spec(L,H). It follows from the spectrality property of ℓ that
ℓ(ϵ K♯ H,1_L)∈Spec(L,ϵ K♯ H).
Since Spec(L,H) is a nowhere dense and closed subset of ℝ, the continuity property of ℓ implies that
ℓ(ϵ K♯ H,1_L) does not depend on ϵ. In particular, we get
ℓ(K♯ H,1_L)=ℓ(H,1_L).
Note that K̄ is also supported in U, so it holds that
ℓ(H̄♯K̄,1_L)=ℓ(H̄,1_L).
Indeed, we have
φ_H̄∘φ_K̄(L)∩ L=φ_H̄(L)∩ L
and any φ_H̄(q) in the above intersection has q∉ U. Moreover, there is a bijective map between 𝒞(L,H̄) and 𝒞(L,H̄♯K̄) sending x∈𝒞(L,H̄) to y∈𝒞(L,H̄♯K̄) with y(t)=φ_H̄^t∘φ_K̄^t(x(0)).
Then an analogous argument as above yields (<ref>).
So we have
γ(L,K♯ H) = -ℓ(K♯ H,1_L)-ℓ(H̄♯K̄,1_L)
= -ℓ(H,1_L)-ℓ(H̄,1_L)
= γ(L,H),
where the first equality uses the definition of γ together with the fact that H̄♯K̄ generates the inverse path ((φ_K♯ H^t)^-1)_t.
It follows from the anti-triangle property of ℓ and the definition of γ that
γ(L,K)=γ(L,K♯ H♯H̄)≤γ(L,K♯ H)+γ(L,H̄)=γ(L,K♯ H)+γ(L,H)=2γ(L,H).
§ GANOR-TANNY BARRICADES ON LIOUVILLE DOMAINS
In <cit.> Ganor and Tanny constructed special perturbations of a Hamiltonian homotopy supported in a contact incompressible boundary domain (CIB) which, together with an almost complex structure, has barricades in CIB such that Floer trajectories cannot enter or exit CIB.
Following closely their construction, we introduce barricades in the above sense on Liouville domains with admissible Lagrangian submanifolds.
In what follows, we say that H∈ C^∞(ℝ×[0,1]×M) is stationary for |s|>R>0 if ∂_sH is supported in [-R,R]×[0,1]× M and H^s_t=μρ+b outside M, where H^s_t:=H(s,t,·) and μ≥0, b∈ℝ are two constants independent of s∈ℝ and t∈[0,1].
Let (M,dθ) be a Liouville domain and L an admissible Lagrangian submanifold. Let W_0=M_r' and W_1=M_r for 0<r'<r<1, chosen so that L intersects ∂ W_i transversely and so that, writing θ|_L=dk_L, the primitive k_L vanishes near ∂ W_i∩ L, i=0,1. Let (H^s)_s∈ℝ⊂ℋ_<τ be a homotopy, stationary for sufficiently large |s|, from H^- to H^+, and let J=(J_t)_t∈[0,1] be a family of complex structures with J∈𝒥_θ. The pair (H,J) is said to have a barricade for L on Ω=W_1∖ int(W_0) if the Hamiltonian chords of H^± do not intersect the boundaries ∂ W_0,∂ W_1, and any solution u:ℝ× [0,1]→M to (<ref>) satisfying (<ref>) with asymptotic chords x_±∈𝒞(L,H^±) at ±∞ satisfies
I. if x_-⊂ W_0 then im(u)⊂ W_0.
II. if x_+⊂ W_1 then im(u)⊂ W_1.
Forbidden and allowed Floer strips are illustrated in Figure <ref>. We say that a pair (H,J) of a Hamiltonian H∈_<τ and a family of complex structures (J_t)_t∈[0,1]∈𝒥_θ has a barricade for L on Ω if the pair of the constant homotopy H^s≡ H and J=(J_t)_t∈[0,1] has a barricade in the above sense.
Let (H^s)_s∈ be a homotopy in _<τ, stationary for sufficiently large |s|, from H^- to H^+ such that
* H vanishes on ℝ×[0,1]×(M_r∖ M_r') for two numbers r,r'∈(0,1) with r>r',
* H^s_t=f^s(ρ) on M∖ M_r (with ∂_s∂_ρ f^s(ρ)= 0 outside M),
* for every s∈ℝ, f^s is C^2-small and ∂_sf^s≤ 0 on [r,1]×∂ M.
Then there exists a C^∞-small perturbation h of H and a family of almost complex structures (J_t)_t∈[0,1] such that the pairs (h,J) and (h^±,J) are Floer-regular and have a barricade on M_r∖ int(M_r').
The proof of Theorem <ref> is an adaptation of the construction of barricades on CIB given by Ganor and Tanny in <cit.>.
The main tools in the proof involve C^0-estimates for Floer strips, transversality and Gromov-Floer compactness, which nowadays are standard techniques in Floer theory. The crucial insights from Ganor and Tanny are two aspects: 1. One can control the size of the support of the perturbations of a homotopy H with Floer-regular ends H^± with respect to J∈_θ to achieve transversality, see Proposition <ref>; 2. Barricades are robust under those C^∞-small perturbations that have support in I×[0,1]× M where I⊂ is a compact interval, see Proposition <ref>.
We postpone the proof of Theorem <ref> to Section <ref> since an amount of the analytical details of the construction of barricades, which may be of independent interest, are indispensable for preparing the complete proof.
§ THE PROOF OF THE MAIN THEOREM
The basic idea of the proof of Theorem <ref> is to find a cofinal sequence of radial and convex Hamiltonians with which to compute the filtered wrapped Floer cohomology for an admissible L. This technique has been used repeatedly in <cit.>, etc.
§.§ The proof of Theorem <ref>
Note that if HW^*(L) vanishes then c_HW(L) is finite.
In what follows we will show that for any H∈ℋ_c(M) it holds that ℓ(H,1_L)≥ -c_HW(L), and hence γ(L,H)≤ 2c_HW(L). Consequently, the diameter of the metric space (ℒ(L),γ) is bounded above by 2c_HW(L).
We proceed in three steps:
Step 1. We construct a sequence of cofinal Hamiltonians to compute
the filtered wrapped Floer cohomology HW_(-a,∞)^*(L) with a∈(0,∞)∖ℛ(∂ L,θ).
Given μ∈ (0,∞), we choose parameters ϵ,δ∈(0,1) and denote by ℱ_μ the set of admissible functions F∈ℋ which satisfy
* F=-ϵ on M_1-δ;
* F(ρ,x)=h(ρ) for some smooth convex function h on (0,∞)×∂ M with h'(ρ)≥ 0;
* F(ρ,x)=μ(ρ-1)-δ on the cylindrical end (1,∞)×∂ M.
Here we require ϵ,δ to be sufficiently small. Consequently, for every F∈ℱ_μ, the actions of the non-constant Hamiltonian chords of (L,F) approximate arbitrarily closely the negatives of the periods of the corresponding Reeb chords.
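This can be seen from the action of a chord of a radial Hamiltonian. If F=h(ρ) near a level {ρ}×∂ M and x is a non-constant chord lying on this level, then ẋ=h'(ρ)R and hence ∫ x^*θ=ρ h'(ρ); since the contribution of the primitive k_L can be arranged to vanish on the cylindrical part (cf. the assumption on θ|_L made below), one gets
𝒜_L,F(x)=∫_0^1F(x(t))dt-∫ x^*θ=h(ρ)-ρ h'(ρ),
which for F∈ℱ_μ and ρ close to 1 is close to -h'(ρ), i.e. to minus the period of the underlying Reeb chord.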
For a∈(0,∞)∖ℛ(∂ L,θ), we will show that for every F∈ℱ_μ the natural homomorphism
σ_F:HW^*_(-a,∞)(L,F)⟶ HW^*_(-a,∞)(L)
is an isomorphism.
We first consider a sequence of increasing numbers {μ_k} such that μ_1=a and
μ_k→∞ as k→∞, and pick a sequence of functions {F_k} with F_1:=F such that F_k∈ℱ_μ_k and F_k≤ F_k+1 for all k∈. Moreover, we may require that {F_k} is an upward exhausting sequence of functions in the set
ℋ^a={H∈ℋ | -a∉Spec(L,H)}
as illustrated in Figure <ref>.
Indeed, since ℛ(∂ L,θ) is a nowhere dense set in (0,∞), there exist two positive numbers μ_-<a<μ_+ with (μ_-,μ_+)∩ℛ(∂ L,θ)=∅. Associated to the function sequence {F_k}, we have two sequences of numbers
A_k:=(a-ϵ_k)/(1-δ_k),   B_k:=aμ_k/(μ_k+δ_k),
where ϵ_k→0, δ_k→0 as k→∞. Clearly, {A_k},{B_k} are two sequences which converge to a. Without loss of generality, we may assume that A_k,B_k∈(μ_-,μ_+) and that θ vanishes on L∩ (M∖ M_1-δ_k) for all k∈ℕ. By our choice of F_k, the non-constant chords of each F_k lie in the region between ρ=1-δ_k and ρ=1+δ_k/μ_k. Let x be one such chord. If the action of x is -a, then by (<ref>) the line tangent to the graph of F_k at ρ=ρ(x) must pass through the point (0,-a) and have slope between A_k and B_k, in contradiction to (μ_-,μ_+)∩ℛ(∂ L,θ)=∅. Therefore, F_k∈ℋ^a for all k.
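To see where the numbers A_k and B_k come from, note that by the radial action formula recalled above a chord x of F_k on the level {ρ}×∂ M has action F_k(ρ)-ρ F_k'(ρ); requiring this to equal -a forces
F_k'(ρ)=(a+F_k(ρ))/ρ.
Evaluating the right-hand side at the two endpoints ρ=1-δ_k (where F_k=-ϵ_k) and ρ=1+δ_k/μ_k (where F_k=0) of the region containing the non-constant chords yields precisely the values A_k and B_k.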
Next we consider a sequence of monotone homotopies (H^s)_s∈[0,1] connecting F_k+1 to F_k such that -a∉Spec(L,H^s) for every s∈[0,1], see Figure <ref>.
By Lemma <ref>, the monotone homomorphisms
Φ_F_kF_k+1:HW^*_(-a,∞)(L,F_k)⟶ HW^*_(-a,∞)(L,F_k+1)
are isomorphisms, and hence we have the isomorphism
σ_F_1:HW^*_(-a,∞)(L,F_1)⟶ HW^*_(-a,∞)(L)
which is given by the natural homomorphism for the direct limit.
Step 2.
Given c∈(c_HW(L),∞)∖ℛ(∂ L,θ), for any F∈ℱ_c we will show that the PSS-map ψ_pss^F is trivial, i.e. ψ_pss^F(1_L)=0.
Fix η∈(0,min{τ,c}) and pick a function f∈ℱ_η such that f≤ F. Then the PSS morphisms, the continuation map Φ_fF and the natural maps σ_f,σ_F from Step 1 fit into a commutative diagram, namely
ψ_pss^F=Φ_fF∘ψ_pss^f and σ_F∘Φ_fF=ι∘σ_f,
where ι:HW^*_(-η,∞)(L)⟶ HW^*_(-c,∞)(L) is induced by the action window maps. By Step 1 the natural homomorphisms σ_f and σ_F are isomorphisms.
Since c>c_HW(L), by the definition of c_HW(L) we have ι∘σ_f∘ψ_pss^f(1_L)=0.
Hence σ_F∘ψ_pss^F(1_L)=0 and, since σ_F is an isomorphism, ψ_pss^F(1_L)=0.
Step 3. Let H∈_c(M).
For every positive number c>c_HW(L) with -c∉ℛ^-(∂ L,θ)∪Spec(L,H), we will show that ℓ(H,1_L)≥ -c.
This gives rise to ℓ(H,1_L)≥-c_HW(L).
From now on we fix F∈ℱ_c as in Step 2.
Assume that H is compactly supported in M_r for some r∈(0,1).
We pick H_i∈ℋ, i=1,2 such that
* H_1=H_2=H on M_r;
* H_i=h_i(ρ) for two smooth convex functions h_i on (r,∞)×∂ M and 0≤ h_1'≤ h_2';
* h_1,h_2 are linear on (1,∞)×∂ M with h_1'=η and h_2'=c, where η is fixed as in Step 2.
By definition we have ℓ(H_1,1_L)=ℓ(H,1_L). So it suffices to show that π_-c∘ψ_pss^H_1(1_L)=0. To this end we consider the following
commutative diagram formed by the PSS morphisms ψ_pss^H_1, ψ_pss^F, ψ_pss^H_2, the continuation maps Φ_FH_2 and Φ_H_1H_2, and the quotient maps π_-c; in particular
ψ_pss^H_2=Φ_FH_2∘ψ_pss^F and π_-c∘ψ_pss^H_2=Φ_H_1H_2∘π_-c∘ψ_pss^H_1.
Here the continuation map Φ_FH_2 is an isomorphism because F and H_2 have the same slope at infinity, and the monotone homomorphism Φ_H_1H_2 on the action quotients is an isomorphism since there exists a homotopy H_s between H_1 and H_2 such that -c∉Spec(L,H_s) for all s∈[0,1].
Therefore, by Step 2 the desired inequality follows from the above relations.
§.§ The proof of Theorem <ref>
The key to proving Theorem <ref> is a “barricade" argument for Floer trajectories due to Ganor and Tanny <cit.>, which yields the following result.
Let (M^2n,dθ) be a Weinstein domain and L^n⊂ M an admissible connected Lagrangian submanifold. For any Hamiltonian H∈ C^∞([0,1]× M) with support in [0,1]× int(M) it holds that ℓ(H,1_L)≤ 0.
We now complete the proof of Theorem <ref> assuming Proposition <ref>. The strategy of the proof is to construct a family of compactly supported Hamiltonians H on M that have sufficiently large oscillations to force -ℓ(H,1_L) to be large enough, similar to the computation of the homological BPS capacity in <cit.> or <cit.>.
By our assumption that θ|_L=dk_L and k_L=0 on a neighborhood of ∂ L, we can modify the Liouville one-form such that θ|_L=0 as follows. Indeed, by the connectedness of L
we can first extend k_L to a smooth function on a neighborhood of L and then to a compactly supported smooth function f:M→ℝ, and add -df to θ. With respect to this new one-form θ, L is still an exact Lagrangian but with θ|_L=0. Note that this modification does not change the symplectic form, and hence does not change the actions of chords of any Hamiltonian with ends in L. Besides, it only changes
θ in the interior of M, not near the boundary ∂ M, and therefore it does not affect the Reeb vector field. So, with respect to this new form, the corresponding spectral invariant is unchanged, and this is what we are ultimately interested in. In the following proof we assume that θ|_L=dk_L with k_L≡ 0 on L.
Fix ρ_0∈(0,1) and let δ∈ (0,ρ_0). Pick a sequence of numbers (a_k) such that a_k∈ (0,∞)∖ℛ(∂ L,θ) and a_k→∞ as k→∞. For each k∈ℕ, we take a piecewise linear curve c_k. More precisely, we begin with a horizontal line segment with starting point (0,-a_k(ρ_0-δ)). Upon reaching the point (δ,-a_k(ρ_0-δ)), we follow the line with slope a_k until meeting the ρ-axis, then follow the ρ-axis to the right until we arrive at (1,0), and finally we go along the line with slope η∈(0,τ). We smooth out the corners of each curve c_k, and hence obtain a sequence of functions H_k∈ℋ_<τ whose graphs are these smooth curves as illustrated in Figure <ref>.
The chords of H_k are divided into four classes according to the regions in which they lie. The only non-constant 1-periodic chords of H_k arise near ρ=δ and ρ=ρ_0. Writing H_k=h_k(ρ) for the radial profile, the actions of the non-constant chords are h_k(ρ)-ρ h_k'(ρ). Write η_k for the distance between a_k and ℛ(∂ L,θ). Then η_k>0 since the Reeb periods form a discrete subset of [0,∞). The non-constant chords of H_k near ρ=δ, ρ_0 have actions in
[-a_kρ_0+η_kδ, -τδ-a_k(ρ_0-δ)] and [-ρ_0(a_k-η_k),-ρ_0τ]
respectively. The constant chords of H_k have actions
close to -a_k(ρ_0-δ) in the region M_δ and to 0 outside M_ρ_0.
For sufficiently small δ>0 there is a positive number ϵ_k∈(0,1) such that
a_kδ<ϵ_k<η_kρ_0. Fixing such δ we can separate the action values of the 1-periodic chords lying in M_δ from those outside M_δ. More precisely, the action values of the chords lying in the two regions M_δ and M∖ M_δ belong to (-∞,-a_kρ_0+ϵ_k) and (-a_kρ_0+ϵ_k,∞) respectively.
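Let us spell out this separation. A constant chord in M_δ has action close to -a_k(ρ_0-δ)=-a_kρ_0+a_kδ, which is smaller than -a_kρ_0+ϵ_k because a_kδ<ϵ_k, and by the intervals displayed above the non-constant chords near ρ=δ have action at most -τδ-a_k(ρ_0-δ)<-a_kρ_0+ϵ_k. On the other hand, the non-constant chords near ρ=ρ_0 have action at least -ρ_0(a_k-η_k)=-a_kρ_0+η_kρ_0>-a_kρ_0+ϵ_k because ϵ_k<η_kρ_0, while the constant chords outside M_ρ_0 have action close to 0>-a_kρ_0+ϵ_k.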
Now we deform H_k by the monotone homotopy as indicated in Figure <ref> to a function F_k which is convex outside M_δ. The graph of this new function is obtained by following H_k until we arrive at ρ=ρ_0 and keep going linearly with slope a_k. Since the vertical axis intercepts of the lines tangent to the graphs at points where non-constant chords occur during the homotopy are larger than -a_kρ_0+ϵ_k, by Lemma <ref> we obtain the monotone isomorphism
Φ_H_kF_k:HW^*_(-∞,-a_kρ_0+ϵ_k](L,H_k)≅⟶HW^*_(-∞,-a_kρ_0+ϵ_k](L,F_k).
Pick c>0. Since by our construction all chords of F_k have action values less than the negative number -a_kρ_0+ϵ_k, by Lemma <ref> again the constant homotopy F^s_k=F_k for all s∈[0,1] gives rise to the isomorphism
Φ_F_kF_k: HW^*_(-∞,c](L,F_k)≅⟶ HW^*_(-∞,-a_kρ_0+ϵ_k](L,F_k).
For simplicity we set A_k=-a_kρ_0+ϵ_k. Then the maps introduced above fit into a commutative diagram relating HW^*(L,H_k), HW^*(L,F_k), HW^*(L) and their action quotients; explicitly,
π_A_k∘Φ_H_kF_k=Φ_H_kF_k∘π_A_k,   Φ_F_kF_k^-1∘π_A_k=π_c on HW^*(L,F_k),   σ_F_k∘π_c=π_c^L∘σ_F_k,
where the maps Φ_H_kF_k and Φ_F_kF_k^-1 between the action quotients are the isomorphisms from (<ref>) and (<ref>), and π_c^L:HW^*(L)→ HW^*_(-∞,c](L) denotes the quotient map.
By (<ref>), the first two identities hold because the monotone homomorphisms commute with the quotient maps π_A_k and π_c; the last identity holds due to (<ref>). By (<ref>), we have
σ_H_k=σ_F_k∘Φ_H_kF_k:HW^*(L,H_k)⟶ HW^*_(-∞,c](L).
Since σ_H_k respects the ring structures on HW^*(L,H_k) and HW^*(L), we deduce from our assumption HW^*(L)≠ 0 that
σ_H_k(ψ_pss^H_k(1_L))≠0. It follows from the above diagram that
π_A_k(ψ_pss^H_k(1_L))≠0
which implies ℓ(H_k,1_L)≤ A_k=-a_kρ_0+ϵ_k. By Proposition <ref> we have ℓ(H̄_k,1_L)≤ 0. So we get
γ(L,H_k)≥ a_kρ_0-ϵ_k ≥ a_kρ_0-1.
Letting k→∞ we obtain the desired result.
Without loss of generality we may assume that supp(H)⊂ [0,1]× M_r' for some r'∈(0,1). Pick K∈_<τ such that K=H on M_r and its restriction on M∖ M_r is a C^2-small radial function for some r∈ (r',1). We also pick a time independent Hamiltonian F∈_<τ with F|_M∖ M=K|_M∖ M whose restriction on M is a C^2-small radial function and vanishes on M_r. Let H^s be a linear homotopy from H^-=F to H^+=K as in (<ref>). It follows from Theorem <ref> that there exists a C^∞-small perturbation h of H and a family of almost complex structures (J_t)_t∈[0,1] such that the pairs (h,J) and (h^±,J) are Floer-regular and have a barricade on M_r∖ int(M_r') for r close enough to r'. By the barricade construction, given ϵ>0, one can further choose h^-∈_<τ such that h^-|_M is a time independent C^2-small function which is an extension of the lift of a C^2-small Morse function f∈ C^∞(L) on some Weinstein neighborhood of L to M, and h^+∈_<τ with
‖h^+-H^+‖_C^2([0,1]× M)≤ϵ such that h^+|_M∖ M_r is also an extension of the lift of some C^2-small Morse function defined on L∩ (M∖ M_r) to M∖ M_r.
Moreover, we may assume that h^- has a unique local (and global) minimum point q contained in L∩ (M∖ M_r). We refer to Figure <ref> for a schematic graph of these Hamiltonians.
Note that we have a chain-level isomorphism between the Morse complex CM^*(L,f,g) and the wrapped Floer complex CW^*(L,h^-) via the PSS-map ψ_f^h^- by sending critical points of f to the corresponding constant chords of h^-. So the point q represents the fundamental class
[q]=1_L∈ H^*(L)≅ HW^*(L,h^-).
By our construction h^+,h^- have the same slope outside M, it follows from Lemma <ref> that
the continuation map Φ_h^+h^-:CW^*(L,h^+)→ CW^*(L,h^-) associated to h induces an isomorphism on cohomology. Therefore, there exists a cycle in CW^*(L,h^+) representing Φ_h^+h^-^-1([q]) which is mapped by Φ_h^+h^- to
the constant chord q. Since the pair (h,J) has a barricade for L on M_r∖ int(M_r'), the Floer strip u of this continuation map starting at x_-=q∈ L∩(M∖ M_r) must end at some chords x_+ of h^+ outside M_r, and hence the cycle Φ_h^+h^-^-1(q) equals to the sum ∑_ip_i of some critical points p_i of h^+ on L∩ (M∖ M_r). Since h^+ is C^∞-close to H^+=K and the values of K are close to zero in the region M∖ M_r, the action of every constant chord p_i is close enough to zero. It follows from the continuity property of the spectral invariant ℓ and Φ_h^+h^-∘Ψ_pss^h^+(1_L)=Ψ_pss^h^-(1_L) (see Lemma <ref>) that
ℓ(H,1_L)≤ℓ(K,1_L)+ϵ≤ℓ(h^+,1_L)+2ϵ≤𝒜_L,h^+(Φ_h^+h^-^-1(q))+2ϵ≤ 3ϵ.
Since ϵ>0 can be arbitrarily small, we conclude the desired inequality.
§ THE PROOF OF THEOREM <REF>
§.§ Constructing Ganor-Tanny barricades
We consider a pair (H,J) of a Hamiltonian homotopy (H^s)_s∈ and a family of almost complex structures (J_t)_t∈[0,1]. Fix μ∈(0,τ) (where τ=minℛ(∂ L,θ)). We say that the pair (H,J) has a cylindrical bump of slope μ on Ω=W_1∖ int(W_0) if
1. H=0 on × [0,1]×∂Ω and ∂_s H^s≤ 0 outside W_0;
2. J is of contact type, i.e. dρ∘ J=-θ (or equivalently JY=R), near the boundaries ∂ W_0,∂ W_1;
3. ∇_J H=μ V_θ near ℝ×[0,1]×∂ W_0 and ∇_J H=-μ V_θ near ℝ×[0,1]×∂ W_1, where ∇_J H denotes the gradient of H with respect to the metric dθ(·,J·);
4. If x_±∈𝒞(L,H^±) are not contained in W_0, then x_± are critical points of H^± on L with values in the interval (-μ,μ).
Assume that (H,J) is a pair with a cylindrical bump of slope μ∈(0,τ) on Ω. Then (H,J) has a barricade on Ω.
For proving Proposition <ref>, we use three lemmata as in <cit.> to exclude certain types of Floer trajectories. In what follows we denote W:=W_0 or W_1 for simplicity.
Let (H,J) be a pair of a Hamiltonian homotopy and a family of almost complex structures. We say that
(H,J) is μ-cylindrical near ∂ W with μ∈∖{0} if
* J is of contact type, ie dρ∘ J=-θ near ∂ W;
* H∈_<τ is independent of the -coordinate and the time coordinate near ∂ W and ×[0,1]×∂ W={H=a} is a regular level set of H;
* ∇_J H=μ V_θ on a neighborhood of ∂ W and H has no chords x∈ intersecting this neighborhood.
The first lemma follows from an argument that has appeared first in <cit.>. We include the proof for completeness because our setup differs slightly from the one there.
Let (H,J) be a pair which is μ-cylindrical near W. If ∂_s H≤ 0 on W^c, then every finite-energy solution u with both asymptotes contained in W is entirely contained in W.
Suppose the contrary that u is not entirely contained in W.
Note that Σ:=u^-1(M∖ W^∘) is a compact surface with corners and the corners divide the boundary ∂Σ into two pieces: the piece landing in the boundary ∂ W and the one landing in L. We write ∂Σ=∂_b Σ∪∂_l Σ according to these two pieces. Clearly, by our assumption ∂_b Σ≠∅.
We denote by j the restriction of the complex structure from the strip ×[0,1] to Σ.
The Floer equation for u can be read as (du-X_H^s(u)⊗ dt)^0,1=0. Using ∂_sH^s≤ 0 outside W and Stokes' theorem, we have
E(u|_Σ) = 1/2∫_Σ|du-X_H^s⊗ dt|^2Vol_Σ
= ∫_Σ u^*dθ-u^*(dH^s)∧ dt
= ∫_Σ d(u^*θ-(u^*H^s)dt)+(∂_sH^s)ds∧ dt
≤ ∫_Σ d(u^*θ-(u^*H^s)dt)
= ∫_∂Σ u^*θ-(u^*H^s)dt
For any connected component γ of ∂_lΣ we have (u^*H^s)dt|_γ=0, and since u^*θ|_L=u^*dk_L by Stokes' theorem we get ∫_γ u^*θ=0 for circles γ, while for intervals γ, ∫_γ u^*θ=k_L(p)-k_L(q) for corners p,q∈∂_bΣ∩∂_lΣ and hence this also vanishes by the assumption that k_L|_∂ W∩ L=0. Therefore, from the last term in (<ref>) we deduce that E(u|_Σ)≤∫_∂_bΣ u^*θ-(u^*H^s)dt.
Since ∇_J H=μ V_θ and dρ∘ J=-θ on a neighborhood of ∂ W, we have X_H^s=J∇_J H=μ R and hence θ(a/ηX_H^s)=H along ∂ W where η=μ r or μ r' depending on W=M_r or M_r'. Using this and the contact condition dρ∘ J=-θ near ∂ W,
E(u|_Σ) ≤ ∫_∂_bΣθ∘(du-a/ηX_H^s(u)⊗ dt)
= ∫_∂_bΣ (θ∘ J)∘(du-a/ηX_H^s(u)⊗ dt)∘(-j)
= ∫_∂_bΣ dρ∘(du-a/ηX_H^s(u)⊗ dt)∘(-j)
= ∫_∂_bΣ dρ∘ du∘(-j).
Let ξ be a tangent vector to ∂_bΣ which gives rise to the boundary orientation. Then jξ points into Σ, and thus du(jξ) does not point outwards along ∂ W, so dρ∘ du(jξ)≥ 0. Integrating that it follows from (<ref>) that E(u|_Σ)=0. This implies that each connected component of u|_Σ is contained in a single orbit of X_H^s. Since ∂_b Σ≠∅ and X_H^s is tangent to ∂ W this orbit must be contained in ∂ W. This contradicts our assumption that H has no chords near ∂ W.
Next we give an upper bound for the integral of θ along the oriented curve γ:=∂ ((u)∩ W^c) as illustrated in Figure <ref>, where u:× [0,1]→M is a solution of the s-dependent Floer equation (<ref>) with finite energy and the boundary condition (<ref>).
Let u be as above with asymptotic chords x_±∈𝒞(L,H^±). If u intersects ∂ W transversely, then
∫_Γθ=
-μ, if x_-⊂ W, x_+⊂ W^c,
μ, if x_-⊂ W^c, x_+⊂ W,
0, if x_±⊂ W or x_±⊂ W^c
where Γ:=(u)∩∂ W is oriented as the boundary of (u)∩ W^c.
The third lemma is an application of Lemma <ref> which is useful to bound the actions of the ends of Floer strips that cross the boundary of W provided that the homotopy H is non-increasing outside W.
Let (H,J) be a pair which is μ-cylindrical near W. If ∂_sH≤ 0 on W^c, then every finite-energy solution u with asymptotic chords x_±∈𝒞(L,H^±) satisfies
𝒜_L,H^+(x_+)<a-μ whenever x_-⊂ W and x_+⊂ W^c, or
𝒜_L,H^-(x_-)>a-μ whenever x_-⊂ W^c and x_+⊂ W. Here a is the value of H on ∂ W.
The proofs of Lemma 3.2 and Lemma 3.3 in <cit.> can be carried over to the above two lemmata respectively in a direct fashion. The only difference in the proofs is that in our case the portion Σ:=(u)∩ W^c of a Floer strip u outside W has an additional boundary ∂_lΣ landing in L. But this would not cause new problems since in the proof of Lemma <ref> the extra term ∫_∂_lΣdt in the formula (16) on page 17 in <cit.> vanishes, and since in the proof of Lemma <ref> θ|_L=dk_L with k_L vanishing near ∂ W∩ L the extra term ∫_∂_lΣθ in the formula (19) on page 19 in <cit.> disappears.
As a consequence of Lemma <ref>, we have the following:
If (H,J) has a cylindrical bump of slope μ∈(0,τ) on Ω, then every solution u:× [0,1]→M to (<ref>) satisfying (<ref>) with asymptotic chords x_±∈𝒞(L,H^±) has the following properties:
* if x_-⊂ W_0 and x_+⊂ W_0^c:=M∖ W_0 then 𝒜_L,H^+(x_+)<-μ;
* if x_+⊂ W_1 and x_-⊂ W_1^c:=M∖ W_1 then 𝒜_L,H^-(x_-)>μ;
Let u:× [0,1]→M be a Floer strip for the pair (H,J) with asymptotic chords x_±∈𝒞(L,H^±) at ±∞. We only prove the first case in the definition of barricade since the proof of the second case is similar. Suppose that x_-⊂ W_0. If x_+⊂ W_0^c, then x_+ is a critical point of H^+ on L with value in (-μ,μ). On the other hand, it follows from the first statement of Proposition <ref> that 𝒜_L,H^+(x_+)<-μ. So we get a contradiction and hence x_+⊂ W_0. Then Lemma <ref> concludes that (u)⊂ W_0.
§.§ Transversality
To achieve transversality of moduli spaces we would like to perturb a given homotopy in suitable Banach spaces. The Floer C_ε-space introduced by Floer <cit.> provides us a separable Banach space to use Sard-Smale theorem for genericity arguments.
For our purpose, given a compact interval I⊂ with non-empty interior, we consider the following perturbation space C^∞_ε,I(M) defined as follows.
For h∈ C^∞(×[0,1]×M) with supp(h(s,t,·))⊂ M for all s,t, if ε=(ε_k)_k=0^∞ is a sequence of positive numbers with ε_k→0 we define the C_ε-norm ·_ε by
h_ε=∑_k=0^∞ε_ksup_×[0,1]× M|d^kh|.
Denote by C^∞_ε,I(M) the space of functions h∈ C^∞(×[0,1]×M) satisfying that
h is supported in I×[0,1]× M, and the C_ε-norm of h is finite, i.e. h_ε<∞.
It can be shown that C^∞_ε,I(M) is a separable Banach space, see <cit.> or <cit.>. Moreover, there is a sequence ε such that the set C^∞_ε,I(M) is dense in the space C^∞_I(M) of smooth functions h:×[0,1]×M→ with compact supports in I×[0,1]× M with respect to C^1-topology, see <cit.> or <cit.>. In the following we fix such ε.
Let (H,J) be a pair of a stationary homotopy in _<τ and an almost complex structure of contact type outside M such that (H^±,J) are Floer-regular. Let I⊂ be a compact interval with non-empty interior. Then there is a residual subset 𝒱^reg_ε⊂ C^∞_ε,I(M) such that for every h∈𝒱^reg_ε the pair (H+h, J) is Floer-regular.
For the pair (H,J) given as in the above proposition, we consider the solutions u:ℝ× [0,1]→M of the PDE
(∂_H,J(u))(s,t):=∂_su(s,t)+J_t(u(s,t))∂_tu(s,t)+∇_JH^s_t(u(s,t))=0
with finite energy and subject to boundary condition u(·,{0,1})⊂L, and denote by ℳ_H,J the set of all such solutions u. For two Hamiltonian chords x_±∈𝒞(L,H^±), we denote by ℳ_H,J(x_-,x_+) the set of the above solutions u with asymptotic chords x_± at ±∞. It can be shown that ℳ_H,J=⋃_x_±∈𝒞(L,H^±)ℳ_H,J(x_-,x_+).
Let C^∞_exp(x_-,x_+) denote the space of smooth maps u:×[0,1]→ M converging to x_± at the ends with exponentially decaying derivatives and satisfying u(·,{0,1})⊂ L. Since outside M the slopes τ_H^s of H^s do not depend on s∈, it follows from Lemma <ref> that every solution u to (<ref>) has image contained in int(M). Using the condition that H^±∈_<τ are non-degenerate one can further show that each u∈ℳ_H,J(x_-,x_+) belongs to C^∞_exp(x_-,x_+).
For k>2/p and p>1, we define
𝒫(x_-,x_+):={u:ℝ×[0,1]→ M| u(s,t)=exp_w(s,t)ξ(s,t) where w∈ C^∞_exp(x_-,x_+),
ξ∈ W^k,p(w^*TM) with ξ(s,0)∈ T_w(s,0)L
and ξ(s,1)∈ T_w(s,1)L}.
Let W^k-1,p(x_-,x_+) denote the Banach vector bundle over 𝒫(x_-,x_+) whose fiber at u∈𝒫(x_-,x_+) is W^k-1,p(u^*TM).
As in <cit.>, to prove Proposition <ref> it suffices to prove
The section
ℱ:𝒫(x_-,x_+)× C^∞_ε,I(M) ⟶ W^k-1,p(x_-,x_+)
(u,h) ⟼∂_H+h,J(u)
is smooth and its linearization is surjective on its zero set
𝒵(x_-,x_+)={(u,h)∈𝒫(x_-,x_+)× C^∞_ε,I(M)|∂_H+h,J(u)=0}.
To prove the smoothness of ℱ, we choose an unitary trivialization (preserving the symplectic structure and the almost complex structure)
Φ:u^*TM→×[0,1]×^2n
such that Φ(u(·,0)^*TL)=×{0}×^n and Φ(u(·,1)^*TL)=×{1}×^n. Under this trivialization, 𝒫(x_-,x_+) is modeled over the Banach space
W^k,p_L(× [0,1];^2n)={ξ∈ W^k,p(× [0,1];^2n)|ξ(·,0),ξ(·,1)∈^n},
and the linear operator Dℱ has the form
Υ:W^k,p_L(× [0,1];^2n)× C^∞_ε,I(M) ⟶ W^k-1,p(× [0,1];^2n)
(ξ,η) ⟼ D(∂_H+h,J)_u(ξ)+∇_uη.
The smoothness follows from the above form immediately.
Note that the operator E_u:=D(∂_H+h,J)_u is of perturbed Cauchy-Riemannian type, i.e.
E_u=∂ +T=∂/∂ s+j∂/∂ t+T
where T:× [0,1]→End(^2n), and has the asymptotic limits of the form
∂ +T^± with T^±:[0,1]→End(^2n).
Since x_± are non-degenerate, E_u are Fredholm operators for all u∈ℳ_H,J(x_-,x_+) and have index
Ind(E_u)=μ(x_-)-μ(x_+),
see for instance <cit.>.
Before proving the rest of the statement in Lemma <ref>, we show how Proposition <ref> follows from this lemma. Notice that the linearization Dℱ equals to the sum of a Fredholm operator and a linear operator, it has right inverse, see <cit.>. This, together with Lemma <ref>, implies that ℱ intersects the zero section transversally. By the implicit function theorem, the zero set 𝒵(x_-,x_+) is a smooth Banach submanifold of 𝒫(x_-,x_+)× C^∞_ε,I(M). The fact that C^∞_ε,I(M) is separable implies that 𝒵(x_-,x_+) is also separable. Clearly, the projection map
π:𝒵(x_-,x_+)⟶ C^∞_ε,I(M) (u,h)⟼ h
is smooth. Moreover, π is a Fredholm map which has the same Fredholm index as D(∂_H+h,J)_u's. Since π^-1(h)=ℳ_H+h,J(x_-,x_+), if h∈ C^∞_ε,I(M) is a regular value of π then D(∂_H+h,J)_u are surjective for all u∈ℳ_H+h,J(x_-,x_+), i.e. (H+h,J) is Floer-regular. By the Sard-Smale Theorem, the set of regular values of π is of the second category in C^∞_ε,I(M) (a countable intersection of open and dense set). We define the set 𝒱^reg_ε⊂ C^∞_ε,I(M) as the intersection of the regular values of the projections for all pairs (x_-,x_+) where x_±∈𝒞(L,H^±).
Notice that adding any h∈ C^∞_ε,I(M) to the homotopy H does not change the ends H^± of the resulting homotopy.
Consequently, E_u has a closed range with finite-dimensional cokernel for every u∈𝒵(x_-,x_+), and hence the range of Υ is closed. Therefore, to prove Υ is surjective on 𝒵(x_-,x_+) it suffices to prove that the image (Υ) is dense.
Suppose the contrary, then there exists a nonzero continuous linear functional Γ on W^k-1,p(× [0,1];^2n) such that
Γ(E_uξ)=0 ∀ξ∈ W^k,p_L(× [0,1];^2n)
and
Γ(∇_uη)=0 ∀η∈ C^∞_ε,I(M).
By the elliptic regularity theory, Γ can be represented by some nonzero vector field γ∈ W^l,q with 1/p+1/q=1 for some l∈ so that for every ζ∈ W^k-1,p,
Γ(ζ)=⟨γ,ζ⟩=∫_×[0,1]γ(s,t)·ζ(s,t)dsdt.
In particular, for every ξ∈ W^k,p_L(× [0,1];^2n) and every η∈ C^∞_ε,I(M),
⟨γ,E_uξ⟩=0,
⟨γ,∇_uη⟩=0.
By (<ref>), γ is in the kernel of the L^2-adjoint operator E_u^* associated to E_u which is also a perturbed Cauchy-Riemannian operator. The unique continuation <cit.> implies that if γ has an infinite-order zero, then it is identically zero. To arrive at a contradiction we shall show that γ vanishes on I×[0,1] (hence γ≡0).
We define the map
u:×[0,1] ⟶×[0,1]×M
(s,t) ⟼(s,t,u(s,t)).
Clearly, this map is an embedding. We pull back γ as the vector field along u to ×[0,1]×M that has no components in the directions ∂/∂ s∈ T and ∂/∂ t∈ T[0,1]. We see that γ is not tangent to (u) at the points where it is not zero.
Now we suppose that there exists a point (s_0,t_0)∈ I×[0,1] such that γ(s_0,t_0)≠0. Without loss of generality, we may further assume that (s_0,t_0) is an interior point of I×[0,1]; then there exists a small open neighborhood Ω in int(I×[0,1]) such that γ(s,t) is nonzero on Ω. Hence in this neighborhood γ is transversal to u. Pick a smooth function χ:ℝ×[0,1]→ℝ with support in Ω which satisfies
∫_×[0,1]χ(s,t)dsdt≠ 0.
Let U⊂ I×[0,1]× int(M) be a tubular neighborhood of u(Ω) such that U∩(u)=u(Ω). We pick a smooth function h:×[0,1]×M→ with support in U such that if ϕ_s,t(r) is a parameterized integral curve of γ passing through u(s,t) at r=0, then
h(s,t,ϕ_s,t(r)):=χ(s,t)r ∀ r∈(-ϵ,ϵ)
where ϵ is sufficiently small. The condition that γ is transversal to u(Ω) guarantees that such h can be well defined. Write h_s,t=h(s,t,·). Then we compute
⟨γ,∇_uh⟩ =∫_×[0,1]γ(s,t)·∇_uh(s,t)dsdt
=∫_×[0,1]dh_s,t(γ(s,t))dsdt
=∫_×[0,1]dh_s,t(d/dr|_r=0ϕ_s,t(r))dsdt
=∫_×[0,1]d/dr|_r=0h_s,t(ϕ_s,t(r))dsdt
=∫_×[0,1]χ(s,t)dsdt≠0.
From the construction we see that h∈ C^∞_I(M). Using the fact that C^∞_ε,I(M) is dense in the space C^∞_I(M) with the C^1-topology, one can choose η∈ C^∞_ε,I(M) which approximates arbitrarily to h so that the equality (<ref>) does not hold for this η. So we achieve a contradiction which means that Υ is surjective.
§.§ Gromov-Floer compactness and robustness of barricades under small perturbations
Let (H^s)_s∈⊂_<τ be a homotopy, stationary for |s|≥ R with some R>0, from H^- to H^+ such that
H^± are non-degenerate and the pairs (H,J) and (H^±,J) have a barricade on Ω:=M_r∖ int(M_r'). Then, for every C^∞-small perturbation H of H which satisfies supp (∂_sH^s-∂_sH^s)⊂ [-R,R]× [0,1]× M and H^±=H^±, the pairs (H,J) and (H^±,J) also have a barricade on Ω.
The proof of Proposition <ref> is parallel to that of <cit.>,
we give a sketch of the proof for the sake of completeness.
The crucial ingredient of the proof is to apply the following Gromov-Floer compactness result.
Let (H^s)_s∈⊂_<τ be a homotopy, stationary for |s|≥ R with some R>0, with non-degenerate ends H^±. Let H_n be a sequence of homotopies in _<τ such that the sequence {H_n-H}_n⊂ C^∞_I(M) converges to 0 in C^∞-topology where I=[-R,R].
Let {u_n}⊂ℳ_H_n,J(x_-,x_+) be a sequence of solutions and {σ_n} a sequence of real numbers. Then there exist subsequences of {u_n} and {σ_n} (still denoted by {u_n} and {σ_n} for simplicity), Hamiltonian chords x_i∈𝒞(L,H^-),i=0,…,k and y_j∈𝒞(L,H^+),j=0,…,l, and sequences of real numbers {ς_n^i} for 1≤ i≤ k and {τ_n^j} for 1≤ j≤ l such that
u_n ⟶ w∈ℳ_H,J(x_k,y_0),
u_n(·+ς_n^i,·) ⟶ v_i∈ℳ_H^-,J(x_i-1,x_i),
u_n(·+τ_n^j,·) ⟶ v_j'∈ℳ_H^+,J(y_j-1,y_j)
for 1≤ i≤ k and 1≤ j≤ l in C^∞_loc-topology, and the sequence u_n(·+σ_n) converges to one of v_i,w,v_j' in C^∞_loc-topology up to a shift in the s-coordinate.
For x∈𝒞(L,H^-) and y∈𝒞(L,H^+), we write
ℳ:=∪_nℳ_H_n,J⋃ℳ_H,J, ℳ(x,y):=∪_nℳ_H_n,J(x,y)⋃ℳ_H,J(x,y).
The finite sequence (v_0,…,v_k,w,v_0'…,v_l') is called a broken trajectory of (H,J) as illustrated in Figure <ref>.
The proof of <cit.> or <cit.> can be carried over to Theorem <ref> in a direct fashion. The key observation is that the energies E(u) for all u∈ℳ have a uniform bound. For simplicity we set H_0:=H. In our setting, even although the target manifold M is non-compact, since H_n∈_<τ for all n∈∪{0} and their slopes outside M do not depend on s we deduce from Lemma <ref> that (u)⊂ M. And since ∂_sH_n are supported in [-R,R]×[0,1]× M it follows from the energy identity (<ref>) that for every u∈ℳ_H_n,J(x_-,x_+),
E(u)≤𝒜_L,H^-(x_-)-𝒜_L,H^+(x_+)+2R·sup_nsup_[-R,R]× [0,1]× M{∂_sH^s_n}
where the fact that all ∂_sH^s_n satisfy a uniform bound on [-R,R]× [0,1]× M has been used due to the uniform convergence with derivatives of the sequence {H_n} to H. Letting
A:=max_x_±∈𝒞(L,H^±)(𝒜_L,H^-(x_-)-𝒜_L,H^+(x_+))+2R·sup_nsup_[-R,R]× [0,1]× M{∂_sH^s_n},
we have that E(u)≤ A for all u∈ℳ. This and the assumption that L is exact (hence no bubbling disks or spheres occur) will result in a uniform bound for the J-gradient vector field ∇_J u, that is,
There exists a constant C>0 such that for each u∈ℳ and each (s,t)∈×[0,1],
‖∂_su(s,t)‖^2_J+‖∂_tu(s,t)‖_J^2≤ C.
The rest of the proof of Theorem <ref> is a consequence of repeatedly applying the above lemma, Arzelá-Ascoli theorem and elliptic regularity, we refer to <cit.> for the completely parallel proof.
Let (H^s)_s∈⊂_<τ be a homotopy, stationary for |s|≥ R with some R>0, from H^- to H^+ such that
H^± are non-degenerate and the pairs (H,J) and (H^±,J) have a barricade on Ω:=M_r∖ int(M_r'). We will see that the similar restrictions of the barricade on Floer trajectories persist for broken trajectories of (H,J). More precisely, we have
Let v=(v_1,…,v_k,w,v_1'…,v_l') be a broken trajectory of (H,J) connecting x_±∈𝒞(L,H^±). Then it holds that
I. if (x_-)⊂ W_0:=M_r' then v⊂ W_0.
II. if (x_+)⊂ W_1:=M_r then v⊂ W_1.
We only prove statement I since statement II can be proved in almost exactly the same way. We first notice that x_1:=lim_s→+∞v_1(s,·) has image in W_0 because (H^-,J) has a barricade on Ω and x_-=lim_s→-∞v_1(s,·) has image in W_0. Since by definition x_1 is also the negative end of v_2, i.e., x_1:=lim_s→-∞v_2(s,·) and since (H^-,J) has a barricade on Ω, we see that (v_2)⊂ W_0. Repeatedly using the above argument we find that the images of v_1,…,v_k are all contained in W_0. Next, by definition we have x_k:=lim_s→+∞v_k(s,·)=lim_s→-∞w(s,·). It follows from the assumption that (H,J) has a barricade on Ω that (w)⊂ W_0. Since (H^+,J) has a barricade on Ω and y_0:=lim_s→-∞v_0'(s,·)=lim_s→+∞w(s,·), we see that v_0' has image in W_0. Finally, arguing in the same way we conclude that all v_j',1≤ j≤ l have images in W_0. Therefore, the broken trajectory v is entirely contained in W_0.
This completes the proof of the claim.
Now we are in position to finish the proof of Proposition <ref>. Suppose the contrary that statement I in the definition of barricade does not hold. Then there exists a sequence {H_n} of regular homotopies with the sequence {H_n-H}_n⊂ C^∞_[-R,R](M) converging to 0 in C^∞-topology such that for every n, the moduli space ℳ_H_n,J admits an element u_n which satisfies that the limit x_-^n:=lim_s→-∞u_n(s,·) is contained in W_0 but u_n is not. For every n we pick a number σ_n∈ such that u_n(σ_n,·) is not contained in W_0.
Since H^±_n=H^± and H^± admit only finitely many Hamiltonian chords, we may assume that x_±^n=x_± (after passing to a subsequence) for all n. In this case we have u_n∈ℳ(x_-,x_+) for all n. By Theorem <ref>, there exist subsequences of {u_n} and {σ_n} (still denoted by {u_n} and {σ_n}) such that {u_n} converges to a broken trajectory v of (H,J) and u_n(·+σ_n,·) converges to one of the solutions in v (perhaps up to a shift). Since x_-=x_0⊂ W_0, the first statement of Claim <ref> implies that v is completely contained in W_0, and hence lim_n→∞u_n(·+σ_n,·)⊂ W_0. Since the latter limit is taken in C^∞_loc-topology, we have that
lim_n→∞u_n(σ_n,·)=lim_n→∞u_n(0+σ_n,·) is also contained in W_0 — a contradiction!
Using the second statement of Claim <ref> and arguing in the same way, we see that if n is sufficiently large, then every solution u_n∈ℳ_H_n,J ending in W_1 is contained in W_1.
§.§ Finishing the proof of Theorem <ref>
Let (H^s)_s∈ be the homotopy as in the assertion. Let (J_t)_t∈[0,1]⊂𝒥_θ be any family of almost complex structures (to be determined later) that
are of contact type near the boundaries ∂ M_r',∂ M_r. For sufficiently small μ∈(0,τ) we make a C^∞-small perturbation of H into a homotopy h such that (h,J) admits a cylindrical bump of slope μ∈(0,τ) on Ω:=M_r∖ int(M_r') and h^±∈_<τ are non-degenerate. This can be done by adding first a C^∞-small radial bump function χ to H on a small neighborhood of Ω in M, and then perturbing its ends H^±+χ out of a small neighborhood of ∂Ω in M into non-degenerate Hamiltonians that are C^2-small Morse functions on [r',1]×∂ M, finally making a generic perturbation of H+χ out of a small neighborhood of ∂Ω in M so that the ends h^± of the resulting homotopy h agree with the non-degenerate perturbations of H^±+χ respectively. Clearly, (h,J) is a pair with a cylindrical bump of slope μ∈(0,τ) on Ω. It follows from Proposition <ref> that the pairs (h,J) and (h^±,J) have a barricade on Ω. Moreover, we may require that supp(∂_sh-∂_s H)⊂ [-R,R]× [0,1]× M for large R>0 since H is stationary for sufficiently large |s|.
The remaining problem is that the pairs (h,J) and (h^±,J) may not be regular. If this is the case, we need to perturb the homotopy and its ends again to achieve regularity while keeping the barricade condition on Ω. Indeed, this is possible by following an argument due to Ganor and Tanny <cit.>. The crucial point is that if (h^±,J) are regular, then for any compact interval I with non-empty interior one can perturb h on the set I× [0,1]× M such that the resulting homotopy h' is Floer-regular, while the barricade property of h on Ω survives for h'.
Now since the h^± are non-degenerate and have no chords near ∂Ω, by the usual Floer theory one can generically choose a family of time-dependent almost complex structures J=(J_t)_t∈[0,1]∈𝒥_θ such that (h^±,J) are regular and of contact type near the boundary ∂Ω.
Fix such a J. If the homotopy h constructed above is a constant homotopy, we have already obtained the desired result. If h is not a constant homotopy, then by Proposition <ref> one can choose a homotopy h' close enough to h, with supp(∂_sh'-∂_sh)⊂ I× [0,1]× M for a fixed compact interval I, such that (h',J) is regular. Proposition <ref> then gives rise to the desired pairs (h',J) and (h'^±,J).
APM A. Abbondandolo, A. Portaluri and M. Schwarz. The homology of path spaces and Floer homology with conormal boundary conditions. J. Fixed Point Theory Appl. 4 (2008), 263–293.
AM A. Abbondandolo and M. Schwarz, Floer homology of cotangent bundles and the loop product. Geom. Topol. 14 (2010), 1569–1722.
AS M. Abouzaid and P. Seidel, An open string analogue of Viterbo functoriality. Geom. Topol. 14 (2010), 627–718.
Al P. Albers, A Lagrangian Piunikhin-Salamon-Schwarz morphism and two comparison homomorphisms in Floer homology. Int. Math. Res. Not. IMRN 2008, Art. ID rnm134, 56 pp.
AD M. Audin and M. Damian. Morse theory and Floer homology, Springer, 2014.
BK G. Benedetti and J. Kang. Relative Hofer-Zehnder capacity and positive symplectic homology. J. Fixed Point Theory Appl. 24: 44, 2022.
BC P. Biran and O. Cornea, Bounds on the Lagrangian spectral metric in cotangent bundles. Comment. Math. Helv. 96 (2021), 631–691.
BPS P. Biran, L. Polterovich and D. Salamon, Propagation in Hamiltonian dynamics and relative symplectic homology, Duke Math. J. 119 (2003), 65–118.
BM M.-S. Borman and M. McLean, Bounding Lagrangian widths via geodesic paths. Compos. Math. 150 (2014), 2143–2183.
Ch Y.-V. Chekanov, Invariant Finsler metrics on the space of Lagrangian embeddings. Math. Z. 234 (2000), 605–619.
Di P. Dietzsch, Bounding the Lagrangian Hofer metric via barcodes. arXiv:2304.05628 (2023).
Dim G. Dimitroglou Rizell, Families of Legendrians and Lagrangians with unbounded spectral norm. J. Fixed Point Theory Appl. 24 (2022), Paper No. 43, 32 pp.
EO T. Ekholm and A. Oancea, Symplectic and contact differential graded algebras. Geom. Topol. 21 (2017), 2161–2230.
Fl0 A. Floer, The unregularized gradient flow of the symplectic action. Comm. Pure Appl. Math. 41 (1988), 775–813.
Fl1 A. Floer, Morse theory for Lagrangian intersections. J. Differential Geom. 28 (1988), 513–547.
FHW A. Floer, H. Hofer and K. Wysocki, Applications of symplectic homology I. Math. Z. 217 (1994), 577–606.
FHS A. Floer, H. Hofer and D. Salamon, Transversality in elliptic Morse theory for the symplectic action. Duke Math. J. 80 (1995), 251–292.
FS U. Frauenfelder and F. Schlenk, Hamiltonian dynamics on convex symplectic manifolds. Israel J. Math. 159 (2007), 1–56.
FOOO K. Fukaya, Y.-G. Oh, H. Ohta and K. Ono, Spectral invariants with bulk, quasi-morphisms and Lagrangian Floer theory. Mem. Amer. Math. Soc. 260 (2019), no. 1254, x+266 pp.
GS Y. Ganor and S. Tanny, Floer theory of disjointly supported Hamiltonians on symplectically aspherical manifolds. arXiv:2005.11096, May, 2021.
Go1 W. Gong, Symplectic deformations of Floer homology and non-contractible periodic orbits in twisted disc bundles. Commun. Contemp. Math. 23 (2021), Paper No. 1950084, 36 pp.
GX W. Gong and J. Xue, Floer homology in the cotangent bundle of a closed Finsler manifold and noncontractible periodic orbits. Nonlinearity 33 (2020), no. 12, 6297–6348.
Go2 W. Gong, Lagrangian Ljusternik–Schnirelman theory and Lagrangian intersections. arXiv:2111.15442, Nov. (2021).
HZ H. Hofer and E. Zehnder, Symplectic invariants and Hamiltonian dynamics. Birkhäuser Advanced Texts: Basler Lehrbücher. Birkhäuser Verlag, Basel, 1994. xiv+341 pp.
Hu V. Humilière, Hofer's distance on diameters and the Maslov index. Int. Math. Res. Not. IMRN 2012 (2012), 3415–3433.
KM J. Katić and D. Milinković, Piunikhin-Salamon-Schwarz isomorphisms for Lagrangian intersections. Differential Geom. Appl. 22 (2005), 215–227.
KMD1 J. Katić, D. Milinković and J. Nikolić, Spectral invariants in Lagrangian Floer homology of open subset. Differential Geom. Appl. 53 (2017), 220–267.
KMD2 J. Katić, D. Milinković and J. Nikolić, Spectral numbers and manifolds with boundary. Topol. Methods Nonlinear Anal. 55 (2020), 617–653.
Kh M. Khanevsky, Hofer's metric on the space of diameters. J. Topol. Anal. 1 (2009), 407–416.
KS A. Kislev and E. Shelukhin, Bounds on spectral norms and barcodes. Geom. Topol. 25 (2021), 3257–3350.
LM F. Lalonde and D. McDuff, The geometry of symplectic energy, Ann. of Math. 141 (1995), 349–371.
Le R. Leclercq, Spectral invariants in Lagrangian Floer theory. J. Mod. Dyn. 2 (2008), 249–286.
LZ R. Leclercq and F. Zapolsky, Spectral invariants for monotone Lagrangians. J. Topol. Anal. 10 (2018), 627–700.
Ma P.-A. Mailhot, The spectral diameter of a Liouville domain. arXiv:2205.04618, May (2022).
MVZ A. Monzner, N. Vichery and F. Zapolsky, Partial quasi-morphisms and quasi-states on cotangent bundles, and symplectic homogenization. J. Mod. Dyn. 6 (2012), 205–249.
Oh2 Y.-G. Oh, Symplectic topology as the geometry of action functional. I. Relative Floer theory on the cotangent bundle. J. Differential Geom. 46 (1997), 499–577.
Oh3 Y.-G. Oh, Symplectic topology as the geometry of action functional. II. Pants product and cohomological invariants. Comm. Anal. Geom. 7 (1999), 1–54.
Oh4 Y.-G. Oh, Spectral Invariants: Applications, Volume 2 of New Mathematical Monographs, pp. 348–407, Cambridge University Press, Cambridge, 2015.
Oh5 Y.-G. Oh, Construction of spectral invariants of Hamiltonian paths on closed symplectic manifolds, from “The breadth of symplectic and Poisson geometry", 525–570, Progr. Math., 232, Birkhäuser Boston, Boston, MA, 2005.
Oh6 Y.-G. Oh, Spectral invariants, analysis of the Floer moduli space, and geometry of the Hamiltonian diffeomorphism group. Duke Math. J. 130 (2005), 199–295.
Po L. Polterovich, The Geometry of the Group of Symplectic Diffeomorphisms, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel, 2001.
PSS S. Piunikhin, D. Salamon and M. Schwarz, Symplectic Floer-Donaldson theory and quantum cohomology, from “Contact and symplectic geometry (Cambridge, 1994)", 171–200, Publ. Newton Inst., 8, Cambridge Univ. Press, Cambridge, 1996.
Rit A.-F. Ritter, Topological quantum field theory structure on symplectic cohomology. J. Topol. 6 (2013), 391–489.
RS J. Robbin and D. Salamon, The spectral flow and the Maslov index. Bull. London Math. Soc. 27 (1995) 1–33.
SZ D. Salamon and E. Zehnder, Morse theory for periodic solutions of Hamiltonian systems and the Maslov index. Comm. Pure Appl. Math. 45 (1992), 1303–1360.
Sch M. Schwarz, On the action spectrum for closed symplectically aspherical manifolds. Pacific J. Math. 193 (2000), 419–461.
Sch2 M. Schwarz, Morse Homology. Birkhäuser, 1993.
Se S. Seyfaddini, Unboundedness of the Lagrangian Hofer distance in the Euclidean ball. Electron. Res. Announc. Math. Sci. 21 (2014), 1–7.
Sh1 E. Shelukhin, Symplectic cohomology and a conjecture of Viterbo. Geom. Funct. Anal. 32 (2022), 1514–1543.
Sh2 E. Shelukhin, Viterbo conjecture for Zoll symmetric spaces. Invent. Math. 230 (2022), 321–373.
Su Y. Sugimoto, Hofer's metric on the space of Lagrangian submanifolds and wrapped Floer homology. J. Fixed Point Theory Appl. 18 (2016), 547–567.
Us1 M. Usher, Submanifolds and the Hofer norm. J. Eur. Math. Soc. 16 (2014), 1571–1616.
Us2 M. Usher, Hofer geometry and cotangent fibers. J. Symplectic Geom. 12 (2014), 619–656.
Us3 M. Usher, Hofer's metrics and boundary depth. Ann. Sci. Éc. Norm. Supér. 46 (2013), 57–128.
Vi1 C. Viterbo. Symplectic homogenization. J. Éc. polytech. Math. 10 (2023), 67–140.
Vi C. Viterbo, Symplectic topology as the geometry of generating functions. Math. Ann. 292 (1992), 685–710.
Vi2 C. Viterbo, Functors and computations in Floer homology with applications, I. Geom. Funct. Anal. 9 (1999), 985–1033.
We J. Weber, Noncontractible periodic orbits in cotangent bundles and Floer homology. Duke Math. J. 133 (2006), 527–568.
Za2 F. Zapolsky, Geometry of contactomorphism groups, contact rigidity, and contact dynamics in jet spaces. Int. Math. Res. Not. IMRN 20 (2013), 4687–4711.
Za3 F. Zapolsky, On the Hofer geometry for weakly exact Lagrangian submanifolds. J. Symplectic Geom. 11 (2013), 475–488.
|
http://arxiv.org/abs/2307.02771v1
|
20230706044516
|
Reconstructing the boundary of AdS from an infrared defect
|
[
"Cesar Arias"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] |
Reconstructing the boundary of AdS from an infrared defect

Cesar Arias

Departamento de Matemática, Pontificia Universidad Católica de Chile[Current affiliation. E-mail: [email protected].]
and
Department of Mathematics, University of California, Davis

Abstract
We argue that the boundary of an asymptotically anti-de Sitter (AdS) space of dimension d+1, say M^d+1, can be locally reconstructed from a codimension-two defect located in the deep interior of a negatively curved Einstein manifold X^d+2 of one higher dimension.
This means that there exist two different ways of thinking about the same d-submanifold, Σ^d: either as a defect embedded in the interior of X^d+2, or as the boundary of M^d+1 in a certain zero radius limit.
Based on this idea and other geometric and symmetry arguments, we propose the existence of an infrared field theory on a bulk ℤ_n-orbifold defect, located in the deepest point of the interior of AdS^d+2.
We further conjecture that such a theory gives rise to the holographic theory at the asymptotic boundary of AdS^d+1, in the limit where the orbifold parameter n→∞.
As an example, we compute a defect central charge when Σ is a 2-manifold of fixed positive curvature, and show that its n→∞ limit reproduces the central charge of Brown and Henneaux.
§ INTRODUCTION
§.§ Bulk defects as generalized boundaries
It has increasingly become a known fact that in order to fully characterize a quantum field theory one should consider not only local operators but also take into account defects of various codimensions.
A codimension-k defect is a d-dimensional submanifold with singular support, embedded in a manifold of dimension D>d, where k=D-d. Examples include line and surface defects, such as Wilson and 't Hooft loops and surfaces, and cosmic strings and membranes.
The properties of defects have been shown to be relevant in the study of dualities in supersymmetric gauge theories <cit.>, boundary conformal field theories <cit.>, and in the study of generalized symmetries and charges in field theory <cit.> and higher spin gravity <cit.>.
The case of orbifold defects has been of importance in the computation of holographic Rényi and entanglement entropies <cit.>, and in the analysis of the Page curve of evaporating black holes <cit.>.
The aim of this article is to argue that, on general grounds, a bulk defect[In this work, we are interested in defects that are located in the interior of a manifold. A defect with support on a boundary subregion is sometimes referred to as a corner.] and a boundary are two different phases of the same object; a d-submanifold, say Σ^d, can be understood as a defect or as a boundary depending on which limit of the theory one is looking at.
Moreover, as we will elaborate on for the case of an asymptotically AdS space M^d+1, the boundary submanifold can be reconstructed from a bulk defect embedded in a manifold of one higher dimension, that hereafter we denote by X^d+2. The general scheme is illustrated in the following diagram[
We decorate the manifold Σ_⋆^d with a “star" to specify that it is being treated as a defect; we write Σ_⋆^d↪ X^d+2 to indicate that the defect Σ_⋆^d is embedded in X^d+2. When the manifold instead behaves as a boundary, we simply write Σ^d=∂ M^d+1 with no extra bells or whistles. The rest of the notation used throughout the paper is collected in appendix <ref>.
]:
[Diagram: Σ_⋆^d ↪ X^d+2, which under the transition given by the zero-radius limit (X → M) becomes Σ^d = ∂ M^d+1.]
In this picture, all the geometric properties of the asymptotic AdS boundary may be thought of as being inherited from a higher (co)dimensional bulk defect, as a result of some type of transition Σ^d_⋆→Σ^d whereby the boundary Σ^d=∂ M^d+1 is truly a reincarnation of a defect Σ^d_⋆ embedded in X^d+2.
The existence of this transition yields inevitably to hypothesize that bulk defects should be able to holographically encapsulate <cit.> (just as a boundary does) degrees of freedom that can independently be described by means of some field theory; we conjecture that such a theory gives rise to the holographic theory at the boundary of AdS <cit.>, in a certain zero-radius limit.
§.§ Summary and plan of the paper
Having in mind the diagram displayed further above, we begin in <ref> by constructing the manifold X^d+2. For simplicity, we take X^d+2=D^2×Σ^d, where D^2 is a disk (with boundary a circle) and Σ^d is a d-dimensional manifold with no boundary (that we can think of as having sphere topology). Importantly, one can easily create a defect on X^d+2 by acting with ℤ_n on the disk; this produces a codimension-two defect, that we denote by Σ_⋆, which corresponds to the set of fixed points of the ℤ_n action on X^d+2 and is thus located at the center of the disk. This is of course the deepest point of the interior of X^d+2.
We next ask ourselves whether physically relevant spacetimes of this type can actually exist; requiring X^d+2 to be a negatively curved Einstein manifold, we show that there exists a family of such backgrounds, all of them supporting a defect Σ_⋆ in the deepest point of their interior, of which pure AdS spacetime is an example.
Motivated by the ideas of the holographic renormalization group flow <cit.>, in which one identifies the AdS radius as the energy scale in the flow of the dual field theory, we refer to Σ_⋆ as an infrared defect.
In <ref> we study the local geometry close to an infrared defect by zooming into the region at the center of the disk. About this region, the quotient D^2/ℤ_n is locally a cone, and the manifold X^d+2 is approximately the direct product of that cone (with the defect Σ^d_⋆ at its tip) with Σ^d; see Figure 2.
Importantly, the radius of the cone scales as 1/n, where n>1 is the ℤ_n-orbifold parameter. Thus, the limit[Here and in what follows, we implicitly assume the analytic continuation n∈ℝ_+.] n→∞ is equivalent to the zero-radius limit of the cone. In this limit, the cone shrinks to a small interval, say [0, ε) (where ϵ>0 defines the range of validity of the local approximation), and thus the full space X^d+2 collapses to M^d+1= [0, ε)×Σ^d. During the process, the defect submanifold Σ^d_⋆—originally embbeded in X^d+2—becomes the boundary of M^d+1, as illustrated in the diagram of the previous page. We denote this transition[In two bulk dimensions, related ideas have been explored in string theory and condensed matter physics when studying the behavior of boundary degrees of freedom under renormalization group flow <cit.>.] as Σ^d_⋆→Σ^d.
We continue by observing that the product [0, ε)×Σ^d=M^d+1, where we recall that Σ^d has no boundary, has the same form as the collar neighborhood one considers when studying the geometry close to the boundary of an asymptotically AdS space. Therefore, in <ref> we ask for the conditions under which the manifold M^d+1=X^d+2|_n→∞ (understood as a limit of X) can be regarded as an asymptotically AdS space.
These conditions follow from requiring that Einstein's equation for the metric close to the boundary Σ^d=∂ M^d+1—which can in general be solved asymptotically by means of the Fefferman–Graham expansion <cit.>—should arise from the large n limit of Einstein's equations for the metric on X^d+2, about the region close to Σ^d_⋆. We will refer to the procedure of imposing such conditions as boundary reconstruction.
Next, in <ref>, we show that, just as in the case of the boundary of AdS, the Einstein condition at finite n>1, on the metric close to the defect, can also be formally solved order by order in powers of the distance to the defect, in a metric expansion that resembles the Fefferman–Graham solution. We construct this expansion up to second order.
In <ref> we turn to the holographic implications of the Σ^d_⋆→Σ^d transition; because of the existence of a dual theory at the boundary Σ^d of an asymptotically AdS space <cit.>, it is plausible to think that (at least in some cases) such a theory exists already on an infrared defect Σ^d_⋆, and becomes a boundary theory only in the zero-radius limit n→∞.
Consequently, in <ref>, we argue that the parent bulk defect Σ^d_⋆ exhibits generalized versions of all the relevant features that we find at the asymptotic boundary of AdS, and that are indicative of the existence of a boundary holographic theory, namely:
♢ At the location of the defect, the spacetime symmetries are enhanced to those of the full conformal group.
♢ The singular nature of the defect permits the insertion—via a δ-function in codimension-two—of a local stress-energy tensor, which in principle suffices to define a conformal field theory.
♢ In a suitable gauge, the defect submanifold turns out to be naturally equipped with a conformal equivalence class of metrics (also known as a conformal structure). Furthermore, in that gauge and as spelled out in <ref>, there exists a formal asymptotic solution to Einstein's equations for the metric about the location of the defect, analogous to the Fefferman–Graham expansion, whose expansion coefficients encode relevant holographic quantities.
The previous elements lead us to propose the existence of a conformal field theory (CFT) on a ℤ_n-orbifold defect Σ_⋆^d, located in the deepest point of the interior of X^d+2.
We further argue that, by virtue of the Σ^d_⋆→Σ^d transition, this theory gives rise to the holographic theory at the boundary Σ^d of an asymptotically AdS space M^d+1, upon taking the n→∞ limit.
In <ref> we give a simple example.
We compute a central charge for the theory on the infrared defect in the case where the defect is a 2-manifold embedded in four bulk dimensions.
We show that when Σ^2_⋆ has scalar Ricci curvature equal to 4/R_0^2, where R_0 is the radius of the disk, the n→∞ limit of the charge reproduces the Brown–Henneaux <cit.> central charge of the theory at the asymptotic boundary of AdS^3.
We conclude with a brief discussion in <ref>.
We collect our conventions, notation and some details of our calculations in appendices <ref> and <ref>.
The main ideas presented here have been motivated by previous work <cit.>, in which the properties of defects in codimension-two were exploited to study the notion of entanglement in de Sitter space.
Different routes to generalize holography to higher codimensions, in which the dual CFT has support on a boundary corner, have been studied in <cit.>.
§ GLOBAL EINSTEIN GEOMETRIES WITH DEFECTS
In this section, we study certain type of Einstein geometries that admit a codimension-two defect in the deepest point of their interior. We demonstrate the existence of an entire family of such geometries and, as an example, we explicitly show that pure AdS spacetime belongs to this family.
The goal of this section is to motivate a further local, asymptotic analysis about the location of one of these defects, in the same fashion one performs a local study about the boundary of an asymptotically AdS spacetime.
§.§ Geometries with a deep-in-the-bulk defect
To begin with, we consider a manifold X of dimension d+2 given by the direct product
X^d+2 = D^2/ℤ_n ×Σ^d ,
where D^2 is a two-dimensional disk and Σ^d is a d-dimensional manifold without boundary. It follows that[Whenever it is clear from context, we will drop the dimension as a superscript and simply write X and Σ instead of X^d+2 and Σ^d.]
∂ X = S^1_r_n×Σ ,
where S^1_r_n denotes a circle of radius r_n; due to the ℤ_n action on D^2, this radius scales as r_n∼ 1/n.
We next endow X with a singular metric of the form
g_X = ( g_D^2/ℤ_n + h ) / u^2 .
Here, g_D^2/ℤ_n is a two-dimensional Euclidean metric on the conically singular orbifold D^2/ℤ_n (whose smooth limit is n=1), h is a Lorentzian metric on Σ^d, and u is a defining function whose zero locus determines the conformal infinity of the metric (<ref>), that is
Conf_∞ (g_X) := { p∈ X| u(p)=0 } .
Choosing the coordinates on D^2 to be (θ, ϕ), with 0≤θ≤π/2 and 0≤ϕ<2π, the coordinates on Σ to be x^i, with i=0,...,d-1, and recalling that ℤ_n acts on D^2 by the azimuthal identification ϕ∼ϕ+2π n^-1, the metric (<ref>) reads
g_X = [ R_0^2( dθ^2 + n^-2 sin^2θ dϕ^2 ) + h_ij(θ, x) dx^i dx^j ] / u^2(θ) .
In the above, R_0 denotes the radius of the disk, and we have taken the metric h to depend on x∈Σ and on the polar coordinate θ∈ D^2, while the defining function u only depends on the latter.
We are interested in the case in which the pair (X, g_X) is an Einstein manifold for a negative cosmological constant. When that is the case, Einstein equations for the components (g_X)_θθ, (g_X)_θ i, (g_X)_ϕϕ and (g_X)_ij are respectively given by[We have retained the overall factor of 1/n^2 in front of (<ref>) for reasons that will become clear later.]
0 = -1/2 Tr(h^-1h”) + 1/4 Tr(h^-1h'h^-1h') + 1/2 (u'/u) Tr(h^-1h') + 1 + cotθ (u'/u) + (d+1)[ u”/u - (u'/u)^2 + R_0^2/(L^2 u^2) ] ,
0 = h^jk( ∇_j h'_ik - ∇_i h'_jk ) ,
0 = (sin^2θ/n^2)[ 1 + u”/u + ( u'/u - cotθ )( 1/2 Tr(h^-1h') - (d+1) u'/u ) + (d+1) R_0^2/(L^2 u^2) ] ,
0 = R_ij(h) - 1/(2R_0^2) h”_ij + 1/(2R_0^2) (h'h^-1h')_ij - 1/(2R_0^2)( cotθ - d u'/u + 1/2 Tr(h^-1h') ) h'_ij + (1/R_0^2)[ u”/u + cotθ (u'/u) + ((d+1)/u^2)( R_0^2/L^2 - (u')^2 ) + 1/2 (u'/u) Tr(h^-1h') ] h_ij .
The calculation of the above equations makes use of the conventions specified in appendix <ref> and the components of the Ricci tensor for g_X given in equation (<ref>). In order to lighten the notation, we have suppressed whenever is possible the indexes on the metric h, writing Tr(h^-1h”)=h^ijh”_ij, Tr(h^-1h'h^-1h')=h^ijh'_ikh^kl h'_jl, and (h'h^-1h')_ij=h'_ikh^klh'_jl, where the primes indicate derivatives with respect to θ. Also, L denotes the AdS^d+2 radius, and R_ij(h) denotes the components of the Ricci tensor built from h.
Global solutions to (<ref>)-(<ref>) are difficult to find, of course, and it is not our purpose here. However, we observe that a family of exact Einstein geometries can be obtained by taking h to be independent of θ (so that h'=h”=0), and by setting the radius of the disk to be equal to the AdS^d+2 radius, that is
h=h(x) and R_0= L .
Consequently, the defining function
u=cosθ
solves equations (<ref>)-(<ref>) in the special case in which h is itself any Einstein metric of negative scalar curvature, namely
R_ij(h) + ((d-1)/L^2) h_ij = 0 .
We thus have the family of Riemannian geometries
ℱ_h:=(X, g_X(h)) ,
where X and g_X are defined as in (<ref>) and (<ref>), respectively, and h satisfies the Einstein condition (<ref>). From (<ref>) and (<ref>), it follows that in all these geometries the conformal infinity of g_X is located at θ=π/2 and thus coincides with the boundary of X:
Conf_∞ (g_X) = ∂ X .
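As a quick mechanical cross-check of this family (not part of the original derivation), the following minimal sympy sketch verifies that u=cosθ with R_0=L solves the (θθ) and (ϕϕ) equations above once h is taken θ-independent, so that every Tr(h^-1h') term drops out:

import sympy as sp

theta, d, L = sp.symbols('theta d L', positive=True)

u = sp.cos(theta)      # defining function of the global solution
R0 = L                 # radius of the disk set equal to the AdS radius
up, upp = sp.diff(u, theta), sp.diff(u, theta, 2)

# (theta,theta) equation with all Tr(h^{-1}h') terms dropped:
eq_thth = 1 + sp.cot(theta)*up/u + (d + 1)*(upp/u - (up/u)**2 + R0**2/(L**2*u**2))

# bracket of the (phi,phi) equation with the same simplification:
eq_phph = (1 + upp/u + (up/u - sp.cot(theta))*(-(d + 1)*up/u)
           + (d + 1)*R0**2/(L**2*u**2))

print(sp.simplify(eq_thth))   # expected: 0
print(sp.simplify(eq_phph))   # expected: 0

The (ij) equation then reduces to the Einstein condition (<ref>) on h, as stated above.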
A key feature of the family ℱ_h is that every member geometry contains two distinguished submanifolds, namely a codimension-one boundary and a codimension-two bulk defect. Indeed, recalling that X^d+2≅ D^2/ℤ_n ×Σ^d, one can see that the non-trivial ℤ_n>1 action has as its set of fixed points the center of the disk (see Figure 1), which is the deepest point of the interior of X (i.e. the furthest point from the boundary).
Metric-wise and locally about this point, in coordinates where the defect sits at θ=0, we have that g_X ≈ R^2_0( dθ^2 + θ^2 dϕ^2/n^2 ) + ⋯, which is the singular geometry of a cone of deficit angle 2π(1-1/n).
[Figure 1: left panel labelled "Smooth geometry (n=1)", right panel labelled "Σ_⋆ defect (n>1)"; in both panels the boundary circle is ∂ X = Conf_∞(g_X), of radius r_n=R_0/n.]
Fig. 1: Depiction of the D^2/ℤ_n factor of the product manifold X^d+2=D^2/ℤ_n×Σ^d. For any h satisfying the Einstein condition (<ref>), the corresponding member of the family ℱ_h=(X, g_X(h)) has conformal infinity at the boundary of the disk, which coincides with the boundary of X. On the left, we illustrate the smooth, n=1 geometry, in which case the radius of the disk is R_0. The value n>1, on the right, induces a conical defect Σ_⋆ (the set of fixed points of the ℤ_n-action) located at the center of the disk, which is the deepest point of the bulk. In this case, the boundary circle has radius r_n=R_0/n.
In what follows, we will denote the codimension-two set of fixed points as
Σ_⋆ := X|_θ=0 ,
and we will refer to it as a defect. By construction, Σ_⋆ has the same topology as Σ (and hence has no boundary), and it is endowed with the induced metric h_(0)=g_X|_θ=0, which in turn satisfies (<ref>).
§.§ Example: pure AdS
There exists a distinguished solution h to the Einstein condition (<ref>) by means of which the Riemannian manifold (X^d+2, g_X) turns into pure AdS^d+2 spacetime.
To this end, we take Σ^d to be two copies of AdS^d glued along their boundaries, that is
Σ^d = AdS^d_±:= AdS^d_+∪ AdS^d_- ,
where we have denoted by AdS^d_+ and AdS^d_- to each of these copies. Note that since the gluing is along the boundary, Σ has sphere topology and thus no boundary.
We next equip AdS^d_± with the line element
h = h_AdS_±^d = (L^2/cos^2 z)[ -dt^2 + dz^2 + sin^2 z dΩ^2_d-2 ] .
These coordinates are sometimes referred to as the conformal compactification of AdS; in our case, the radial coordinate 0≤ z≤π, with 0≤ z≤π/2 for one AdS copy and π/2≤ z≤π for the second one. The two copies are glued along the boundary[In the special case of AdS^2 the gluing is made along the two disconnected boundaries located at z=0,π; the resulting extended z-coordinate runs then over an entire circle 0≤ z<2π.] located at z=π/2. As usual, the time coordinate -∞<t<∞, and Ω^2_d-2 denotes the induced metric on a sphere of dimension d-2.
With the choice (<ref>) and (<ref>), the full (d+2)-dimensional geometry (<ref>) becomes
g_X = [ L^2( dθ^2 + sin^2θ dϕ^2 ) + h_AdS_±^d ] / cos^2θ .
A direct calculation shows that (<ref>) is indeed the induced metric on the
AdS^d+2↪ℝ^2,d+1 hyperboloid
-(Z^0)^2 - (Z^0')^2+ ∑_a=1^d+1 (Z^a)^2 = -L^2 ,
where the Z's are coordinates on flat embedding space ℝ^2, d+1. To see this, it suffices to parametrize
Z^0= Lcos(t/L)/cosθcos z ,
Z^0'= Lsin(t/L)/cosθcos z ,
Z^i=Ltan z /cosθ y^i ,
Z^d=L tanθcosϕ ,
Z^d+1=L tanθsinϕ ,
where we recall that 0≤θ≤π/2 and 0≤ϕ<2π, and where the y^i, with ∑_i (y^i)^2 = 1, parametrize the unit (d-2)-sphere.
The pullback of the flat embedding space metric η= diag (-1,-1,1,...,1) onto the hypersurface (<ref>) then gives (<ref>).
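The hyperboloid constraint itself can be verified mechanically; the following is a minimal sympy sketch (not part of the original text), in which the sphere directions enter only through ∑_i (y^i)^2=1 and are therefore represented by a single magnitude:

import sympy as sp

t, z, theta, phi = sp.symbols('t z theta phi', real=True)
L = sp.symbols('L', positive=True)

Z0  = L*sp.cos(t/L)/(sp.cos(theta)*sp.cos(z))
Z0p = L*sp.sin(t/L)/(sp.cos(theta)*sp.cos(z))
Zs  = L*sp.tan(z)/sp.cos(theta)        # magnitude of the Z^i, i = 1, ..., d-1
Zd  = L*sp.tan(theta)*sp.cos(phi)
Zd1 = L*sp.tan(theta)*sp.sin(phi)

constraint = -Z0**2 - Z0p**2 + Zs**2 + Zd**2 + Zd1**2 + L**2
print(sp.simplify(constraint))   # expected: 0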
§ BOUNDARY RECONSTRUCTION AND LOCAL DEFECT GEOMETRY
We now abandon the global approach of <ref> and focus on the asymptotic geometry about the region close to the defect. Our first goal here is to examine, locally around Σ_⋆:= X|_θ=0, the n→∞ limit of equations (<ref>)-(<ref>).
This limit—which corresponds to the zero-radius limit of an azimuthal circle transverse to Σ_⋆—defines a transition whereby the defect submanifold, originally embedded in X^d+2, reincarnates as the boundary of the resulting space of one lower dimension, that we denote by M^d+1; the situation is depicted down below.
[Figure 2: panel (a) "Finite n>1", showing the defect Σ_⋆↪ X at the tip of a local cone; panel (b) "The limit n→∞", showing Σ=∂ M at the end of an interval.]
Fig. 2: Local picture of the quotient D^2/ℤ_n about the center of the disk: (a) for finite n>1, the set of fixed points Σ_⋆ is a codimension-two defect embedded in X; (b) when n→∞, the transverse circle shrinks to a point and D^2/ℤ_n collapses to the interval I. During the process, Σ_⋆ transitions from being a defect embedded in X^d+2 to being the boundary of M^d+1≅ I×Σ^d.
Consequently, in <ref>, we establish the conditions under which M^d+1 can generically be considered an asymptotically AdS spacetime, with its ordinary asymptotic boundary Σ=∂ M being thought of as the large-n phase of the finite-n>1 defect submanifold Σ_⋆.
The second goal of the section is to show that the local defect equations, given by the θ≪1 approximation of equations (<ref>)-(<ref>), can be formally solved order by order in the “radial" coordinate, in a metric expansion that resembles the Fefferman–Graham boundary expansion; we construct such an expansion up to second order in <ref>.
Because it is needed for our purposes, we begin in <ref> by reviewing the relevant properties of the asymptotic geometry of the boundary of AdS.
§.§ The local geometry of the AdS boundary
In <cit.>, Fefferman and Graham established a link between a pseudo[The prefix pseudo here simply means that the (everywhere non-degenerate) metric g_Y need not be positive definite, so that it is taken to be an indefinite bilinear form.]-Riemannian ambient manifold (Y^d+2, g_Y) of dimension d+2, and a conformal manifold Σ^d of dimension d, by means of which local conformal invariants on Σ can be constructed from Riemannian invariants on Y.
The construction of these invariants is carried out by formally solving a Ricci-flat condition for the ambient space metric; physics-wise, this Ricci-flat condition for g_Y happens to be equivalent to Einstein's equations on a negatively curved manifold M^d+1 with boundary Σ^d. Consequently, as suggested in <cit.>, the Fefferman-Graham construction naturally encapsulates some of the geometric textures appearing in Maldacena's AdS/CFT correspondence <cit.>. In particular, its usage has been relevant to the calculation of the holographic Weyl (boundary) anomaly <cit.>, as well as other types of submanifold anomalies <cit.>.
In what follows, we briefly review the aspects of the Fefferman-Graham expansion that are relevant for our purposes. Further details can be found in the monograph <cit.>.
Conformally compact Einstein metrics.
Let M̄ = M ∪ ∂M be a compact manifold of dimension d+1 with interior M and boundary ∂ M=Σ.
A Riemannian metric g on M is said to be conformally compact if there exists a smooth defining function r∈𝒞^∞(M̄) with
r|_M > 0 ,
r|_Σ = 0 , dr|_Σ ≠ 0 ,
such that the metric
ḡ = r^2 g
extends continuously to M̄. The structure (M̄, ḡ) is referred to as a compactification of (M, g) <cit.>. Because the choice of defining function is not unique, the restriction h_(0) of ḡ to ∂ M rescales upon different choices of r; this freedom invariantly defines a conformal class of metrics [h_(0)] on ∂ M. The pair (Σ, [h_(0)]) is the conformal infinity of the metric g.
A metric g which in addition satisfies the Einstein condition ℓ^2 R_ij(g)+d g_ij=0, where ℓ is the radius of curvature of the manifold M, is termed a conformally compact Einstein metric. Importantly, every conformally compact Einstein metric is asymptotically hyperbolic[In order to be consistent with the original literature, in this subsection we are considering spaces of Euclidean signature; statements regarding asymptotically hyperbolic spaces translate with no subtleties to asymptotically AdS spaces in Lorentzian signature.], meaning that its sectional curvatures approach -1/ℓ^2 at Σ. Sometimes in the math literature this type of metric is dubbed a Poincaré–Einstein metric.
Graham–Lee normal form and Fefferman–Graham expansion. If g is an asymptotically hyperbolic metric on M, then a choice of a representative h_(0) in the conformal class [h_(0)] on Σ uniquely determines a defining function r such that, in a collar neighborhood Σ× [0,ε), the singular metric g takes the Graham–Lee normal form <cit.>
g = ℓ^2( dr^2 + h_r ) / r^2 ,
where h_r is a one-parameter family of metrics on Σ, with h_0=h_(0)∈ [h_(0)].
The Einstein condition ℓ^2 R_ij(g)+d g_ij=0 can be asymptotically solved for a metric of the form (<ref>). The solution is a formal expansion h_r=∑_k≥0 h_(k)r^k, where the expansion coefficients h_(k) are determined inductively from the Einstein condition itself. In components, this condition reads
r Tr(h^-1h”)-r/2 Tr(h^-1h'h^-1h') - Tr(h^-1h') =0 ,
∇_i Tr(h^-1h')-∇^j h'_ij =0 ,
r h”_ij + (1-d)h'_ij- Tr(h^-1h')h_ij-r[(h'h^-1h')_ij -1/2 Tr(h^-1h')h'_ij+2 ℓ^2 R_ij(h)] =0 .
For simplicity, in the above display we have written the tensor h_r simply as h, whose components are denoted by h_ij; we have also denoted Tr(h^-1h”)=h^ijh”_ij, Tr(h^-1h'h^-1h')=h^ijh'_ikh^kl h'_jl, and (h'h^-1h')_ij=h'_ikh^klh'_jl, where the primes indicate derivatives with respect to r.
Successive derivatives of (<ref>) evaluated at r=0 give
[(k-d)∂_r^k h_ij - Tr (h^-1∂^k_r h)h_ij]|_r=0 = LOTs|_r=0 ,
where “LOTs" refers to lower order terms in derivatives of the metric h. From equation (<ref>) with h_(0) as a initial condition, the higher order coefficients can be iteratively determine as follows:
♢ For k<d, all the coefficients h_(k) can be computed in terms of h_(0) from ∂^k_r h evaluated at r=0. The case k=1 implies immediately that the first order expansion coefficient h_(1) vanishes. Consequently, since equation (<ref>) is invariant under r→-r, it follows that only even powers of the expansion have non-vanishing coefficients and thus h_(k)∼∂^k_rh|_r=0=0 for k odd.
When k=d, the overall factor (k-d) multiplying the trace-free part of h_(d)∼∂^d_rh|_r=0 vanishes, so this coefficient is left undetermined by (<ref>) and can be freely chosen; this is the second piece of initial data, in addition to h_(0), needed to solve the second order Einstein condition. Furthermore
♢ If k=d is odd, the LOTs in (<ref>) vanish at r=0 and thus the trace Tr (h_(0)^-1 h_(d))=0.
♢ If k=d is even, the trace-free part of the LOTs in (<ref>) does not vanish at r=0, giving rise to what is known as the obstruction tensor. In order to circumvent this obstruction, one must include in the expansion a logarithmic term r^d log r with a trace-free coefficient a_(d).
Tying the above arguments together one concludes that
h_r=
h_(0) + h_(2)r^2+even powers+h_(d-1)r^d-1 + h_(d)r^d⋯ , if d is odd.
h_(0) +h_(2)r^2+even powers+ a_(d) r^d log r+h_(d)r^d+⋯, if d is even.
The distinguished coefficients a_(d) and h_(d) can be characterized as the metric variation of the conformal anomaly, and the expectation value of the boundary stress-energy tensor, respectively.
As for the other two components (<ref>) and (<ref>) of the Einstein condition, it can be shown that they give no extra information at order k=d and lower. This is because, from the ambient space perspective, some of the components of the Ricci-flatness equation for the ambient space metric g_Y are identically satisfied due to the contracted Bianchi identities <cit.>.
Second order coefficient. For the sake of completeness, let us compute h_(2). Evaluating equation (<ref>) at r=0 implies that h_(1)=0; using this fact, the first derivative of that equation gives
[(2-d)h”_ij - Tr(h^-1h”)h_ij -2 ℓ^2 R_ij(h)]_r=0=0 .
Since h|_r=0=h_(0) and h”|_r=0= 2h_(2) (because a_(2) is traceless), it follows that for d=2 we can only determine the trace of the second order coefficient
Tr(h_(0)^-1h_(2))=-ℓ^2/2 R_Σ ,
where R_Σ= R(h)|_r=0 is the Ricci scalar of the boundary (built from the induced metric h_(0)).
It is not difficult to check that when d>2, taking the trace of (<ref>) and plugging it back, one obtains
h^(2)_ij = (ℓ^2/(2-d))[ R^Σ_ij - (R_Σ/(2(d-1))) h^(0)_ij ] = -ℓ^2 P^Σ_ij ,
where P^Σ_ij is the Schouten tensor of the boundary.
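For the reader's convenience, the intermediate trace step behind the last two formulas is the following short sketch, which uses only (<ref>) together with h|_r=0=h_(0) and h”|_r=0=2h_(2). Evaluated at r=0, the equation reads
(2-d) 2h^(2)_ij - 2 Tr(h_(0)^-1h_(2)) h^(0)_ij - 2ℓ^2 R^Σ_ij = 0 .
Tracing with h_(0)^-1 gives 4(1-d) Tr(h_(0)^-1h_(2)) = 2ℓ^2 R_Σ, that is Tr(h_(0)^-1h_(2)) = -ℓ^2 R_Σ/(2(d-1)), which reduces to (<ref>) for d=2. For d>2, substituting this trace back and solving for h^(2)_ij yields precisely the Schouten-tensor expression above.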
§.§ Boundary reconstruction and the Fefferman–Graham–Lee limit
As anticipated, the limit n→∞ defines the transition Σ_⋆→Σ in which the defect submanifold Σ_⋆↪ X^d+2 becomes the boundary of M^d+1.
We now turn to the question of under which conditions the resulting manifold M^d+1 can be considered an asymptotically AdS spacetime, with Σ=∂ M. When such conditions are imposed, one may think of the boundary Σ as being reconstructed from Σ_⋆.
To begin with, we note that, close to Σ, the topology of M^d+1 is the same as the topology of the asymptotic boundary region of an asymptotically AdS space. Indeed, recalling from (<ref>) that X^d+2=D^2/ℤ_n ×Σ^d, we once again observe that the n→∞ limit corresponds to the zero-radius limit of the boundary circle S_r_n^1=∂ (D^2/ℤ_n); see Figure 1.
It follows that
X^d+2→ M^d+1:= I×Σ^d as n→∞ ,
where I=[0, R_0] is the interval that results from the large n limit of the quotient D^2/ℤ_n.
Thus, upon sending n to infinity and zooming into the region close to the origin[Note that, since we are only focusing on a small region about the origin, whatever happens at the right end of the interval I is irrelevant to us.], the topology of the space collapses to the collar Σ× [0,ε), for some ε>0; this is precisely the cylinder topology of the asymptotic boundary region of an arbitrary AAdS spacetime, in the sense of <ref>.
On geometric grounds, M^d+1 will be asymptotically AdS if, locally about Σ=∂ M, its line element can be written in Graham–Lee normal form; in our case, this means that the metric
g_M = g_X|_n→∞ = ( R_0^2 dθ^2 + h ) / u^2 ,
which is the metric on M^d+1 inherited from the metric on X^d+2 once n is sent to infinity, should equal (<ref>) for θ≪1. It is direct to verify that this will indeed be the case if we impose
u→θ and R_0→ℓ as n→∞ ,
and redefine the radial coordinate as r:=R_0 θ.
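Explicitly, and as a one-line check of this statement: with r=R_0θ one has R_0^2 dθ^2 = dr^2 and u→θ = r/R_0, so that
g_M = ( dr^2 + h )/( r/R_0 )^2 = R_0^2( dr^2 + h_r )/r^2 → ℓ^2( dr^2 + h_r )/r^2 ,
which is precisely the Graham–Lee normal form (<ref>).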
Consequently and in addition to imposing (<ref>), for M^d+1=X^d+2|_n→∞ to be asymptotically AdS, the metric (<ref>) should satisfy the local boundary equations (<ref>)-(<ref>), which thus should arise in the large n limit of the local approximation of the global defect equations (<ref>)-(<ref>). In other words, the known local equations for the asymptotic AdS boundary should be re-obtained through the following sequence:
[Diagram: Global defect eqs. (<ref>)-(<ref>) → (θ≪1) → Local defect eqs. → (n→∞) → Local boundary eqs. (<ref>)-(<ref>).]
The local defect equations (that complete the center of the diagram above) are given by the θ≪1 approximation of equations (<ref>)-(<ref>); recalling that at leading order cotθ ≈ 1/θ, these are
0 ≈ -1/2 Tr(h^-1h”) + 1/4 Tr(h^-1h'h^-1h') + 1/2 (u'/u) Tr(h^-1h') + 1 + (1/θ)(u'/u) + (d+1)[ u”/u - (u'/u)^2 + R_0^2/(L^2 u^2) ] ,
0 ≈ h^jk( ∇_j h'_ik - ∇_i h'_jk ) ,
0 ≈ (θ^2/n^2)[ 1 + u”/u + ( u'/u - 1/θ )( 1/2 Tr(h^-1h') - (d+1) u'/u ) + (d+1) R_0^2/(L^2 u^2) ] ,
0 ≈ R_ij(h) - 1/(2R_0^2) h”_ij + 1/(2R_0^2) (h'h^-1h')_ij - 1/(2R_0^2)( 1/θ - d u'/u + 1/2 Tr(h^-1h') ) h'_ij + (1/R_0^2)[ u”/u + (1/θ)(u'/u) + ((d+1)/u^2)( R_0^2/L^2 - (u')^2 ) + 1/2 (u'/u) Tr(h^-1h') ] h_ij ,
where the symbol “≈" stands for the small-θ approximation.
It is now direct to verify that the boundary equations (<ref>)-(<ref>) can be obtained from the n→∞ limit of the defect equations (<ref>)-(<ref>).
To this end, we first observe that, since the radial coordinate r=R_0θ, the derivative ∂_θ= R_0 ∂_r, so that each prime in the defect equations differs by a factor of R_0 from a prime in the boundary equations.
We next observe that equation (<ref>)→(<ref>) as n→∞ if, in this limit, the second line in (<ref>) vanishes and u→θ. The last requirement is part of condition (<ref>), and in particular implies that, upon expanding u≈ u_0+u_1θ+u_2θ^2, the coefficients u_0→0, u_1→1 and u_2→0 in that limit. As for the second line in (<ref>), using the above expansion for u, we can write
1 + (1/θ)(u'/u) + (d+1)[ u”/u - (u'/u)^2 + R_0^2/(L^2 u^2) ]
= (1/u^2){ u_0 u_1/θ
+ [ u_0^2 + u_1^2 + 2u_0u_2 + (d+1)( 2u_0u_2 - u_1^2 + R_0^2/L^2 ) ]
+ θ[ 2u_0u_1 + (1-2d) u_1u_2 ] + 𝒪(θ^2) } .
Recalling that u_0→0, u_1→1 and u_2→0 as n→∞, we conclude that the above display vanishes in that limit if we require
R_0^2/L^2 → d/(d+1) as n→∞ ,
which thus guarantees that equation (<ref>) gives (<ref>) when n is large.
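The expansion used in this argument can be checked mechanically; the following minimal sympy sketch (not part of the original text) multiplies the display by u^2 to clear denominators and prints the coefficients of the resulting Laurent expansion in θ:

import sympy as sp

theta, d, R0, L, u0, u1, u2 = sp.symbols('theta d R_0 L u_0 u_1 u_2', positive=True)

u = u0 + u1*theta + u2*theta**2
up, upp = sp.diff(u, theta), sp.diff(u, theta, 2)

# u^2 times [ 1 + (1/theta) u'/u + (d+1)( u''/u - (u'/u)^2 + R_0^2/(L^2 u^2) ) ]
expr = sp.expand(u**2 + u*up/theta + (d + 1)*(upp*u - up**2 + R0**2/L**2))

poly = sp.Poly(sp.expand(expr*theta), theta)
for k, c in enumerate(reversed(poly.all_coeffs())):
    print(f"theta^{k-1}:", sp.factor(c))

Setting u_0→0, u_1→1, u_2→0, the θ^-1 and θ coefficients vanish and the constant one reduces to 1+(d+1)(R_0^2/L^2-1), which vanishes precisely when R_0^2/L^2=d/(d+1), in agreement with the condition just stated.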
Equation (<ref>) trivially gives (<ref>) when n→∞. Also, in this limit, equation (<ref>) is identically satisfied without imposing any further constraint on the geometry of M^d+1. Finally and by virtue of (<ref>) and (<ref>), it is also direct to show that equation (<ref>) follows from (<ref>).
Having looked at the n→∞ regime of the local defect equations, from which we infer that they reproduce the boundary equations if both conditions (<ref>) and (<ref>) hold, we will next show that they can formally be solved order by order in θ; in the next subsection, we will explicitly construct this expansion up to second order.
§.§ Local defect geometry
We now turn to the construction of an asymptotic solution to the local defect equations (<ref>)-(<ref>)—which we recall are valid in the regime where n>1 is finite—, of the form
h(θ, x) = h_(0) + θ h_(1)(x) + θ^2 h_(2)(x)+⋯
u(θ) = u_0 + u_1 θ + u_2 θ^2 + ⋯
where h_(0) is the induced metric on Σ_⋆ and u_0 is the defining-function zero-mode, which we assume to be non-vanishing for finite n.
For simplicity and because it suffices for our purposes, we will consider an expansion up to second order in θ, and leave a more technical analysis of the higher order terms, including possible obstructions, for a separate work.
In order to determine the expansion coefficients in (<ref>), we first rewrite equations (<ref>), (<ref>) and (<ref>) respectively as
0 =u'/u + θℱ_θθ ,
0 = u'/u - (1/(2(d+1))) Tr(h^-1h') + θℱ_ϕϕ ,
0 =u'/u h_ij -1/2 h'_ij + θℱ_ij ,
where
ℱ_θθ := -1/2 Tr(h^-1h”) + 1/4 Tr(h^-1h'h^-1h') + 1/2 (u'/u) Tr(h^-1h') + 1 + (d+1)[ u”/u - (u'/u)^2 + R_0^2/(L^2 u^2) ] ,
ℱ_ϕϕ := (1/(d+1))( 1 + u”/u + 1/2 (u'/u) Tr(h^-1h') ) + (1/u^2)( R_0^2/L^2 - (u')^2 ) ,
ℱ_ij := R_0^2 R_ij(h) - 1/2 h”_ij + 1/2 (h'h^-1h')_ij + 1/2( d u'/u - 1/2 Tr(h^-1h') ) h'_ij + [ u”/u + ((d+1)/u^2)( R_0^2/L^2 - (u')^2 ) + 1/2 (u'/u) Tr(h^-1h') ] h_ij .
As in the case of the boundary equation (<ref>), the defect equation (<ref>) will not play any role due to one of the Bianchi identities, and we have thus not considered it in the above two displays.
Importantly, we are a priori assuming that all the ℱ's defined above are finite at θ=0 and hence (θℱ)|_θ=0=0. We will a posteriori realize that this is not really an assumption but a consistency condition.
The first order expansion coefficients in the ansatz (<ref>) can be determined by first evaluating (<ref>) at θ=0. This implies u'|_θ=0=0, so that the defining function u cannot have a linear term at finite n (but note that, because of condition (<ref>), u will actually be linear for large n).
Using this fact and further evaluating (<ref>) or (<ref>) at θ=0, it follows that, since h_(0) is the induced metric on Σ_⋆ and thus it is non-degenerate, the linear term h'|_θ=0=0. Then, at this order we conclude that
u_1 =0 and h_(1)=0 .
The second order coefficients follow from the first derivative of the defect equations. Taking the derivative of (<ref>) and evaluating at the origin using (<ref>) gives
0= (u”/u + ℱ_θθ)|_θ=0
= 2u_2/u_0 - Tr(h_(0)^-1h_(2)) + 1 + 2(d+1)u_2/u_0 + (d+1) R_0^2/(L^2 u_0^2) ,
where the last four terms come from ℱ_θθ|_θ=0. It follows that
Tr(h^-1_(0) h_(2)) = 1 + 2(d+2) u_2/u_0 + (d+1) R_0^2/(L^2 u_0^2) .
It is not hard to check that the derivative of (<ref>) gives the same information as (<ref>). As for the derivative of (<ref>), recalling that h'|_θ=0=0 and u'|_θ=0=0, we have that
0=(u”/u h_ij -1/2 h”_ij + ℱ_ij)|_θ=0
= 2u_2/u_0 h^(0)_ij - h^(2)_ij + R_0^2 R^Σ_⋆_ij - h^(2)_ij + [ 2u_2/u_0 + (d+1) R_0^2/(L^2 u_0^2) ] h^(0)_ij ,
where the last four terms follow from ℱ_ij|_θ=0, and where we have denoted by R^Σ_⋆_ij=R_ij(h)|_θ=0 the Ricci tensor of the defect submanifold (Σ_⋆, h_(0)). Thus, solving for the second order coefficient gives
h^(2)_ij = (R_0^2/2) R^Σ_⋆_ij + [ 2u_2/u_0 + (d+1) R_0^2/(2L^2 u_0^2) ] h^(0)_ij ,
whose trace should then be consistent with (<ref>); this fixes the curvature of the defect in terms of the defining function expansion coefficients u_0 and u_2, and the scales R_0 and L
R_Σ_⋆ = (2/R_0^2)[ 1 + 4u_2/u_0 + (1-d/2)(d+1) R_0^2/(L^2 u_0^2) ] .
The above implies that the defect submanifold is constrained to have constant curvature.
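The trace consistency just invoked is pure algebra and can be verified directly; a minimal sympy sketch (not part of the original text), using Tr(h_(0)^-1 h_(0))=d and writing R_Σ_⋆ for the trace of R^Σ_⋆_ij:

import sympy as sp

d, R0, L, u0, u2, Rs = sp.symbols('d R_0 L u_0 u_2 R_Sigma', positive=True)

# trace of the displayed h^(2)_{ij}:
trace_from_hij = sp.Rational(1, 2)*R0**2*Rs + d*(2*u2/u0 + (d + 1)*R0**2/(2*L**2*u0**2))

# trace required by the scalar equation:
trace_required = 1 + 2*(d + 2)*u2/u0 + (d + 1)*R0**2/(L**2*u0**2)

# the two agree exactly for the displayed constant value of R_Sigma:
Rs_value = (2/R0**2)*(1 + 4*u2/u0 + (1 - d/2)*(d + 1)*R0**2/(L**2*u0**2))

print(sp.simplify((trace_from_hij - trace_required).subs(Rs, Rs_value)))   # expected: 0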
Because it will be useful afterwards, let us write down the explicit form of the above solution for the case in which Σ_⋆ is a 2-manifold. Note that, when d=2, the last term in (<ref>) vanishes identically, so that the defect curvature is simply given by
R_Σ_⋆ = 2/R_0^2(1+4 u_2/u_0) .
When the above is the case, the defining function
u = u_0 [ 1 + (1/4)( (R_0^2/2) R_Σ_⋆ - 1 ) θ^2 ] ,
and the second order expansion coefficient
h^(2)_ij = (R_0^2/2) R^Σ_⋆_ij + ( 2u_2/u_0 + 3R_0^2/(2L^2 u_0^2) ) h^(0)_ij .
Equation (<ref>) will be of particular relevance in <ref>.
A consistency check.
The local formulæ (<ref>), (<ref>)-(<ref>) can be scrutinized by studying how the global family of solutions (<ref>) spelled out in <ref> behaves close to the location of the defect (Σ_⋆, h_(0)).
In the global case, we have that
h(θ, x) = h_(0)(x) ,
u(θ) =cosθ , R_0=L ,
where h_(0) satisfy the Einstein constraint (<ref>), which in turn implies
R_Σ_⋆=-d(d-1)/L^2 ,
with R_Σ_⋆=R(h_(0))=R(h)|_θ=0.
Close to the defect, the defining function in (<ref>) goes as u≈1-1/2θ^2, so that the expansion coefficients u_0=1 and u_2=-1/2; interestingly, substituting these values in the local formula (<ref>) (remembering from (<ref>) that R_0=L) we obtain exactly (<ref>).
It is also interesting to note that the global solution for h in (<ref>) has vanishing second order coefficient h_(2); replacing u_0=1, u_2=-1/2 and R_0=L in the local formula (<ref>) and imposing h_(2)=0 gives now exactly the Einstein condition (<ref>). In other words, we have learnt that the Einstein constraint on h_(0) is equivalent to the vanishing of h_(2).
Defining function as an order parameter.
It is important to note that condition (<ref>) and the solution (<ref>), (<ref>)-(<ref>) imply that the defining function u exhibits rather different behaviors depending on whether the submanifold Σ is in a defect phase (finite n) or a boundary phase (large n). Indeed, condition (<ref>) forces the defining function to become linear in the n→∞ limit, suppressing the zero-mode u_0 and the second order coefficient u_2 in the boundary phase. The solution (<ref>), on the other hand, requires no contribution from the linear term in the u-expansion within the defect phase. This behavior suggests the existence of a (presumably abrupt) phase transition, whereby the defining function may be thought of as the order parameter; the situation is qualitatively represented below.
[Figure 3: plot of u ≈ u_0+u_1θ+u_2θ^2 against θ; dashed red curves for the T>0 defect phase (u_1=0), a blue straight line for the T=0 boundary phase (u_0=0=u_2).]
Fig. 3: Asymptotic behavior of the defining function u. The defect phase is defined by a finite temperature T=1/n and the absence of linear terms in u; all the dashed red curves depicted have equal u_2/u_0=1/4. The boundary phase is reached at zero temperature, in which case the defining function becomes linear with no zero-mode. In blue we display the u_1=1 defining function.
§ DEFECT CENTRAL CHARGE AND ITS BOUNDARY LIMIT
In <ref> we argued that the zero-radius limit n→∞ defines a defect-to-boundary transition, in which the defect submanifold Σ^d_⋆↪ X^d+2 becomes the boundary Σ^d=∂ M^d+1 of an asymptotically AdS space M^d+1.
Motivated by this transition, in this section we argue that it is plausible to think that, at least in some non-trivial cases, the holographic CFT at the asymptotic boundary of AdS^d+1 is truly a reincarnation of some defect, infrared field theory with support on the interior of X^d+2.
Here, we collect some arguments supporting the existence of such a theory.
§.§ Heuristics
Symmetries. There is a simple symmetry argument, similar to the one used to justify the existence of a CFT at the boundary of AdS, which can be equally invoked to support the existence of a CFT on Σ_⋆. To this end, recall that the symmetry group of AdS^d+2 has as a subgroup
SO(2, d+1)⊃ SO(p) × SO(2,q) , p+q=d+1 .
In our construction, because X^d+2=D^2/ℤ_n×Σ^d and the explicit form of the metric (<ref>) and (<ref>), each factor on the right hand side of (<ref>) corresponds to the (manifest) isometries of one factor in the decomposition of X^d+2; SO(p) corresponds to the symmetries of the quotient D^2/ℤ_n and SO(2,q) to the symmetries of Σ^d. Away from the singular point θ=0, the symmetries of the former are those of a circle, which fixes p=2 and consequently q=d-1, so that Σ^d acquires SO(2,d-1) symmetry. But at the singular point, the circle above shrinks to a point, which fixes p=1 and in turn equips the defect submanifold Σ^d_⋆ with SO(2,d) symmetry. This is of course the full conformal group in dimension d.
[Figure 4: the D^2/ℤ_n×Σ^d factorization, with {p}×Σ_⋆^d≅ SO(1)×SO(2,d) at the orbifold point and S^1×Σ^d≅ SO(2)×SO(2,d-1) away from it.]
Fig. 4: The symmetries of the manifold X^d+2. Away from the center of the disk, Σ^d has SO(2,d-1) symmetry. At the locus of the orbifold singularity, these symmetries are enhanced to those of the conformal group SO(2,d).
Existence of a local stress tensor.
Strictly speaking, in AdS space, any submanifold at some fixed radius has as its symmetry group the conformal group in one lower dimension. However, among all these codimension-one submanifolds, there exists one and only one equipped with a stress-energy tensor; this is the AdS boundary, endowed with the Brown–York stress tensor <cit.>.
From the geometric point of view, the existence of a local stress-energy tensor on a given submanifold is related to the way this submanifold is embedded into the full space. In the case of the boundary Σ^d of an asymptotically AdS space M^d+1, there exists a gauge (the Graham–Lee normal form (<ref>)) in which the spacetime metric blows up at the location of the boundary. This means that the boundary submanifold Σ^d can be thought of as being embedded into M^d+1 via a one-dimensional delta function[Another way to see this is by gluing two copies of M along their conformal boundary. The gluing procedure enhances a ℤ_2-symmetry whereby the two fully overlapped conformal infinities become a single domain wall. Because of the latter symmetry, the metric will contain the absolute value of the radial coordinate whose second derivative is a delta function (in codimension one).]
Σ^dδ↪ M^d+1 (Σ=∂ M).
It is precisely the existence of such a singular embedding that permits the insertion of a stress-energy operator at the location of the boundary, and such a stress tensor in principle suffices to define a CFT[From an axiomatic point of view, a stress tensor is a sufficient but not a necessary condition to define a conformal field theory. Indeed, there exist a number of CFT's that have no stress-energy tensor <cit.>.].
A similar reasoning in one higher codimension applies to a ℤ_n-orbifold defect. In this case, when n>1 is finite, the codimension-two set of fixed points Σ^d_⋆ embeds into X^d+2=D^2/ℤ_n×Σ^d via a delta function in codimension two, that is
Σ^d_⋆δ^2↪ X^d+2 (Σ_⋆= defect, finite n>1).
This is because the ℤ_n>1 action locally induces a conically singular geometry about the center of the disk and, on this background, some of the components of the Einstein tensor contain a term of the form <cit.> (here we take ρ=R_0θ, so that the metric about the center of D^2/ℤ_n is locally given by dρ^2 + n^-2ρ^2 dϕ^2)
(1-1/n)∇^2logρ∼(1-1/n)δ^2(ρ) ,
which is not present in the smooth case n=1. Due to (<ref>) and in order to have a well-defined variational principle, one needs to couple to the gravitational action a Nambu–Goto term with support on Σ_⋆, which in turn fixes the form of the stress-energy tensor to
T^Σ_⋆_ij = (1/(4G_d+2))(1-1/n) h^(0)_ij ,
where we recall that h^(0)_ij=h_ij(0, x) denotes the induced metric on Σ_⋆. Hence, just as in the boundary case, the existence of (<ref>)—whose insertion is possible because of the singular embedding (<ref>) and whose precise form is determined by consistency of the variational principle—is indicative of the existence of a CFT on Σ_⋆.
Defect conformal structure. The common lore states that the boundary of AdS is special because it carries a conformal structure. Although this is true, conformal structures can in general be attached to any submanifold embedded in AdS.
A conformal structure is a metric-dependent[The boundary of a manifold, on the other hand, is a metric-independent notion that only depends on the topology of the manifold, regardless of which metric one puts on it.] notion which refers to an equivalence class of metrics on a given submanifold.
Consider for instance the conformal infinity of (X^d+2, g_X) (as defined in <ref>), whose location coincides with that of the boundary of X at θ=π/2. The induced metric on this submanifold is
g_∂ X = u^2 g_X|_θ=π/2 = (R_0^2/n^2) dϕ^2 + h(π/2, x) .
Then, the fact that the defining function u is not unique implies that the rescaling
u→Ω u
(where Ω is a positive smooth function with no poles at θ=π/2) induces the conformal class of metrics
[g]_∂ X = Ω^2(π/2) g_∂ X ,
on the boundary of X.
The same argument above applies to the defect submanifold Σ_⋆=X|_θ=0. Indeed, due to the non-uniqueness of the defining function (<ref>), Σ_⋆ is naturally equipped with the conformal class
[h]_Σ_⋆=Ω^2(0) h(0,x) ,
where h(0,x)=h_(0) is the induced metric on Σ_⋆. Note that, because of the boundary reconstruction discussed in <ref>, the conformal structure (<ref>) becomes the conformal structure at the boundary of M^d+1 when n→∞.
§.§ Defect central charge and its boundary limit
Because of the arguments given in <ref>, we hereafter assume the existence of a conformal field theory on Σ^d_⋆.
Our aim is now to illustrate with a simple example how the Σ_⋆→Σ transition amounts to computing the central charge of the holographic boundary CFT, defined on Σ^d=∂ M^d+1, from the central charge of the CFT defined on Σ^d_⋆.
To this end, we specialize to the d=2 case and consider a two-dimensional defect embedded in a 4-manifold. As discussed in <ref>, when n>1, the singular embedding Σ^2_⋆↪ X^4 amounts to the insertion of the local stress-energy tensor (<ref>) with support on Σ_⋆, which we recall is given by
T^Σ_⋆_ij = (1/(4G_4))(1-1/n) h^(0)_ij ,
where h^(0)_ij is the induced metric on Σ_⋆ and G_4 is Newton's constant.
Defect central charge.
From the defect point of view, the trace of (<ref>) is classically anomalous in the sense that
Tr(h^-1_(0)T_Σ_⋆)= c_⋆/24π R_Σ_⋆ ,
where c_⋆ denotes the central charge of the CFT on Σ^2_⋆.
In the above, since Tr(h^-1_(0)h_(0))=2, the left hand side gives
Tr(h^-1_(0)T_Σ_⋆) = (1/(2G_4))(1-1/n) ,
while the defect Ricci scalar at the right hand side was determined in (<ref>); it crucially depends on the ratio of the defining function expansion coefficients, that we denote by μ:
R_Σ_⋆ = 2/R_0^2(1+4μ) , μ:=u_2/u_0 .
From (<ref>), (<ref>) and (<ref>), and recalling that Newton's constant can be dimensionally reduced à la Kaluza–Klein as G_4= Vol (S^1_R_0) G_3=2π R_0 G_3 (we denote by Vol(S^1_R_0) the volume of the transverse circle of radius R_0), it follows that
c_⋆ = (1-1/n) 3R_0/(1+4μ) G_3 .
Equation (<ref>) provides a formal expression for the central charge c_⋆ of the field theory on Σ_⋆ in terms of the curvature of the defect (which is in turn controlled by μ:=u_2/u_0), the scale R_0, and Newton's constant in dimension three.
Note that, since the orbifold parameter satisfies n>1 and the radius R_0>0, the sign of c_⋆ is controlled by the curvature coefficient μ. This means that unitarity of the theory on Σ_⋆ depends on the curvature of that manifold; defects whose curvature satisfies 4μ>-1 will support unitary theories, while defects with 4μ<-1 will admit non-unitary ones.
Boundary limit.
Let us conclude by thinking of the Σ_⋆→Σ transition. Recalling from (<ref>) that, when n is large, R_0→ℓ (where ℓ is the AdS^d+1 radius), it follows that
c_⋆→ c= 3ℓ/(1+4μ) G_3 as n→∞ .
Thus, the resulting boundary central charge, denoted by c, will necessarily retain the information about the curvature of the parent defect Σ_⋆ from which the boundary submanifold Σ emerges in the limit n→∞.
Importantly, because of the μ-dependence of (<ref>), we observe that only a defect with positive curvature, μ=1/4,[Note that this is consistent with the fact that stability of the dual CFT requires a positively curved boundary <cit.>.] will give rise to a boundary CFT with central charge equal to the Brown–Henneaux <cit.> central charge c=3ℓ/2G_3.
Indeed, according to our findings, there exist a number of theories on Σ_⋆ whose boundary limit gives rise to holographic, possibly non-unitary theories with different values of their central charge.
For instance, for the background constructed in <ref>, the defining function u=cosθ≈ 1-1/2θ^2+⋯, so that the expansion coefficients u_0=1 and u_2=-1/2. In that case μ=-1/2 and thus the resulting boundary theory is a non-unitary CFT with central charge c=-3ℓ/G_3.
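This example can be checked with a few lines of computer algebra. The short Python sketch below (using sympy; the symbols l and G3 stand for the AdS radius ℓ and the three-dimensional Newton constant G_3, and the script is ours, not part of the paper) expands the defining function u=cosθ, reads off u_0 and u_2, and evaluates the central charge formula, reproducing μ=-1/2 and c=-3ℓ/G_3.

# Sketch: expand u = cos(theta), read off u_0, u_2, and evaluate
# c = 3*l/((1 + 4*mu)*G3) for the example discussed in the text.
import sympy as sp

theta, l, G3 = sp.symbols('theta l G3', positive=True)

u = sp.cos(theta)
series = sp.series(u, theta, 0, 4).removeO()   # 1 - theta**2/2
u0 = series.coeff(theta, 0)                    # 1
u2 = series.coeff(theta, 2)                    # -1/2
mu = u2 / u0                                   # -1/2

c = 3*l / ((1 + 4*mu) * G3)
print(u0, u2, mu, sp.simplify(c))              # 1, -1/2, -1/2, -3*l/G3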
A diagram of the space of possible defect theories and their boundary limit is depicted in Fig. 5 below.
[Fig. 5: Possible CFTs in (μ, T) space. The curvature parameter μ:=u_2/u_0 controls the sign of the central charge of a given theory, and the “temperature" T=1/n defines two different phases. For T>0 (defect phase), the possible CFTs on Σ_⋆ are unitary for μ>-1/4 and non-unitary otherwise. When n is large one reaches the zero-temperature, boundary phase; in this phase, there is a unique point representing a holographic theory with the Brown–Henneaux central charge. Note that, although both u_0, u_2→0 as n→∞, the ratio μ=u_2/u_0 remains finite.]
§ DISCUSSION
In this work we have argued that the dynamics and geometry of the boundary of an asymptotically AdS space can be reconstructed from a conical bulk defect embedded in one higher (co)dimension. Consequently, all the properties of the boundary submanifold, including the capability of encapsulating localizable degrees of freedom in a holographic fashion, can be thought of as inherited from a parent bulk defect.
Based on this idea, we have conjectured that the holographic theory at the boundary of AdS arises in a certain zero-radius limit of a field theory on an infrared defect.
In order to illustrate our conjecture, we worked out the lowest dimensional case and showed that the Brown–Henneaux central charge arises from the zero-radius limit r_n∼1/n→0 of the central charge on a two-dimensional defect (at fixed curvature) embedded in four dimensions.
Our findings seem to manifest the need for the inclusion of bulk defects into the holographic framework. Indeed, following the ideas of the holographic renormalization group flow, one may hypothesize that the conformal field theory on the defect Σ_⋆ represents the infrared fixed point of the dual flow, with the boundary dual theory being the ultraviolet fixed point.
In this regard, the Σ_⋆→Σ transition proposed here would represent a second type of flow in the space of holographic theories, in which the direction of the flow is reversed with respect to the direction of the renormalization group flow, from the infrared to the ultraviolet, at the cost of suppressing one spacetime dimension.
The situation is sketched below:
[Sketch: within each bulk AdS, the holographic RG flow runs from the UV boundary theory (on Σ) to the IR defect theory (on Σ_⋆); for AdS^d+2 this is CFT^d+1 → CFT^d, and for AdS^d+1 it is CFT^d → CFT^d-1. The n→∞ transition maps the IR defect CFT^d of AdS^d+2 onto the UV boundary CFT^d of AdS^d+1, and so on down the dimensional ladder.]
The above elements led us to speculate that gauge/gravity duality belongs to a broader scheme, in which dual gauge theories do not necessarily have support on boundary submanifolds. In such a scheme, bulk gravitational theories ought to be formulated on manifolds with multiple boundaries and extended objects in all possible codimensions; Hilbert spaces are assigned to boundaries (encapsulating states of a boundary, large N gauge theory) as well as to defects (encoding defect states labeled by the codimension number and presumably described by means of a finite N gauge theory).
Furthermore, Hilbert spaces associated to boundaries and defects are expected to be related via a (co)dimensional ladder of dualities involving different limits of the moduli parameters of the theory.
Clearly, many open questions remain to be investigated. The very existence of the infrared type of theories postulated in this work, as well as the universality and robustness of our framework, remain to be studied further; this is the subject matter of some of our current, ongoing research.
§ ACKNOWLEDGEMENTS
I am indebted to A. Waldron for several discussions related to the topic of this article, and his encouragement, support, and coffee invites during my years in Davis.
I would also like to thank G. Arenas, F. Diaz and P. Sundell for collaboration at the initial stage of this project.
The main part of this work was partially supported by the fellowship Postdoctorado en el Extranjero Becas Chile N^ o 74200106, carried out at UC Davis.
I'm currently supported by the grant Fondecyt Postdoctorado N^ o 3220236, hosted by PUC Chile.
I am also grateful to Y. Burak and the Hebrew University of Jerusalem for the kind hospitality and financial support during the completion of this manuscript.
§ CONVENTIONS
Through the body of this article, we take the bulk dimension to be
D=d+2 , d≥2 ,
and often indicate the dimension of a manifold as a superscript; we write X^d+2 to denote a smooth Riemannian manifold of dimension D=d+2. On tensors, we sometimes attach a manifold as a sub or superscript. For instance, we may write R_Σ to indicate that such a tensor is intrinsically defined on Σ or constructed from the induced metric on that manifold.
We omit decorations when all is clear from context.
Given a metric g compatible with a (Levi-Civita) connection ∇ g=0, the Christoffel symbols are given by
Γ^ρ_μν = 1/2 g^ρσ(∂_μ g_νσ+∂_ν g_μσ - ∂_σ g_μν) .
The components of the Riemann and Ricci tensors, and the Ricci scalar are defined as
R_μνρ^σ= -2 ∂_[μΓ_ν]ρ^σ -2 Γ_λ[μ^σΓ_ν]ρ^λ ,
R_μν=R_μλν^λ ,
R=g^μν R_μν .
Einstein equations are
R_μν-1/2 R g_μν+Λ g_μν=0 , Λ=-(D-1)(D-2)/2L^2<0 ,
or equivalently
R_μν+ D-1/L^2 g_μν=0 ,
where L is the AdS^d+2 radius.
§ CURVATURES
Here we collect the Christoffel symbols and components of the Ricci tensor involved in the calculation of Einstein's equations of <ref>.
Consider the globally defined metric (<ref>)
g_X = [R_0^2( dθ^2 + n^-2sin^2θ dϕ^2)+h_ij(θ, x) dx^i dx^j]/u^2(θ) .
The non-vanishing Christoffel symbols are
Γ^θ_θθ=-u'/u , Γ^θ_ϕϕ = sin^2θ/n^2(u'/u-cotθ) , Γ^θ_ij =1/R_0^2(u'/u h_ij-1/2h'_ij) , Γ^ϕ_θϕ =cotθ-u'/u , Γ^i_θ j=1/2 h^ikh'_jk -u'/uδ^i_j ,
where the prime denotes derivative with respect to θ. The non-zero components of the Ricci tensor of g_X are
R_θθ = -1/2 Tr(h^-1h”)+1/4 Tr(h^-1h'h^-1h')+1/2u'/u Tr(h^-1h')+1+cotθ u'/u+(d+1)[u”/u-(u'/u)^2] ,
R_θ i = 1/2h^jk∇_j h'_ik -1/2h^jk∇_i h'_jk ,
R_ϕϕ =sin^2θ/n^2[1+u”/u+(u'/u-cotθ)(1/2 Tr(h^-1h')-(d+1)u'/u)] ,
R_ij =R_ij(h)-1/2R_0^2h”_ij +1/2R_0^2 (h'h^-1h')_ij- 1/2R_0^2[cotθ-d u'/u+1/2 Tr(h^-1h')] h'_ij +1/R_0^2[u”/u-(d+1)(u'/u)^2+cotθ u'/u+1/2u'/u Tr(h^-1h')] h_ij .
In the above, we have introduced the simplified notation Tr(h^-1h”)=h^ijh”_ij, Tr(h^-1h'h^-1h')=h^ijh'_ikh^kl h'_jl, and (h'h^-1h')_ij=h'_ikh^klh'_jl.
Also, in the last equation, we have explicitly indicated that R_ij(h) is the Ricci tensor of the metric h.
|
http://arxiv.org/abs/2307.01438v1
|
20230704020910
|
Cubature Kalman filter Based on generalized minimum error entropy with fiducial point
|
[
"Jiacheng He",
"Gang Wang",
"Zhenyu Feng",
"Shan Zhong",
"Bei Peng"
] |
cs.IT
|
[
"cs.IT",
"math.IT"
] |
Cubature Kalman filter Based on generalized minimum error entropy with fiducial point
Jiacheng He, Gang Wang, Zhenyu Feng, Shan Zhong, Bei Peng
The NNSFC funded this research with Grant 51975107, together with the Sichuan Science and Technology Major Project Nos. 2022ZDZX0039, No. 2019ZDZX0020, and No. 2022YFG0343. (Corresponding author: Bei Peng.)
J. He, Z. Feng, S. Zhong, and B. Peng are with the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China (UESTC) (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
G. Wang is with the School of Information and Communication Engineering, UESTC (e-mail: [email protected]).
August 1, 2023
In real applications, non-Gaussian distributions are frequently caused by outliers and impulsive disturbances, and these will impair the performance of the classical cubature Kalman filter (CKF) algorithm. In this letter, a modified generalized minimum error entropy criterion with fiducial point (GMEEFP) is studied to ensure that the error converges to around zero, and a new CKF algorithm based on the GMEEFP criterion, called the GMEEFP-CKF algorithm, is developed. To demonstrate the practicality of the GMEEFP-CKF algorithm, several simulations are performed, and it is demonstrated that the proposed GMEEFP-CKF algorithm outperforms the existing CKF algorithms in the presence of impulse noise.
cubature Kalman filter, GMEEFP, impulse noise.
§ INTRODUCTION
For linear dynamic systems influenced by white Gaussian noise, the Kalman filter offers the best solution for state estimation problems utilizing the minimum mean square error criterion. Numerous nonlinear extensions, including extended KF (EKF) <cit.>, unscented KF (UKF) <cit.>, cubature KF (CKF) <cit.>, and their variants, have been derived for nonlinear dynamical systems. The CKF is widely applied as a result of its third-order computational accuracy and greater numerical stability <cit.>. In reality, non-Gaussian noise <cit.> frequently taints measurement data, which can materially impair the accuracy of the traditional CKF algorithm.
To mitigate this, robust variants of the CKF are designed to suppress the influence of information contaminated by non-Gaussian noise. In recent years, cost functions (learning criteria) based on information theoretic learning (ITL) have received a lot of attention, and they have been widely combined with the CKF. Several CKF algorithms incorporating the maximum correntropy criterion (MCC) have been proposed <cit.>. In addition, CKF algorithms based on variants of the MCC have been studied <cit.>. Furthermore, a new robust learning criterion from ITL, called minimum error entropy (MEE), performs better than the MCC. The CKF algorithms <cit.> based on MEE and mixture MEE are a natural development.
However, in MEE and mixture MEE <cit.>, the Gaussian function is invariably adopted as the kernel function, which is not always the best choice; to this end, a more robust generalized MEE (GMEE) learning criterion has been proposed <cit.>. It is natural to expect that the GMEE criterion can improve the performance of the existing CKFs. However, the existing GMEE criterion aims only to minimize the differences among errors, which may lead to errors that do not converge to near zero. These two points constitute the main motivation for this letter.
In this letter, a modified GMEE criterion with a fiducial point (GMEEFP) is proposed to ensure that the error converges to around zero. A new CKF method based on the proposed GMEEFP criterion is developed. A few simulations are implemented to demonstrate the algorithm's feasibility.
§ PROBLEM FORMULATION
A nonlinear dynamic system is presented as
x_k = f( x_k - 1) + q_k - 1 ,
y_k = h( x_k) + r_k .
Here x_k∈ℝ^n × 1 represents the state vector at moment k, y_k∈ℝ^m × 1 stands for the measurement vector; the state transfer and measurement functions are f( ·) and h( ·); q_k - 1 and r_k are zero-mean process and measurement noises with covariance matrices Q_k - 1 and R_k. The traditional CKF is a classical algorithm that uses observed information to derive an estimate of x_k. Prediction and update are the main steps of the conventional CKF method.
§.§.§ Prediction Step
Generate cubature points ξ_i;k - 1|k - 1 using ξ_i;k - 1|k - 1 = S_k - 1|k - 1φ_i + x̂_k - 1|k - 1.
Here φ_i is set as φ_i = √(n)a_i for i = 1,2, ⋯ ,n and φ_i = - √(n)a_i for i = n + 1, ⋯ ,2n, and a_i represents the unit vector; S_k - 1|k - 1 can be obtained by the Cholesky decomposition of P_k - 1|k - 1.
Perform propagation calculations for ξ_i;k - 1|k - 1 using
X_i;k|k - 1 = f( ξ_i;k - 1|k - 1),( i = 1,2, ⋯ 2n).
Calculate x̂_k|k - 1 and P_xx;k|k - 1 by fusing all X_i;k|k - 1 with weight 1/(2n):
x̂_k|k - 1 = 1/2n∑_i = 1^2nX_i;k|k - 1 ,
P_xx;k|k - 1 = 1/2n∑_i = 1^2nX̂_i;k|k - 1X̂_i;k|k - 1^T + Q_k - 1 ,
where X̂_i;k|k - 1 = X_i;k|k - 1 - x̂_k|k - 1, ( ·)^T is the transpose operation of a matrix.
§.§.§ Update Step
Determine cubature points ξ_i;k|k - 1 utilizing ξ_i;k|k - 1 = S_k|k - 1φ_i + x̂_k|k - 1,
where S_k|k - 1 can be obtained utilizing the Cholesky decomposition of P_xx;k|k - 1. Then, ξ_i;k|k - 1 are propagated using
γ_i;k = h( ξ_i;k|k - 1),( i = 1,2, ⋯ ,2n).
Then the predicted measurement vector ŷ_k|k - 1 and the matrices P_yy;k|k - 1 and P_xy;k|k - 1 can be obtained using
ŷ_k|k - 1 = 1/2n∑_i = 1^2nγ_i;k ,
P_yy;k|k - 1 = 1/2n∑_i = 1^2nγ̂_i;kγ̂_i;k^T + R_k ,
P_xy;k|k - 1 = 1/2n∑_i = 1^2nX̂_i;k|k - 1γ̂_i;k^T ,
where γ̂_i;k = γ_i;k - ŷ_k|k - 1.
Calculate the posterior state vector x̂_k|k and covariance P_k|k utilizing
x̂_k|k = x̂_k|k - 1 + K_k( y_k - ŷ_k|k - 1),
P_k|k = P_xx;k|k - 1 - K_kP_yy;k|k - 1K_k^T
with the Kalman gain K_k = P_xy;k|k - 1P_yy;k|k - 1^ - 1.
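For concreteness, the prediction and update steps above can be condensed into the following Python/NumPy sketch of one conventional CKF cycle; the function names and code organization are ours and only illustrate the equations, not any reference implementation.

# Minimal sketch of one conventional CKF predict/update cycle.
# f, h are the process and measurement functions; Q, R their noise covariances.
import numpy as np

def cubature_points(x_hat, P):
    n = x_hat.size
    S = np.linalg.cholesky(P)                  # P = S S^T
    phis = np.hstack((np.sqrt(n) * np.eye(n), -np.sqrt(n) * np.eye(n)))
    return x_hat[:, None] + S @ phis           # shape (n, 2n)

def ckf_step(x_hat, P, y, f, h, Q, R):
    n = x_hat.size
    # prediction
    Xi = cubature_points(x_hat, P)
    Xp = np.column_stack([f(Xi[:, i]) for i in range(2 * n)])
    x_pred = Xp.mean(axis=1)
    Xc = Xp - x_pred[:, None]
    P_pred = Xc @ Xc.T / (2 * n) + Q
    # update
    Xi = cubature_points(x_pred, P_pred)
    Yp = np.column_stack([h(Xi[:, i]) for i in range(2 * n)])
    y_pred = Yp.mean(axis=1)
    Yc = Yp - y_pred[:, None]
    Xc = Xi - x_pred[:, None]
    P_yy = Yc @ Yc.T / (2 * n) + R
    P_xy = Xc @ Yc.T / (2 * n)
    K = P_xy @ np.linalg.inv(P_yy)             # Kalman gain
    return x_pred + K @ (y - y_pred), P_pred - K @ P_yy @ K.T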
However, due to impulsive disturbances, outliers, or other factors, the distributions of r_k are generally no longer Gaussian and exhibit heavy-tailed properties. Such non-Gaussian distributions will degrade the performance of the existing KF algorithms since they are initially devised under Gaussian assumptions. To deal with this performance degradation, in this work, a robust KF algorithm is developed to estimate the state x_k utilizing the information y_k contaminated by non-Gaussian noise. Specifically, a GMEEFP criterion is developed, and it is combined with the cubature Kalman filter to dampen the negative effect of the non-Gaussian noises.
§ CKF BASED ON GMEE WITH FIDUCIAL POINT
This part develops a modified GMEE criterion with fiducial point, and the cubature KF combined with the proposed criterion is presented.
§.§ The GMEE with fiducial point
The information potential (IP) V̂_α ,β( X,Y) of the GMEE criterion <cit.> is presented in (<ref>)
V̂_α ,β( X,Y) = V̂_α ,β( e) = 1/N^2∑_i = 1^N ∑_j = 1^N G_α ,β( e_i - e_j) ,
where X and Y denote random vectors; the parameters α > 0 and β > 0 are the shape parameter and scale parameter; N stands for the number of errors in e = [e_1,e_2, ⋯ ,e_N], and G_α ,β( e ) = [ α/( 2βΓ( 1/α))]exp( - | e |^α/β ^α) represents the generalized Gaussian density <cit.>.
From (<ref>), one can see that the role of (<ref>) is to minimize the disparities among errors, which may result in the errors not converging to around 0; for example, each error may be large while the differences among them remain small. To address this shortcoming of the GMEE criterion, we construct a modified error vector e_m = [ e_0,e], where e_0 = 0 represents a constant error that provides a reliable datum for all errors. Considering the fiducial point, the IP V̂_α ,β( e) of the generalized error entropy can be rewritten as
V̂_α ,β( e_m) = 1/( N + 1)^2∑_i = 0^N ∑_j = 0^N G_α ,β( e_i - e_j)
= 1/( N + 1)^2[
2∑_i = 1^N G_α ,β( e_i) + G_α ,β( 0 ) +
∑_i = 1^N ∑_j = 1^N G_α ,β( e_i - e_j) + G_α ,β( 0 )
].
Minimizing the generalized error entropy with fiducial point implies maximizing the IP, and the constants G_α ,β( 0 ) and 1/( N + 1)^2 do not affect the result of maximizing the IP. The resulting learning criterion is called the GMEEFP criterion. Therefore, the constants G_α ,β( 0 ) and 1/( N + 1)^2 are ignored, and we can obtain
J = 2∑_i = 1^N G_α _1,β _1( e_i) + ∑_i = 1^N ∑_j = 1^N G_α _2,β _2( e_i - e_j) .
From (<ref>), it can be derived that the new IP is a linear combination of the generalized maximum correntropy and the GMEE IP. In order to balance the ratio of these two IPs, (<ref>) can be written as
J = λ∑_i = 1^N G_α _1,β _1( e_i) + ( 1 - λ)∑_i = 1^N ∑_j = 1^N G_α _2,β _2( e_i - e_j) ,
where λ∈[ 0,1] is an equilibrium factor. The best result can be reached using the GMEEFP criterion when the errors are forced to decrease to zero. From (<ref>), one can see that the GMEEFP criterion combines the features of the generalized MCC and GMEE: the GMEE term minimizes the differences among errors, the MCC term serves to fix all errors around 0, and the equilibrium factor balances the proportion between the GMEE and GMCC terms.
When α _1 = α _2 = 2, (<ref>) reduces to a linear combination of the MCC and the MEE IP, which means the MEE with fiducial point <cit.> is a special case of GMEEFP.
When λ = 1, the GMEEFP criterion reduces to the generalized maximum correntropy criterion; when λ = 0, the GMEEFP criterion reduces to the GMEE criterion. It is clear that the generalized maximum correntropy and GMEE criteria are special cases of the GMEEFP criterion.
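As an illustration only, the GMEEFP objective (<ref>) can be evaluated for a given error vector as in the Python sketch below; the helper names and the example numbers are ours and are not taken from the letter.

# Sketch: evaluate the GMEEFP objective J(e) (to be maximized) for an error
# vector e, with generalized Gaussian kernel G_{alpha,beta}.
import numpy as np
from scipy.special import gamma

def gg_kernel(e, alpha, beta):
    # G(e) = alpha/(2*beta*Gamma(1/alpha)) * exp(-|e|^alpha / beta^alpha)
    return alpha / (2 * beta * gamma(1 / alpha)) * np.exp(-np.abs(e) ** alpha / beta ** alpha)

def gmeefp_objective(e, lam, a1, b1, a2, b2):
    e = np.asarray(e, dtype=float)
    mcc_part = gg_kernel(e, a1, b1).sum()      # fiducial-point (correntropy-like) term
    diff = e[:, None] - e[None, :]             # pairwise error differences
    mee_part = gg_kernel(diff, a2, b2).sum()   # GMEE term
    return lam * mcc_part + (1 - lam) * mee_part

# Errors clustered away from zero keep the GMEE term large but shrink the
# fiducial-point term, so maximizing J pushes the errors toward zero.
print(gmeefp_objective([2.0, 2.1, 1.9], lam=0.5, a1=2.2, b1=6.0, a2=2.2, b2=6.0))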
§.§ The proposed Cubature Kalman filter
In the regression-based KF solution, the measurement equation and filter update are reformulated as a regression problem <cit.>; therefore, the measurement function and state prediction error are combined to create a regression model of the form
[ [ x̂_k|k - 1; y_k ]] = [ [ x_k; h( x_k) ]] + [ [ - ε_k|k - 1; r_k ]],
where ε_k|k - 1 = x_k - x̂_k|k - 1 represents the prediction error. The hidden state x_k is challenging to extract from the nonlinear measurement equation. A linearized measurement function can be derived by using the statistical linearization in <cit.> as shown below:
y_k = ŷ_k|k - 1 + H_kε_k|k - 1 + r_k + v_k,
where the linearized matrix H_k is obtained using H_k = ( P_xx;k|k - 1^ - 1P_xy;k|k - 1^ - 1)^T.
Combining with (<ref>), (<ref>) can be further written as
[ [ x̂_k|k - 1; y_k - ŷ_k|k - 1 + H_kx̂_k|k - 1 ]] = [ [ I_n; H_k ]]x_k + μ_k
with
μ_k = [ [ - ε_k|k - 1; r_k + v_k ]].
where I_n stands for an identity matrix. The covariance of augmented error μ_k is calculated using
E[ μ_kμ_k^T] = Θ_kΘ_k^T
= [ [ Θ_p;k|k - 1Θ_p;k|k - 1^T 0; 0 Θ_r;kΘ_r;k^T ]],
where Θ_k, Θ_p;k|k - 1, and Θ_r;k can be achieved using the Cholesky decomposition of E[ μ_kμ_k^T], P_xx;k|k - 1, and P_yy;k|k - 1 + P_xy;k|k - 1^TP_xx;k|k - 1^ - 1P_xy;k|k - 1, respectively.
We can obtain (<ref>) by multiplying both sides of (<ref>) by Θ_k^ - 1:
d_k = W_kx_k + e_k
with
d_k = Θ_k^ - 1[ [ x̂_k|k - 1; y_k - ŷ_k|k - 1 + H_kx̂_k|k - 1 ]],
W_k = Θ_k^ - 1[ [ I_n; H_k ]],
and
e_k = Θ_k^ - 1[ [ - ε_k|k - 1; r_k + v_k ]].
According to the proposed GMEEFP criterion, the following is an expression for the cost function:
J_GMEEFP = λ∑_i = 1^N G_α _1,β _1( e_i) +
( 1 - λ)∑_i = 1^N ∑_j = 1^N G_α _2,β _2( e_i - e_j) ,
where e_i = d_i;k - w_i;kx_k; e_i and d_i;k represent the ith elements of e_k and d_k, respectively; w_i;k represents the ith row of W_k, and N = m + n. The optimal estimate of the system state can be achieved by calculating x̂_k = arg max_x_kJ_GMEEFP( x_k).
Taking the derivative of (<ref>) with respect to x_k, we can obtain
∂J_GMEEFP/∂x_k = W_k^TΛ_kd_k - W_k^TΛ_kW_kx_k
with
Λ_k = λ _1Π_k + λ _2( Ψ_k - Φ_k),
[ Ψ_k]_ij = ∑_j = 1^N G_α _2,β _2( e_i;k - e_j;k)| e_i;k - e_j;k|^α _2 - 2 for i = j, and [ Ψ_k]_ij = 0 for i ≠ j,
[ Φ_k]_ij = G_α _2,β _2( e_j;k - e_i;k)| e_j;k - e_i;k|^α _2 - 2,
[ Π_k]_ij = G_α _1,β _1( e_i)| e_i|^α _1 - 2 for i = j, and [ Π_k]_ij = 0 for i ≠ j,
λ _1 = λ( α _1/β _1^α _1),
λ _2 = ( 1 - λ)( 2α _2/β _2^α _2).
The derivative of (<ref>) is set to zero. Similar to the derivation in <cit.>, we can obtain
x_k = ( W_k^TΩ_kW_k)^ - 1W_k^TΩ_kd_k,
where Ω_k = λ _1Π_k + λ _2( Ψ_k^TΨ_k + Φ_k^TΦ_k). It is clear that the right-hand side of (<ref>) is a function of x_k. Hence, a fixed point iterative (FPI) equation is as follows:
x̂_k;t + 1 = f( x̂_k;t) = ( W_k^TΩ_k;tW_k)^ - 1W_k^TΩ_k;td_k,
where t is the number of the FPI, and the initial value of the FPI is x̂_k;0 = x̂_k|k - 1.
The matrix Ω_k;t can also be expressed as follows:
Ω_k;t = [ [ Ω_x;k;t Ω_yx;k;t; Ω_xy;k;t Ω_y;k;t ]]
with
{Ω_x;k;t∈ℝ^n × n,Ω_xy;k;t∈ℝ^m × n,
Ω_yx;k;t∈ℝ^n × m,Ω_y;k;t∈ℝ^m × m.
.
Substituting (<ref>) into W_k^TΩ_k;tW_k yields
W_k^TΩ_k;tW_k = P̅_k|k - 1;t^x + H_k^TP̅_k|k - 1;t^xy +
( P̅_k|k - 1;t^yx + H_k^TP̅_k|k - 1;t^y)H_k
with
{P̅_k|k - 1;t^x = ( Θ_p;k|k - 1^ - 1)^TΩ_x;k;tΘ_p;k|k - 1^ - 1,
P̅_k|k - 1;t^xy = ( Θ_r;k^ - 1)^TΩ_xy;k;tΘ_p;k|k - 1^ - 1,
P̅_k|k - 1;t^yx = ( Θ_p;k|k - 1^ - 1)^TΩ_yx;k;tΘ_r;k^ - 1,
P̅_k|k - 1;t^y = ( Θ_r;k^ - 1)^TΩ_y;k;tΘ_r;k^ - 1.
.
In a similar way, W_k^TΩ_k;td_k can be further represented as
W_k^TΩ_k;td_k = P̅_k|k - 1;t^xx̂_k|k - 1 + H_k^TP̅_k|k - 1;t^xyx̂_k|k - 1
+ P̅_k|k - 1;t^yx( y_k - ŷ_k|k - 1 + H_kx̂_k|k - 1) +
H_k^TP̅_k|k - 1;t^y( y_k - ŷ_k|k - 1 + H_kx̂_k|k - 1).
For calculating W_k^TΩ_k;tW_k, the matrix inversion lemma is employed, and we can obtain
( W_k^TΩ_k;tW_k)^ - 1 = ( P̅_k|k - 1;t^x + H_k^TP̅_k|k - 1;t^xy)^ - 1 -
( P̅_k|k - 1;t^x + H_k^TP̅_k|k - 1;t^xy)^ - 1( P̅_k|k - 1;t^yx + H_k^TP̅_k|k - 1;t^y)
×[ I + H_k( P̅_k|k - 1;t^x + H_k^TP̅_k|k - 1;t^xy)^ - 1
×( P̅_k|k - 1;t^yx + H_k^TP̅_k|k - 1;t^y)
]^ - 1×
H_k( P̅_k|k - 1;t^x + H_k^TP̅_k|k - 1;t^xy)^ - 1.
Substituting (<ref>) and (<ref>) into (<ref>), and x̂_k;t + 1 can be further written as
x̂_k;t + 1 = x̂_k|k - 1 + K_k;t( y_k - ŷ_k|k - 1)
with
K_k;t = ( W_k^TΩ_k;tW_k)^ - 1( P̅_k|k - 1;t^yx + H_k^TP̅_k|k - 1;t^y).
If the result satisfies ||x̂_k;t + 1 - x̂_k;t|| / ||x̂_k;t|| ≤τ, the FPI iterations are considered to have converged, and K_k;t = K_k.
Finally, the posterior covariance matrix can be updated using
P_k|k = ( I - K_kH_k)P_k|k - 1( I - K_kH_k)^T + K_kR_kK_k^T.
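Putting the pieces together, the measurement update of the GMEEFP-CKF reduces to the fixed-point iteration (<ref>). A schematic Python sketch is given below; it iterates (<ref>) directly rather than through the gain form, and build_Omega is a placeholder for the construction of Ω_k;t from the current errors, not a function defined in this letter.

# Sketch of the fixed-point iteration (FPI) at the core of the GMEEFP-CKF
# measurement update, starting from the prior estimate x_pred = x_hat_{k|k-1}.
import numpy as np

def gmeefp_update(x_pred, d, W, build_Omega, tau=1e-4, max_iter=50):
    x = x_pred.copy()
    for _ in range(max_iter):
        e = d - W @ x                          # current regression errors
        Omega = build_Omega(e)                 # lambda_1*Pi + lambda_2*(Psi^T Psi + Phi^T Phi)
        x_new = np.linalg.solve(W.T @ Omega @ W, W.T @ Omega @ d)
        if np.linalg.norm(x_new - x) <= tau * np.linalg.norm(x):
            return x_new                       # relative change below threshold
        x = x_new
    return x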
§ SIMULATION
In this part, the efficiency of the GMEEFP-CKF is compared to that of the CKF <cit.>, MCCKF <cit.>, and MEEF-CKF <cit.>. All simulations are averaged over 200 Monte Carlo runs, where 200 samples are used to calculate the mean-square deviation (MSD) that is utilized to evaluate the effectiveness of the proposed method in relation to its competitors. The concept of MSD is defined as MSD = 10log _10||x_k - x̂_k|k||^2, where x_k denotes the real state of the system.
A vehicle tracking model is considered, and a process equation is given as
x_k = [ [ I_2 Δ TI_2; 0 I_2 ]]x_k - 1 + q_k - 1,
where I_2 denotes the unit matrix and Δ T = 0.5s. State x_k = [ [ p_1;k p_2;k v_1;k v_2;k ]]^T contains position p_1;k and velocity v_1;k of the target in the x-axis and the position p_2;k and velocity v_2;k in the y-axis.
The measurement equation with the distance and angle of the target is written as:
z_k = [ [ √(p_1;k^2 + p_2;k^2); arctan( p_2;k/p_1;k) ]] + r_k .
In the numerical simulation, the process noise of the system is set to Gaussian noise 𝒩( 0,0.1), and the measurement noise is set to mixed-Gaussian noise <cit.> [r_k]_i∼0.96𝒩( 0,1) + 0.04𝒩( 0,100). The initial values of x̂_0|0 and P_0|0 are set to
{x̂_0|0∼𝒩( x_0,I_n),
P_0|0 = I_n,
.
where x_0 = [ 1,1,10,20]^T is the true initial state of the target.
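The simulation setup can be sketched as follows; the trajectory-generation code is our own illustration of the stated model and noise settings, not the authors' implementation.

# Sketch: generate one trajectory of the vehicle-tracking model with
# mixed-Gaussian (impulsive) measurement noise, as described above.
import numpy as np

rng = np.random.default_rng(0)
dT, n_steps = 0.5, 200
F = np.block([[np.eye(2), dT * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])       # process matrix
Q = 0.1 * np.eye(4)                                 # process noise covariance

def h(x):
    # range-bearing measurement (arctan2 used for numerical robustness)
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

x = np.array([1.0, 1.0, 10.0, 20.0])                # true initial state x_0
states, measurements = [], []
for _ in range(n_steps):
    x = F @ x + rng.multivariate_normal(np.zeros(4), Q)
    # measurement noise: 0.96*N(0,1) + 0.04*N(0,100) per component
    scale = np.where(rng.random(2) < 0.96, 1.0, 10.0)
    y = h(x) + scale * rng.standard_normal(2)
    states.append(x.copy())
    measurements.append(y)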
Fig. <ref> displays the performance of several methods in terms of MSD. Table <ref> presents the steady-state MSD of the GMEEFP-CKF method employing different α and β, and Fig. <ref> shows the convergence curve of the MSD with different λ. From these simulation results, one can observe that: 1) the proposed GMEEFP-CKF algorithm outperforms the existing CKF algorithms with mixed-Gaussian noise; 2) the proposed algorithm obtains the optimal performance with mixed-Gaussian noise when α _2 = 2.2 and β _2 = 6.0; and 3) the performance of the GMEEFP-CKF method decreases as λ increases.
§ CONCLUSION
In this letter, the GMEEFP criterion is proposed to ensure the error converges to around zero.
In combination with the GMEEFP criterion, a CKF is derived to reduce the effect of non-Gaussian noise. The suggested technique outperforms existing methods for nonlinear system state estimation with non-Gaussian noise, according to simulation findings.
|
http://arxiv.org/abs/2307.02429v1
|
20230705165154
|
DarkHorse: A UDP-based Framework to Improve the Latency of Tor Onion Services
|
[
"Md Washik Al Azad",
"Hasniuj Zahan",
"Sifat Ut Taki",
"Spyridon Mastorakis"
] |
cs.CR
|
[
"cs.CR",
"cs.NI"
] |
DarkHorse: A UDP-based Framework to Improve the Latency of Tor Onion Services
Md Washik Al Azad
University of Notre Dame
[email protected]
Hasniuj Zahan
University of Nebraska at Omaha
[email protected]
Sifat Ut Taki
University of Notre Dame
[email protected]
Spyridon Mastorakis
University of Notre Dame
[email protected]
Tor is the most popular anonymous communication overlay network which hides clients’ identities from servers by passing packets through multiple relays. To provide anonymity to both clients and servers, Tor onion services were introduced by increasing the number of relays between a client and a server. Because of the limited bandwidth of Tor relays, large numbers of users, and multiple layers of encryption at relays, onion services suffer from high end-to-end latency and low data transfer rates, which degrade user experiences, making onion services unsuitable for latency-sensitive applications. In this paper, we present a UDP-based framework, called DarkHorse, that improves the end-to-end latency and the data transfer overhead of Tor onion services by exploiting the connectionless nature of UDP. Our evaluation results demonstrate that DarkHorse is up to 3.62× faster than regular TCP-based Tor onion services and reduces the Tor network overhead by up to 47%.
Tor, Onion Services, Anonymous Communication, Latency, UDP
§ INTRODUCTION
Maintaining privacy while accessing the Internet has been a major concern over the past several years.
Researchers have tried to address this problem by proposing various network anonymization techniques, such as Mix-Net <cit.>, Babel <cit.>, and Mixminion <cit.>. However, these proposals were not widely adopted because of the issues with impractically high latency. The Onion Routing project (Tor) <cit.> was able to provide anonymous services to anonymous users while achieving substantially lower latency than the previous approaches. As such, Tor established itself as the most popular low-latency anonymous network service to this day.
Global Internet traffic is growing rapidly year over year. According to the CISCO Annual Internet Traffic Report, the fixed broadband Internet speed and the mobile cellular Internet speed will reach 110.4 Mbps and 43.9 Mbps, respectively, by the end of 2023 <cit.>. The massive increase in Internet speed allows service providers to deploy applications over the Internet that were not previously possible, such as video streaming, Augmented Reality/Virtual Reality (AR/VR), robotics, and healthcare applications <cit.>. The requirement for
low latency networks is expected to only grow in the future.
Although Tor offers a relatively low-latency anonymous overlay network, the overall performance and data transfer rates are still low. Tor works by deploying a number of relay nodes around the world and routing data through these nodes via multiple layers of encryption to achieve anonymity. Routing data through the relay nodes incurs an overhead on the network, which increases with the number of relay nodes. Moreover, the relay nodes have limited computation power and bandwidth, which further impacts the performance of Tor.
In 2003, Tor onion services (also known as hidden services) were introduced <cit.> to provide anonymity to both clients and servers at the same time by doubling the number of relays between two communicating parties <cit.>. As a result, the latency of onion services increased further, making onion services impractical to use for
low-latency network applications (e.g., video streaming over Tor). Furthermore, transferring large data files through onion services is unreliable due to low data transfer rates, availability issues, and unpredictable performance of Tor relays. As such, reducing the number of relay nodes between the client and the onion server can improve the overall performance, and the limited resources of the Tor overlay network can be utilized more effectively. As a result, the capacity of the Tor network to serve clients will increase.
To address these limitations, in this paper, we propose DarkHorse, a UDP-based framework for onion services.
DarkHorse exploits the connectionless nature of UDP to create a unidirectional path from a sender to a receiver through the Tor overlay network using a temporary source IP address for transmitting packets. As a result, DarkHorse enables onion services to use 50% fewer relay nodes, improving the performance and latency of the network while preserving the anonymity of both clients and servers. The contributions of our paper are the following:
* We present the design of DarkHorse, which improves the performance and latency of onion services.
* We develop a DarkHorse prototype and evaluate its performance by comparing (1) bootstrap time, (2) end-to-end per packet delay, (3) data transfer time, and (4) overhead with the vanilla onion service design[In this paper, we use the term “vanilla Tor” to refer to regular TCP-based Tor and the term “vanilla onion services” to refer to regular TCP-based Tor onion services.].
* We scale our evaluation up to five hundred concurrent client connections to analyze how the performance of DarkHorse is impacted as the number of concurrent connections grows.
The rest of the paper is organized as follows: Section <ref> discusses previous work that aimed to improve the performance of the Tor network. Section <ref> presents the design and workflow of DarkHorse. Section <ref> presents our evaluation results and comparison with vanilla onion services. Finally, Section <ref> concludes our paper and discusses future work.
§ RELATED WORK
Tor <cit.> is a realization of onion routing <cit.> to provide anonymity to clients on the public Internet through an overlay network. Tor uses multiple relay nodes (called a circuit, usually, with three nodes: a guard node, a middle node, and an exit node) to send packets from clients to servers in order to hide the identity of clients. To ensure the anonymity of both clients and servers, onion services (formerly known as hidden services) <cit.> were later introduced, where additional relay nodes are used to offer server anonymity. As the relay nodes are spread all over the world with limited bandwidth and multiple layers of encryption, the anonymity provided by onion services comes at the cost of high latency <cit.>.
Different approaches have been proposed to improve the performance of the Tor network from various perspectives, such as efficient path (relay) selection, multi-path routing, and multi-threaded relays. Sherr et al. <cit.> and Panchenko et al. <cit.> proposed two path selection algorithms based on the measured latency between two endpoints. In ShorTor <cit.>, the authors exploited multi-hop overlay routing of Content Delivery Networks (CDNs) to find an optimal path between users and servers with the goal of reducing the latency. Multi-path routing-based approaches were proposed in <cit.> and <cit.> to improve the performance of Tor for bandwidth-intensive applications. A multi-threaded internal architecture for relays has also been proposed, so that resource utilization is enhanced and, as a result, the throughput of the relays and the capacity of the network are increased <cit.>.
Finally, a UDP-based approach, called UDP-OR, has been proposed <cit.>, where a client is connected to an onion proxy using TCP, connections between two intermediate relays of a circuit use UDP, and the exit node communicates with a server using TCP. Although UDP-OR can be used for onion services, it requires at least six relay nodes (for a standard onion service connection) between clients and servers in order to provide anonymity to both communicating parties. To this end, our proposed framework, DarkHorse, can preserve the anonymity of both clients and servers by using only three relay nodes for onion services.
§ DESIGN
§.§ Design Assumptions and Overview
We assume that Tor relays, clients, and onion servers allow UDP traffic apart from regular TCP-based Tor traffic.
We assume that a client can be anonymized from a server with a circuit consisting of n Tor relays, and with another set of m Tor relays, the server can also be anonymized from the client. As a result, to provide anonymity to both communicating parties (for an onion service), (n + m) Tor relays are needed in total. In general, m is equal to n to provide the same level of anonymity to both clients and servers, and n and m are set to three, since this is considered a good balance between performance and anonymity by the Tor community <cit.>. All attacks which are possible against vanilla Tor and vanilla onion services (such as traffic analysis and correlations, timing attacks, and fingerprinting attacks) are also possible against DarkHorse. For our proposed framework, there is no need to modify the existing relay selection algorithms for Tor circuits. We do not claim to improve the anonymity of the client and the server with DarkHorse as compared to the vanilla onion service. Instead, our proposed framework improves the performance of onion services in terms of data transfer rates and latency by reducing the number of required relays by half compared to vanilla onion services.
As a result, the resource utilization of the Tor network, such as bandwidth and computing power of relays, for data transfers will be reduced. Finally, DarkHorse is only applicable to onion services (where the anonymity of both clients and servers is required) instead of regular Tor with three relays (where only the client's anonymity is required).
Figure <ref> shows the connections between a client and a server in DarkHorse. These connections are divided into two types of channels: (i) control channels; and (ii) data channels. A control channel is a session between the client and the onion server through vanilla onion service and will be used to exchange information that is needed in order to create data channels. A data channel is a one-way UDP connection (packets flow only in one direction) either from the client to the server or vice versa. To make a UDP connection unidirectional, the sender of the packets will use a temporary IP address (instead of its actual IP address) as the source IP address. As a result, the receiver of the packets cannot reach the sender using the same path. The reason we do not use TCP for unidirectional connections is that TCP requires a 3-way handshake between the sender and the receiver. There are two separate data channels in DarkHorse: one channel for sending packets from the client to the onion server and another one for sending packets in the opposite direction (onion server to client).
The operation of DarkHorse has two phases: (i) bootstrap phase; and (ii) data transfer phase. In the bootstrap phase, the control channel and the data channels are created. In the data transfer phase, data packets are transmitted and lost packets are recovered (retransmitted).
§.§ Bootstrap Phase of DarkHorse
Figure <ref> shows the workflow of the bootstrap phase. It starts with creating a control channel between the client and the server. After creating the control channel, both the client and the server select three relay nodes each to establish data channels. Tor's relay selection algorithm is used to choose these relays of the data channels. Similar to the TCP-based Tor circuit creation process, encryption keys are exchanged with these selected relays to build a UDP-based circuit and perform onion routing through that circuit. Then two random IP addresses Temp_IP_Server and Temp_IP_Client are selected by the client and the server respectively. Once these temporary IP addresses are selected, the client and the server exchange the IP address of the last relay of each data channel and the temporary IP addresses, so that data channels can be created. We discuss mechanisms to achieve the selection of temporary IP addresses in Section <ref>.
Figure <ref> shows the steps to create a data channel from a server to a client. First, the client selects three Tor relays (R1, R2, and R3) and creates a path. The client also selects a random IP address (Temp_IP_Server). The server will use Temp_IP_Server as a temporary IP address to send UDP packets toward the client through the path consisting of R1, R2, and R3. Since Temp_IP_Server is not the actual IP address of the server, the client cannot reach back to the server using the same path. After selecting the IP addresses, the client sends the IP address of R3 and Temp_IP_Server to the server. Finally, after receiving these IP addresses, the server begins to send UDP datagrams to R3 using Temp_IP_Server as the source IP address.
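As an illustration of the one-way data channel described above, the following Python sketch (using the Scapy packet-crafting library) sends a UDP datagram whose source IP is the temporary address rather than the sender's real one, so the receiver cannot reach the sender back along the same path. The addresses and port below are made-up examples, crafting packets with a spoofed source requires raw-socket privileges, and many networks filter such traffic; this is only a sketch of the mechanism, not of the deployed framework.

# Sketch: server-side send over a DarkHorse-style data channel using the
# temporary source IP Temp_IP_Server shared during the bootstrap phase.
from scapy.all import IP, UDP, Raw, send

TEMP_IP_SERVER = "203.0.113.7"     # temporary source address chosen by the client
RELAY_R3 = "198.51.100.42"         # relay the server sends to (IP shared over the control channel)
CHANNEL_PORT = 40000               # illustrative port number

def send_on_data_channel(payload: bytes) -> None:
    pkt = (IP(src=TEMP_IP_SERVER, dst=RELAY_R3)
           / UDP(sport=CHANNEL_PORT, dport=CHANNEL_PORT)
           / Raw(load=payload))
    send(pkt, verbose=False)       # requires raw-socket (root) privileges

send_on_data_channel(b"onion-encrypted cell")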
§.§ Data Transfer Phase of DarkHorse
Figure <ref> shows the operation workflow of transferring data packets using a data channel in DarkHorse. It consists of five steps as follows.
Step 1: The first step of the data transfer process is to send a request for data. DarkHorse supports two modes of sending requests. A request can be sent either over the control channel or over the data channel. If a request is sent over the control channel, then it can be an HTTP request. Otherwise, it is a UDP-based variant of an HTTP request (such as an HTTP/3 request <cit.>) when sent over the data channel. Each mode of sending requests has its own advantages and disadvantages. Sending a request over the data channel is faster compared to sending a request over the control channel. However, due to the unreliable nature of UDP, sometimes a request cannot reach its destination. In this case, re-sending a request will be needed, which will lead to additional delay. On the other hand, using the control channel to send requests can be slow as the requests have to go through multiple Tor relays around the world. Nevertheless, unlike a data channel, packet losses and retransmissions will be handled by TCP, which is the transport layer protocol used by vanilla Tor. In DarkHorse, clients and servers can negotiate with each other and decide which type of channel is appropriate for a specific application.
Step 2: After receiving a request for data, the sender of the data will encapsulate the data into multiple fixed size packets. Figure <ref> shows the format of a UDP data packet. The payload of the packet is divided into two parts. The first part is allocated for a sequence number for each data packet and the second part contains a chunk of the actual data. The whole payload will be encrypted, and only the intended receiver of the packet will be able to decrypt it.
Step 3: The sender of the data will first send some metadata about the actual data (number of total packets, packet sizes, allocated bytes for the sequence number, size of a data chunk) to the requesting party. After that, the sender will encrypt the packets and begin sending packets over the data channel (a minimal sketch of this packetization follows).
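A minimal sketch of the packetization in Steps 2 and 3 is given below; the 4-byte sequence-number field, the 1024-byte chunk size, and the encrypt() placeholder are illustrative choices, since the exact parameters and cipher are negotiated by the implementation rather than fixed here.

# Sketch: split a byte string into fixed-size packets of the form
# [sequence number | data chunk], matching the payload layout described above.
import struct

SEQ_BYTES = 4          # illustrative width of the sequence-number field
CHUNK_SIZE = 1024      # illustrative size of each data chunk

def encrypt(payload: bytes) -> bytes:
    # Placeholder: the whole payload is encrypted so that only the intended
    # receiver can decrypt it; the concrete cipher is not fixed in this sketch.
    return payload

def encapsulate(data: bytes):
    packets = []
    for seq, offset in enumerate(range(0, len(data), CHUNK_SIZE)):
        chunk = data[offset:offset + CHUNK_SIZE]
        packets.append(encrypt(struct.pack(">I", seq) + chunk))
    return packets

pkts = encapsulate(b"x" * 5000)
print(len(pkts), len(pkts[0]))   # 5 packets; the first one is 4 + 1024 bytes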
Step 4: When the receiver of the data receives a packet over the data channel, it first decrypts the packet and extracts the sequence number of the packet. Once all packets are sent over the data channel, the sender notifies the receiver over the control channel to identify the lost packet sequences.
Step 5: After identifying the lost packets, the receiver will send the list of lost packet sequences to the sender, and the lost packets may be retransmitted depending on the nature and needs of each application. The retransmission can be done either over the data channel or over the control channel. Retransmitting lost packets over the data channel will be faster, however, it can result in subsequent packet losses if the network conditions degrade. On the other hand, retransmitting lost packets over the control channel will be slower, however, it will come with reliability due to the use of TCP between the relays of the channel.
§ EVALUATION
§.§ Evaluation Setup
In this section, we evaluate DarkHorse in three steps: (i) we evaluate the bootstrap time (the required time to create the control channel and data channels); (ii) we conduct experiments based on a DarkHorse prototype that we have developed to evaluate its performance using different metrics; and (iii) we present results to understand how the performance of DarkHorse is impacted when the number of concurrent connections increases.
We compare these evaluation results to vanilla onion services, which we use as a baseline approach. To increase the statistical significance of our results, we conducted our experiments and collected data over a period of two months.
DarkHorse prototype implementation: We developed a DarkHorse prototype in Python[We make our implementation code publicly available to the research community at <https://github.com/malazad/Tor-with-spoofing>.]. The prototype has three main components: (i) a client module; (ii) a UDP-Tor relay module; and (iii) a server module. The client module initiates the bootstrap phase of DarkHorse by sending an HTTP request to the onion server using the Tor Stem library <cit.>. The server module was implemented using the Flask web development framework <cit.>. In both the client and the server modules, the Scapy <cit.> packet-crafting framework was used to replace the actual source IP address of the sender with a temporary IP address to send packets through a data channel. Finally, the UDP-Tor relay module receives and forwards UDP packets to the next hop or to the destination. For our evaluation, we created a cloud testbed on Amazon Web Service (AWS) with instances around the world and deployed these modules on these instances.
Evaluation metrics: We have considered the following metrics for our evaluation:
* Bootstrap time: This is the time required to establish a connection between a client and a server. For vanilla onion services, it includes the time to select relay nodes by both the client and the server and the time to establish a connection through a rendezvous point. For DarkHorse, it is the time to create the control channel and the data channels.
* End-to-end per packet delay: This is the time required for a packet to be delivered from a sender to a receiver.
* Data transfer time: This is the time required to transfer data of a certain size from a client to an onion server or vice versa. For DarkHorse, the time to recover the lost packets is also included.
* Packet loss: The percentage of packets that are lost during the transmission through the data channel of DarkHorse.
* Overhead: The total number of bytes that need to be sent through the vanilla onion service and DarkHorse overlay networks to successfully deliver data of a certain size. It is calculated as: total number of packets × size of each packet × number of Tor relays between the sender and the receiver. For DarkHorse, the retransmitted packets to recover losses are also included (a short sketch of this calculation follows the list).
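As a simple illustration of the overhead metric, the sketch below computes it for hypothetical packet counts; the numbers are ours, with three relays per DarkHorse data channel versus six relays end-to-end for a vanilla onion-service connection, as discussed earlier in the paper.

# Sketch: the overhead metric defined above, in code form.
def overhead_bytes(num_packets: int, packet_size: int, num_relays: int,
                   retransmitted_packets: int = 0) -> int:
    return (num_packets + retransmitted_packets) * packet_size * num_relays

data_packets = 5000
print(overhead_bytes(data_packets, 1028, num_relays=6))   # vanilla onion service
print(overhead_bytes(data_packets, 1028, num_relays=3,
                     retransmitted_packets=150))          # DarkHorse with ~3% loss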
§.§ Evaluation Results
§.§.§ Evaluation of the bootstrap phase of DarkHorse
Bootstrap time: We present the results for the bootstrap time of vanilla onion service and DarkHorse in Figure <ref>. The evaluation results show that DarkHorse requires around 26% more time than vanilla onion service to establish a connection between a client and an onion server. This extra bootstrap time overhead of DarkHorse comes from the time required to exchange information between a sender and a receiver to create data channels. However, the additional time is overall minor (less than one second).
§.§.§ Evaluation of the data transfer phase of DarkHorse
End-to-end per packet delay: Figure <ref> shows the end-to-end per packet delay to transfer a packet from an onion server to a client using vanilla onion service and DarkHorse. Our results indicate that DarkHorse achieves around 58% lower end-to-end per packet delays due to the fewer relays used along the data channel. This results in fewer encryption/decryption operations that need to be performed while delivering data from a sender to a receiver.
Data transfer time: Figure <ref> shows the average required time to transfer different sizes of data using DarkHorse and vanilla onion service from an onion server to clients. Our results demonstrate that DarkHorse is 2.72-3.62× faster than vanilla onion service in terms of transferring the same amount of data (including retransmissions that may be needed to recover lost packets).
Furthermore, DarkHorse achieves an average data transfer rate of 209.98KB/s as compared to 71.51KB/s using vanilla onion service. The quartile values also indicate that data transfer times are more consistent in DarkHorse as compared to vanilla onion service. The speed up and the consistency in data transfer times come from the fact that DarkHorse requires half of the Tor relay nodes compared to vanilla onion services.
Overhead: In Figure <ref>, we present the results for the overhead of the vanilla onion service and DarkHorse overlay networks to transfer data of different sizes. DarkHorse incurs about 45%-47% lower overhead as compared to vanilla onion service. Because of the UDP packet losses over the data channel and the retransmissions of the lost packets for recovery purposes, the overhead of DarkHorse is not exactly 50% less than that of vanilla onion service, even though the length of the data channels in DarkHorse is half of the path from a client to an onion server using vanilla onion service.
§.§.§ Effect of concurrent data transfers on the performance of DarkHorse
To understand how the performance of DarkHorse changes as the number of simultaneous client data transfers grows, we increase the number of concurrent client connections up to five hundred. We discuss the results of the data transfer times and the percentage of packet losses below.
Packet losses: In Figure <ref>, we present the results of packet losses on a data channel for varying numbers of concurrent client connections. Our evaluation results show that the percentage of packet losses is less than 4.04% for up to five hundred concurrent client connections. As we increase the number of concurrent connections, the percentage of packet losses slightly increases. This increase happens because UDP does not provide congestion control mechanisms, thus senders do not dynamically adjust their data sending rates to adapt to changes of network conditions. In general, our experiments indicated that the operation of DarkHorse can scale as much as the available resources (network bandwidth, CPU and memory of relays) allow us to do.
Data transfer time: We present the required time for transferring data for various numbers of concurrent client connections on a data channel in Figure <ref>. As the percentage of packet losses increases with the number of concurrent client connections, the required data transfer time also increases.
§ DISCUSSION
Creating a unidirectional UDP data channel: The main goal of a unidirectional UDP data channel in DarkHorse is to hide the identity of the sender of a packet. This can be achieved in two ways: IP spoofing <cit.> and Moving Target Defense (MTD) mechanisms <cit.>. In IP spoofing, the sender of a packet through the data channel will simply replace the source IP address with a different IP address (either Temp_IP_Server or Temp_IP_Client). In MTD mechanisms, Temp_IP_Server and Temp_IP_Client will be selected from a pool of IP addresses maintained by the Tor network (similar to the repository for the IP addresses of relays), therefore, a sender can request a temporary IP address from this pool.
Preserving anonymity in DarkHorse during collusion: Let us consider the scenario of Figure <ref>, where the onion server sends data packets to a user. In this scenario, the client knows the identity of R1, R2, and R3 but not the IP address of the server. If the user colludes with these three relays, it will not be able to identify the server since R3 receives packets from the server with a temporary IP address. On the other hand, the server only knows the IP address of R3, and R1 and R2 are unknown to it. To identify the user, the server needs to collude with R1 and R2, which are selected by the user.
Traffic correlation in DarkHorse: Unlike vanilla Tor and vanilla onion services, DarkHorse uses asymmetric paths for sending requests and responding with data packets. As a result, it is unlikely for an attacker to observe all traffic and find traffic correlations. To find traffic correlations between a client and a server, an attacker has to monitor all three channels (two data channels and a control channel) in DarkHorse.
Running experiments on a shared infrastructure: For the evaluation of DarkHorse, we ran experiments with virtual machine instances located around the world on a public cloud in which the resources are shared across multiple tenants. This creates an environment analogous to the vanilla Tor overlay network, where the resources of Tor relays are shared by multiple simultaneous connections.
§ CONCLUSION AND FUTURE WORK
In this paper, we presented DarkHorse, a framework to improve the performance of onion services through the connectionless nature of UDP. DarkHorse creates a unidirectional UDP path with a temporary sender IP address, which reduces the length of the path between a client and an onion server by half as compared to vanilla onion services. Our evaluation results demonstrated that DarkHorse is up to 3.62× faster than vanilla onion services, while reducing the overlay network overhead by up to 47%. In our future work, we plan to: (i) extend the evaluation and make a real-world deployment of our prototype available to users; and (ii) design mechanisms to equally distribute the bandwidth of relays among concurrent connections passing through these relays.
§ ACKNOWLEDGEMENTS
This work is partially supported by the National Science Foundation (awards CNS-2104700, CNS-2306685, CNS-2016714, and CBET-2124918) and ACM SIGMOBILE.
|
http://arxiv.org/abs/2307.02602v2
|
20230705185328
|
Can extended Chaplygin gas source a Hubble tension resolved emergent universe ?
|
[
"Rikpratik Sengupta",
"Prasenjit Paul",
"B C Paul",
"M Kalam"
] |
gr-qc
|
[
"gr-qc"
] |
^1 Department of Physics, Aliah University, Kolkata 700160, West Bengal, India
^2 Department of Physics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah 711103, West Bengal, India
^3 Department of Physics, Department of Physics, North Bengal University, Siliguri 734014, West Bengal, India.
^1 [email protected]
^2 [email protected]
^3 [email protected]
^1 [email protected]
In this paper, we attempt to explore the possibility of obtaining a viable emergent universe scenario supported by a type of fluid known as the extended Chaplygin gas, which extends the equation of state of the well known modified Chaplygin gas by considering additional higher order barotropic fluid terms. We consider the quadratic modification only. Such a fluid is capable of explaining the present cosmic acceleration and is a possible dark energy candidate. We construct a theoretical model of the emergent universe assuming it is constituted from such a fluid. It interestingly turns out that the theoretical constraints we obtain on the extended Chaplygin gas parameters from our emergent universe model are well in agreement with the observational constraints on these parameters from BICEP2 data. Our model is found to replicate the late time behaviour closely and reproduces Λ-CDM like behaviour, as evident from the analysis of the statefinder parameters. Moreover, the Hubble parameter analysis shows that for theoretically constrained values of the ECG parameters, the Hubble tension can be resolved, yielding higher values of the present Hubble parameter H_0 in all possible cases. Also, the value of H(z) at a redshift z=2.34 fits recent observations better than Λ-CDM in some cases. This leads us to the realization that such a fluid is not only a probable candidate for dark energy but, unlike the modified Chaplygin gas, also sources an emergent universe, so that the initial singularity problem can be resolved in a flat universe within the standard relativistic context.
Can extended Chaplygin gas source a Hubble tension resolved emergent universe ?
Rikpratik Sengupta^1, Prasenjit Paul^2, B C Paul^3 and Mehedi Kalam^1
August 1, 2023
§ INTRODUCTION
Over the years, the Big bang model has come to be accepted as the standard model of cosmology. The predictions of the Big bang model are compatible to quite a large extent with the presently available plethora of observational data. However, there are some problems with the standard Big bang model. Firstly, there is a `beginning' of time, the point at which the field equations describing the spacetime are no longer capable of describing the physical situation due to the presence of the “initial singularity” <cit.>. As a consequence of this, the standard Big bang model can not answer the question about how the universe came into existence. There have been a number of efforts to resolve this initial singularity problem and to find an answer to the question regarding the coming into existence of the universe. Most physicists believe that at such an early phase of the universe, the energy densities were sufficiently high and the length scales involving the universe were comparable to or lower than the Planck length, making quantum gravity effects increasingly significant.
The main concern in dealing with the singularity problem is that there is no single consistent theory of Quantum Gravity (QG) at present. The two main theoretical setups in which a lot of effort is being invested in this direction are the higher dimensional Superstring/ M- theories<cit.> and Loop Quantum Gravity (LQG)<cit.>. Neither of the two scenarios has been fully developed to date, but there has been a lot of progress in both fields in the last few decades. However, it is interesting to note that investigations in both setups have produced a number of cosmological solutions in which the initial singularity is absent. In the LQG context, cosmological models involving a “regular bounce" have been proposed<cit.>. Such a bounce leads to a “cyclic” universe, where there are repeated non-singular big bangs and big crunches. In the braneworld gravity context, which is a higher dimensional scenario inspired by the Superstring/ M- theories, similar cosmological solutions with a non-singular bounce have been obtained<cit.>, which may also be cyclic in nature by introducing a cosmological turnaround mechanism involving a homogeneous scalar field<cit.>. In the context of 11-dimensional M- theories, a “cyclic” picture of the universe has been obtained, where the Big bang is believed to be a collision between two dynamic braneworlds<cit.>.
The second problem with the standard Big bang model is that it requires additional ingredients to explain the “dark sector” of the universe. The “dark sector” refers to the dark energy which is believed to be responsible for the presently observed accelerating phase of the universe<cit.> and the dark matter whose effect can be realized via its gravitational interaction<cit.>. In order to explain these effects, the standard relativistic Big bang scenario requires introduction of additional scalar fields capable of violating the strong energy condition like the quintessence<cit.>, tachyon<cit.> or phantom<cit.> fields, or exotic fluids like the Skyrme fluid<cit.>. Such a field is required to exist in the universe in addition to the “inflaton” scalar field responsible for the rapid accelerating expansion phase in the early universe realized through an exponential or power-law type of scale factor. Inflation can also be realized via a tachyon field<cit.>. For dark matter candidates, particles which have not yet been detected experimentally and are not predicted by the standard model of particle physics have to be taken into account. In an attempt to resolve these problems, there have been a number of cosmological models which make use of a modified Einstein-Hilbert action by considering additional terms in the Lagrangian of the geometry or the matter sector or both, contributing to non-conventional effects<cit.>. The dark energy problem can also be resolved in the higher dimensional braneworld gravity setup<cit.> and the cyclic universe scenario obtained from the M- theory setup<cit.>. The simplest resolution is by considering a generalized Randall-Sundrum single brane model<cit.> which is characterized by perfect fluid bulk matter, resulting in an `effective' fluid leading to accelerated expansion on the brane. In the cyclic picture<cit.>, the inflationary phase is absent and a single scalar field governs all the phases of evolution of the universe from triggering the bounce to causing accelerated expansion. A cosmological bounce can also be obtained in the context of modified gravity<cit.>. The dark matter problem can also be resolved in the context of braneworld gravity, where the gravitational effect of any such hypothetical matter can be replaced by the higher dimensional braneworld effects<cit.>.
In the pure geometrical context, using conformal spacetime geometry, another cyclic cosmological model known as the Conformal cyclic cosmology has been proposed by Penrose<cit.>. The Emergent Universe (EU) scenario has been proposed by Ellis and Maartens<cit.> in 2004 and is a non-singular alternative to the Big bang, resolving the initial singularity problem, but it differs from the other scenarios in that there is no bounce or QG regime and it is not a cyclic model. The scenario assumes an initial Einstein Static Universe (ESU), which corresponds to length scales greater than the Planck scale to avoid the QG regime and is subsequently followed by the standard inflationary and reheating phases. The EU scenario was originally proposed in a relativistic context, considering the presence of a positive curvature term in the Friedmann equations. Such an EU scenario<cit.> was also obtained in the relativistic context by considering a minimally coupled scalar field with a physically interesting potential to be the dominant source term in the early universe. An identical EU scenario<cit.> can also be obtained by adding a term quadratic in scalar curvature with a negative coupling parameter to the gravitational Lagrangian. It was shown that the EU scenario can be realized with a spatial curvature term absent in the Friedmann equations<cit.>, in the context of semi-classical Starobinsky gravity<cit.>. A generalized Equation of State (EoS) describing an EU<cit.> was found in the relativistic context, accommodating normal matter, exotic matter as well as dark energy. The EU scenario has been successfully studied in the braneworld context<cit.> as well as the LQG context<cit.>. The EU scenario can also be realised in the framework of Einstein-Gauss-Bonnet gravity in four dimensions coupled with a dilaton field<cit.> as well as in the modified Gauss-Bonnet gravity<cit.>.
A type of fluid known as “Chaplygin gas”, with its origin in string theories<cit.>, had been used as a dark energy candidate to explain the late time acceleration of the universe<cit.>. Such a fluid is characterized by the EoS p=-B_1/ρ, with p and ρ denoting the pressure and energy density, respectively, in a comoving frame, while B_1 denotes the Chaplygin gas parameter. It is assumed that the energy density is positive and the constant B_1>0. However, for constructing a physically consistent cosmological model<cit.>, the concerned EoS had to be modified to p=-B_1/ρ^α, introducing an additional free parameter α, such that 0≤α≤ 1. This type of fluid is called the Generalized Chaplygin gas (GCG)<cit.>. Initially it exhibits a dust like behaviour, but at late times it behaves asymptotically as a cosmological constant term, thus explaining the present acceleration of the universe. The EoS for GCG was further modified<cit.> for better correspondence with observational data, known as modified GCG<cit.>. The EoS for modified GCG has the form
p=A_1ρ-B_1/ρ^α,
where another additional constant parameter A_1 is introduced.
The Extended Chaplygin gas (ECG) EoS was proposed<cit.> to recover a barotropic fluid with a quadratic or higher order EoS, and is basically a higher order generalization of the modified GCG, at least up to the second order. The Van der Waals fluid is an alternative to the idea of a perfect fluid that acts as the single source term describing the evolution of the universe in both the matter dominated and present accelerating phases, identical to the Chaplygin gas family of fluids. It is also described by a non-linear equation of state<cit.>, just like the Chaplygin gas family. The EoS for ECG may be written in general as
P=∑_n=1^∞ A_n ρ^n-B_1/ρ^ α
Just like the other Chaplygin gas models, ECG is also used as a dark energy candidate to explain the present cosmic acceleration<cit.>. Such models usually violate the strong energy condition of standard General Relativity (GR), which raises the possibility of a violation of the null energy condition in the relativistic context. For obtaining an EU scenario, it is likely that the null energy condition of GR must be violated<cit.>. So, it is worth investigating whether an EU scenario in the relativistic context is supported by such a dark energy candidate. Such an investigation has been performed for modified GCG<cit.>, but it turns out that the choices of the modified GCG parameters required to make the EU scenario viable are physically unrealistic. So, modified GCG does not support a viable EU scenario; but since ECG incorporates a higher order barotropic fluid extension of the modified GCG, it is worth exploring whether this type of fluid, suitable as a possible dark energy candidate, can support an EU scenario or not. Moreover, ECG is known to admit bouncing and cyclic types of universes in the relativistic context<cit.>.
In the following section, we shall consider the mathematical details of the possible EU scenario sourced by the ECG. In the final section we shall discuss the physical significance of the conclusions that can be inferred from our obtained results.
§ MATHEMATICAL MODEL
Putting n=2 in Eq. (2), the EoS for the Extended Chaplygin Gas (ECG) is truncated at the second order, which physically represents the quadratic barotropic EoS given by
P=A_1ρ+A_2ρ^2-B_1/ρ^α,
where ρ represents the energy density, pressure is denoted by P and A_1, A_2, B_1 and α denote the ECG parameters.
Here, to reduce the number of free parameters and to allow the energy density to be obtained analytically, the following realistic assumptions are made, following previous works<cit.>
α=1
A_1=A_2-1
B_1=2A_2.
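Before proceeding, it is useful to note a simple algebraic consequence of these assumptions (a check of our own, verified here with sympy rather than stated in the text): the null-energy-condition combination ρ+P factorises neatly.

```python
import sympy as sp

rho, A2 = sp.symbols('rho A_2', positive=True)

# ECG pressure, Eq. (3), with alpha = 1, A_1 = A_2 - 1, B_1 = 2*A_2 (Eq. (4))
P = (A2 - 1)*rho + A2*rho**2 - 2*A2/rho

# Null-energy-condition combination rho + P, put over a common denominator and factored
nec = sp.factor(sp.cancel(rho + P))
print(nec)   # -> A_2*(rho - 1)*(rho**2 + 2*rho + 2)/rho
```

For A_2>0 this combination is negative whenever ρ<1, which is precisely the kind of null-energy-condition violation invoked later in the text for averting the initial singularity.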
Conservation of energy-momentum tensor can be written as
ρ̇+3H(ρ+P)=0,
where H=ȧ/a denotes the Hubble parameter and a is the scale factor.
Plugging in the EoS given by Eq. (3) into the conservation Eq. (5), we solve for the energy density in terms of the scale factor a, which turns out to be
ρ=1+\frac{2+√(5a^{30A_2}e^{3π/2}-1)}{a^{30A_2}e^{3π/2}-1}.
Our analysis is valid for all epochs of cosmic evolution: in the early universe the energy density is high (ρ>>1), and the density then gradually decreases before reaching an asymptotic value, with ρ<<1 at late times. In order to obtain Eq. (6) we have used the approximations tan^-1(ρ+1)≈π/2 for ρ>>1 and tan^-1(ρ+1)≈π/4 for ρ<<1.
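To see how the density actually evolves without invoking the arctan approximations, a minimal numerical sketch (our own illustration; the value of A_2 below is an arbitrary trial choice inside the range constrained later) integrates the continuity equation in e-folds and shows that ρ relaxes towards ρ → 1, consistent with the late-time limit of Eq. (6):

```python
from scipy.integrate import solve_ivp

A2 = 1.35   # illustrative value; see the constraints derived later in the text

def pressure(rho):
    # ECG EoS, Eq. (3), with alpha = 1, A_1 = A_2 - 1, B_1 = 2*A_2
    return (A2 - 1.0)*rho + A2*rho**2 - 2.0*A2/rho

def drho_dN(N, rho):
    # continuity equation, Eq. (5), rewritten in e-folds N = ln a
    return -3.0*(rho + pressure(rho))

sol = solve_ivp(drho_dN, (0.0, 10.0), [50.0], rtol=1e-8)
print(sol.y[0][-1])   # approaches 1, the fixed point where rho + P = 0
```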
The first Friedmann equation is given as,
ρ=3H^2=3 (ȧ/a)^2.
We choose κ^2=8π G=1.
From Eqs. (6) and (7), we obtain a solution for the scale factor a as
a=\frac{χ}{χ-2}, where χ ≡ \frac{2√(3)e^{15t/(√(3)A_2)}-√(5)e^{3π/4}}{e^{15t/A_2}-1}.
In obtaining the above solution, we have used the approximation ln a=1/a-1. This approximation is justified as solution (6) represents the late time behaviour particularly well. We shall compare this scale factor to the one used to describe an Emergent Universe (EU), which is also valid for all time scales, to establish a correspondence between the independent ECG parameter and the free parameter of the EU scale factor. That will allow us to obtain an estimate for the ECG parameters if ECG is to be considered as the constituent of an EU.
The expression of the scale factor for the emergent Universe can be written in the form<cit.>
a(t)=a_0\left(10^4+e^{√(3)Bt/2}\right)^{\frac{2}{3(A+1)}}.
As we can see, the above scale factor does not vanish as we go backwards in time to t=0 or beyond, unlike in the standard Big bang picture. As a result, no singularity appears in the Friedmann equations governing the dynamics of the universe. Physically, this means that the universe has no beginning at a particular point of time, and the big bang is replaced by an ESU whose length scales are large enough to avoid a quantum gravity regime.
If the universe with ECG as constituent is capable of behaving as an EU, then the scale factors obtained in Eqs. (8) and (9) must be identical. Comparing the two, expanding both scale factors binomially and equating the coefficients of the first terms after expansion, we can obtain a correspondence between the free EU parameter a_0 and the independent ECG parameter A_2 as
a_0=2√(2)×10^5/2e^15/A_2-1e^3π/4+2√(2)×10^5
We know that the deceleration parameter q(z) is defined as,
q(z)=\frac{z̈(1+z)}{ż^2}-2,
where z denotes the redshift parameter.
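This relation is just the standard definition q = -ä a/ȧ² rewritten with a = 1/(1+z); a short symbolic check (our own, using sympy):

```python
import sympy as sp

t = sp.symbols('t')
z = sp.Function('z')(t)

a = 1/(1 + z)                                             # scale factor in terms of redshift
q_def = -sp.diff(a, t, 2)*a/sp.diff(a, t)**2              # q = -a''*a/a'^2
q_redshift = sp.diff(z, t, 2)*(1 + z)/sp.diff(z, t)**2 - 2  # the expression above

print(sp.simplify(q_def - q_redshift))   # 0
```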
The redshift z can be written in terms of the scale factor of Eq. (9) as
z=\frac{1}{a}-1=\frac{1}{a_0\left(10^4+e^{√(3)Bt/2}\right)^{\frac{2}{3(A+1)}}}-1
Taking first order derivative with respect to time, we get
ż=-\frac{B}{√(3)a_0(A+1)}\frac{e^{√(3)Bt/2}}{\left(10^4+e^{√(3)Bt/2}\right)^{\frac{3A+5}{3A+3}}}.
Writing down the above expression for ż in terms of the redshift parameter z, we obtain
ż=-\frac{B}{√(3)(A+1)}\left[\frac{(z+1)^{\frac{2-3A-3A^2}{6(A+1)}}}{a_0^{\frac{4+9A+3A^2}{6(A+1)}}}-\frac{10^4(z+1)^{\frac{3A+5}{6(A+1)}}}{a_0^{\frac{3A+1}{6(A+1)}}}\right]
Taking second order derivative of z with respect to time, we get
z̈=\frac{B^2}{2(A+1)}\frac{e^{√(3)Bt/2}}{a_0\left(10^4+e^{√(3)Bt/2}\right)^{\frac{3A+5}{3A+3}}}-\frac{3B^2(3A+5)}{2(3A+3)^2}\frac{e^{√(3)Bt}}{a_0\left(10^4+e^{√(3)Bt/2}\right)^{\frac{2(3A+4)}{3(A+1)}}}.
This can be expressed in terms of the redshift parameter z as
z̈=\frac{3B^2}{6(A+1)}\left[\frac{(z+1)^{\frac{2-3A-3A^2}{6(A+1)}}}{a_0^{\frac{3A^2+9A+1}{6(A+1)}}}-\frac{10^4(z+1)^{\frac{3A+5}{6(A+1)}}}{a_0^{\frac{3A+1}{6(A+1)}}}\right]
-\frac{3B^2(3A+5)}{2(3A+3)^2}\left[\frac{(z+1)^{1/3}}{a_0^{2/3}}-\frac{2×10^4(z+1)^{\frac{3A+5}{6}}}{a_0^{\frac{1-3A}{6}}}+10^8a_0^{\frac{3A+1}{3}}(z+1)^{\frac{3A+4}{3}}\right].
The observational constraints on the EU parameters A and B respectively are -0.034 ≤ A ≤ 0.0014<cit.> and 0.003 < B < 0.5996<cit.>.
Using Eqs. (12), (14) and (16) in Eq.(11), it turns out that the deceleration parameter is independent of the second EU parameter B. So, we obtain the deceleration parameters for the lower and upper limits of A, respectively.
For the lower limit of A, we have
q(z)=\frac{1.45}{1-10^4a_0^{1.45}(z+1)^{1.45}}-0.55 [for A=-0.034]
and for the upper limit, we obtain
q(z)=\frac{1.5}{1-10^4a_0^{1.5}(z+1)^{1.5}}-0.47 [for A=0.0014]
The variation with redshift z of the two deceleration parameters obtained above is plotted in Figure 1. The red curve shows the variation of the deceleration parameter with z for the observationally constrained lower limit of the EU parameter A, while the blue curve shows the corresponding variation for the observationally constrained upper limit of A. We shall discuss the physical consequences of these plots in the concluding section.
The statefinder parameters<cit.> related to the late time behaviour are defined as
j= Ḧ/H^3-3q-2
s=\frac{j-1}{3(q-1/2)},
where j is known as the jerk parameter and s is known as the snap parameter.
For the lower observational bound of A, we obtain the jerk and the snap parameters as follows
j(z)=\frac{4.205+2.1045×10^4a_0^{1.45}(1+z)^{1.45}}{\left[1-10^4a_0^{1.45}(z+1)^{1.45}\right]^2}-\frac{1.74}{1-10^4a_0^{1.45}(1+z)^{1.45}}+0.055,
and
s(z) = \frac{4.205+2.1045×10^4 a_0^{1.45}(1+z)^{1.45}}{3\left[0.4+1.05×10^4 a_0^{1.45}(1+z)^{1.45}\right]\left[1-10^4 a_0^{1.45}(z+1)^{1.45}\right]}-\frac{1.74}{3\left[0.4+1.05×10^4 a_0^{1.45}(1+z)^{1.45}\right]}
-\frac{0.945\left[1-10^4 a_0^{1.45}(1+z)^{1.45}\right]}{3\left[0.4+1.05×10^4 a_0^{1.45}(1+z)^{1.45}\right]}.
For the higher observational bound on A, the jerk and snap parameters are evaluated as
j(z)=\frac{4.5+2.25×10^4a_0^{1.5}(1+z)^{1.5}}{\left[1-10^4a_0^{1.5}(z+1)^{1.5}\right]^2}-\frac{1.32}{1-10^4a_0^{1.5}(1+z)^{1.5}}-0.0282,
and
s(z) = \frac{4.5+2.25×10^4 a_0^{1.5}(1+z)^{1.5}}{3\left[0.53+0.97×10^4 a_0^{1.5}(1+z)^{1.5}\right]\left[1-10^4 a_0^{1.5}(z+1)^{1.5}\right]}-\frac{1.32}{3\left[0.53+0.97×10^4 a_0^{1.5}(1+z)^{1.5}\right]}
-\frac{1.0282\left[1-10^4 a_0^{1.5}(1+z)^{1.5}\right]}{3\left[0.53+0.97×10^4 a_0^{1.5}(1+z)^{1.5}\right]}.
The variation of the jerk and snap parameters with redshift, for both values of A, is plotted in Figures 2 and 3, respectively.
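As a quick internal consistency check (our own, not part of the original analysis), one can verify symbolically that the expressions above for A=-0.034 satisfy the defining relation between s, j and q; a short sympy sketch:

```python
import sympy as sp

X = sp.symbols('X', positive=True)   # shorthand for 10^4 a_0^1.45 (1+z)^1.45

q = sp.Rational('1.45')/(1 - X) - sp.Rational('0.55')     # deceleration parameter, A = -0.034
j = (sp.Rational('4.205') + sp.Rational('2.1045')*X)/(1 - X)**2 \
    - sp.Rational('1.74')/(1 - X) + sp.Rational('0.055')  # jerk parameter, A = -0.034

denom = 3*(sp.Rational('0.4') + sp.Rational('1.05')*X)
s = (sp.Rational('4.205') + sp.Rational('2.1045')*X)/(denom*(1 - X)) \
    - sp.Rational('1.74')/denom \
    - sp.Rational('0.945')*(1 - X)/denom                  # snap parameter, A = -0.034

print(sp.simplify(s - (j - 1)/(3*(q - sp.Rational(1, 2)))))   # 0
```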
Finally, we obtain the Hubble parameter H=ȧ/a in terms of the redshift z as
H(z)= \frac{B}{√(3)(A+1)}\left[\frac{(z+1)^{-\frac{4+9A+3A^2}{6(A+1)}}}{a_0^{\frac{4+9A+3A^2}{6(A+1)}}}-\frac{10^4 (z+1)^{-\frac{3A+1}{6(A+1)}}}{a_0^{\frac{3A+1}{6(A+1)}}}\right]
It is to be noted here that, unlike the deceleration parameter, the Hubble parameter depends on both EU parameters A and B. There is also a dependence on the ECG parameters, which can in turn be expressed in terms of the single parameter a_0. The variation of the Hubble parameter with redshift is shown in Figure 4.
The EU parameter B is absent from the deceleration parameter because the same dependence on B appears in the numerator and denominator of its expression, making it independent of B. The Hubble parameter, however, depends on both EU parameters A and B. Also, as mentioned above, to obtain analytic solutions we have assumed certain correspondences between the different ECG parameters following previous works, as a result of which the only independent ECG parameter is A_2, which can be expressed in terms of the parameter a_0 from our analysis. We have used the plots of the variation of the different parameters, namely the deceleration, jerk and snap parameters, to constrain the values of the ECG parameters such that, for the available observational bounds on the EU parameters, we obtain the best fit of these parameters to the available observational data. The theoretical constraints on the ECG parameters obtained from our model in this process are confirmed with the Hubble parameter data as well as with the previously obtained observational constraints from the BICEP2 data, and they are found to be in very good agreement.
The physical explanation of the plots has been provided in the concluding section.
§ DISCUSSION AND CONCLUSION
In this paper, we have investigated the possibility of a viable EU scenario sourced by ECG. As discussed earlier, the different Chaplygin gas models finding their origin in the higher dimensional String theories are probable dark energy candidates, capable of explaining the late time accelerating behaviour of the universe. For obtaining such a behaviour in the relativistic context, it is expected that the strong energy condition as obtained from GR must be violated, which raises a possibility of violation of the null energy condition due to negative pressure. The violation of the null energy condition is essential for obtaining an EU scenario in the standard relativistic context in order to avert the initial singularity. So, it is worth investigating whether such a fluid supports an EU or not. Earlier investigation has revealed that the modified GCG does not support an EU for the realistic choice of the parameters concerned with the modified GCG<cit.>. However, the fluid we are considering here is an extension of the modified GCG EoS, allowing consideration of higher order barotropic fluid at least up to the quadratic term. Consideration of the additional term can possibly modify the fluid making it capable of supporting an EU, unlike the modified GCG, as it is found to support bouncing and cyclic types of regular cosmological solutions<cit.>.
We have considered ECG terms up to second order, physically representing the quadratic barotropic fluid. In order to obtain analytical mathematical solutions, we have assumed certain realistic correspondences between three of the free ECG parameters, which have also been applied in other investigations<cit.>. The free parameter α involved in the EoS is assumed to be of unit magnitude without any loss of generality, as in the case of all the Chaplygin gas candidates<cit.>. The vanishing divergence of the energy-momentum tensor yields the conservation equation, and the energy density of the ECG is obtained from it. Using the first Friedmann equation, the scale factor is obtained using the late time approximation, as the solution for the energy density is valid for all epochs and represents the late time behaviour particularly well.
At this point we assume that ECG can support a viable EU scenario in order to impose some constraints on the ECG parameters from our theoretical model. So we consider the scale factor describing an EU to be identical to the scale factor that we have obtained by solving the Friedmann equation for a universe constituted out of ECG. Expanding both scale factors binomially and equating the coefficients of the first terms, a relationship is obtained between the free parameter a_0 contained in the expression for the EU scale factor and the independent ECG parameter A_2, while the other two ECG parameters are expressed in terms of A_2 and α is of the order of unity. This allows us to constrain the ECG parameters A_1, A_2 and B_1 from our theoretical model of an EU supported by ECG. If the theoretically obtained constraints on the upper and lower limits are in agreement with the constraints obtained from the observational data<cit.>, then this will justify our model and we shall argue that ECG supports a viable EU scenario.
It is known from observational findings that the present value of the deceleration parameter is q≈-0.57 (SN+BAO datasets)<cit.>, and the observationally obtained value of the redshift at which the deceleration parameter flips sign from negative to positive, which physically means that the universe makes a transition from the accelerating to the decelerating phase (actually the reverse is happening as we travel forward in time), is typically z≈ 0.8<cit.>. While plotting the q(z) versus z curves for the observationally bound lower and upper limits of the EU parameter A in Fig. 1, we are free to fix the EU parameter a_0. For fixing this parameter within a certain range, we find the best fit curve for q(z) vs z, such that the present value of the deceleration parameter and the value of the redshift at which the deceleration parameter vanishes are in close proximity to the observational values. As we see from the plot, for the lower A the rate of decrease of q is higher in the early era, and it then flips with the onset of the deceleration era, such that the time of the flip is the same in both cases. We find from the best fit plot that for A=-0.034: at z=0, q=-0.61 and q=0 at z=0.82; and for A=0.0014: at z=0, q=-0.59 and q=0 at z=0.83. For obtaining these best fit values, we obtain the theoretical constraint on a_0 as 0.0031≤ a_0 ≤ 0.003753.
Now using Eq. (10), we may constrain the ECG parameter A_2 as 1.327≤ A_2 ≤ 1.362 from our EU model. Using our earlier considerations in Eq. (4), we may also correspondingly constrain A_1 and B_1, respectively, as 0.327≤ A_1 ≤ 0.362 and 2.654≤ B_1 ≤ 2.724. Thus, we obtain constraints on the three ECG parameters from our EU model by theoretically constraining the EU parameter a_0, making use of the observational constraint on the EU parameter A and assuming that ECG can support an EU. These constraints on the ECG parameters obtained from our theoretical model are in very good agreement with the BICEP2 observational data, which give the observationally constrained limits on the ECG parameters as 1.2 ≤ A_2 ≤ 1.6, 0.2≤ A_1 ≤ 0.6 and 2.4≤ B_1 ≤ 3.2<cit.>. Thus, we can see that the ECG parameters obtained from our theoretical model are within the observational range obtained from BICEP2 data and may be estimated more precisely, as the difference between the lower and upper bounds is considerably smaller.
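The arithmetic of this last step is elementary, but it is easy to make explicit; a short sketch (our own) mapping the constrained A_2 interval to A_1 and B_1 and checking it against the BICEP2 intervals quoted above:

```python
# Map the constrained A_2 range to A_1 and B_1 via A_1 = A_2 - 1, B_1 = 2*A_2
A2_lo, A2_hi = 1.327, 1.362
A1_lo, A1_hi = A2_lo - 1.0, A2_hi - 1.0      # 0.327 .. 0.362
B1_lo, B1_hi = 2.0*A2_lo, 2.0*A2_hi          # 2.654 .. 2.724

# BICEP2 intervals quoted in the text
bicep2 = {'A_2': (1.2, 1.6), 'A_1': (0.2, 0.6), 'B_1': (2.4, 3.2)}
ours   = {'A_2': (A2_lo, A2_hi), 'A_1': (A1_lo, A1_hi), 'B_1': (B1_lo, B1_hi)}

for key in bicep2:
    lo, hi = ours[key]
    inside = bicep2[key][0] <= lo and hi <= bicep2[key][1]
    print(key, ours[key], 'inside BICEP2 range:', inside)   # True for all three
```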
In order to analyze the late time behaviour of our obtained EU model more explicitly, we compute the jerk and the snap parameters. We evaluate the jerk and snap parameters for the observational lower and upper bounds of the EU parameter A using the best fit value of the EU parameter a_0, which we had obtained to evaluate the theoretical constraints on the ECG parameters. As we can see from Fig. 2, the present value of the jerk parameter at redshift z=0 that we obtain from our theoretical model is very close to 1 for both values of A, and as we go back in time, the jerk parameter first decreases and then increases, but the rate of increase is higher for the upper bound (higher value) of A. This behaviour is in agreement with the Λ-CDM (cold dark matter) model. For the snap parameter, we see from Fig. 3 that for the higher A, the snap parameter presently has a vanishingly small positive value and, as we go back in time, the parameter reduces and changes sign to a negative value. For the lower value of A, presently it has a very tiny negative value and first decreases and then increases as we go back in time. The Λ-CDM model predicts the present values of the jerk and snap parameters to be 1 and 0, respectively. As we see, our model reproduces these values to a close approximation. Thus the late time behaviour is also well predicted by our EU model supported by ECG.
We have also obtained an expression for the Hubble parameter in terms of the redshift and plotted its variation. We have plotted the variation for all possible combinations of the lower and upper observational bounds of the EU parameters A and B. The parameter a_0, which encodes the information regarding the ECG parameters, is chosen for the best fit of the curve, and the corresponding ECG parameter A_2 turns out to have a value 1.35 for obtaining the best fit of the Hubble variation to observational data<cit.>. This value of the ECG parameter is within our obtained range from the analysis of the other parameters and hence in support of our model. We see from Fig. 4 that as the redshift increases, the Hubble parameter also increases as expected. We obtain the values of H(z) at two particular redshifts, namely z=0 (denoting the value of the Hubble parameter at present time H_0) and z=2.34 (in km/sec/Mpc), to tally with observational data. For the first choice of EU parameters A=-0.034 and B=0.003, we get H_0=73.254 and H(z=2.34)=233.7. For A=-0.034 and B=0.5996, we get H_0=72.302 and H(z=2.34)=240.2. For A=0.0014 and B=0.003, we get H_0=74.206 and H(z=2.34)=231. For A=0.0014 and B=0.5996, we get H_0=70.029 and H(z=2.34)=244.2. The observational data suggest H_0=73.24 ± 1.74 km/sec/Mpc<cit.> and H(z=2.34)=222 ± 7 km/sec/Mpc<cit.>. There is a slight discrepancy with the Λ-CDM estimated values of H_0=67 km/sec/Mpc and H(z=2.34)=238 km/sec/Mpc. For the present value of the Hubble parameter, our model provides a better fit to the observational data for all four possible combinations of the upper and lower EU parameter bounds, thus resolving the Hubble tension. For the value at higher redshift, our model predicts a better fit in two cases, while the fit is worse compared to the Λ-CDM estimation in the other two cases.
Hence, we make the claim that the ECG fluid does source an EU scenario consistent with the observational data, unlike modified GCG, besides being a probable dark energy candidate, as evident from the fact that it replicates the late time behaviour very well. This characteristic of supporting an EU, unlike modified GCG, can be interpreted as a result of the modification arising from the higher order quadratic barotropic fluid term in the modified EoS, due to which the null energy condition (NEC) can be violated, while in the case of modified GCG only the strong energy condition is violated. We conclude that ECG is more open to exploring different cosmological scenarios than the previous Chaplygin gas candidates, as it supports a non-singular universe owing to NEC violation besides reproducing a Λ-CDM like behaviour at late times. The most important perspective is that the initial singularity problem can be resolved in a standard relativistic context for a flat universe, and the late time cosmology obtained from the EU model is in good agreement with observational data, on par with or even better than the standard Λ-CDM model in some cases.
§ APPENDIX
We present a few steps of the derivation of the solution (6) from the conservation equation (5). The solution is obtained under some approximations namely tan^-1(ρ+1)≈π/2 for ρ>>1 (early universe) and tan^-1(ρ+1)≈π/4 for ρ<<1 (late times). If these approximations are invoked, then the solution (6) will satisfy the differential equation (5). A few steps towards obtaining the solution are presented below.
On integrating both sides of the differential equation (5) and simplifying, we are left with
ln a= \frac{1}{30A_2}\ln\left[\frac{ρ^2+2ρ+2}{ρ^2-2ρ+1}\right]-\frac{π}{20A_2}+C',
where C' is a constant of integration which we assume to be zero for simplicity.
Upon further simplification, this may be expressed as a quadratic equation of ρ having the form
\left(a^{30A_2}-e^{-3π/2}\right)ρ^2-2\left(a^{30A_2}+e^{-3π/2}\right)ρ+\left(a^{30A_2}-2e^{-3π/2}\right)=0
On taking the positive root solution, we get
ρ=1+\frac{2+√(2a^{30A_2}e^{3π/2}+1+3a^{30A_2}e^{3π/2}-2)}{a^{30A_2}e^{3π/2}-1}
which on simplification gives Equation (6).
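The final simplification can be confirmed symbolically; a short sympy sketch (our own), writing K as shorthand for a^{30A_2}e^{3π/2}:

```python
import sympy as sp

rho = sp.symbols('rho')
K = sp.symbols('K', positive=True)        # K = a**(30*A_2) * exp(3*pi/2)

# Quadratic of the appendix, multiplied through by exp(3*pi/2)
quadratic = (K - 1)*rho**2 - 2*(K + 1)*rho + (K - 2)

roots = sp.solve(quadratic, rho)
target = 1 + (2 + sp.sqrt(5*K - 1))/(K - 1)   # Eq. (6) expressed in terms of K

print([sp.simplify(r - target) for r in roots])   # one entry is 0 (the positive root)
```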
§ ACKNOWLEDGEMENT
MK and BCP are thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for providing the Visiting Associateship under which a part of this work was carried out. RS is thankful to the Govt. of West Bengal for financial support through the SVMCM scheme.
99
HE S. W. Hawking and G. F. R. Ellis, Astrophys. J. 152 (1968) 25.
Zw B. Zwiebach, A First Course in String Theory (Cambridge University Press, 2004).
Pol J. Polchinski, String Theory, Vol. 2, Superstring Theory and Beyond (Cambridge University Press, 1998).
GP R. Gambini and J. Pullin, A First Course in Loop Quantum Gravity (Oxford University Press, 2011)
Rovelli C. Rovelli, Living Rev. Rel. 11 (2008) 5.
BMS M. Bojowald, R. Maartens and P. Singh, Phys. Rev. D 70 (2004) 083517.
SS Y. Shtanov and V. Sahni, Phys. Lett. B 557 (2003) 1.
ST V. Sahni and A. Toporensky, Phys. Rev. D 85 (2012) 123542.
TurokSteinhardt P. J. Steinhardt and N. Turok, Phys. Rev. D 65 (2002) 126003.
R A.G. Riess et al., Astron. J. 116 (1998) 1009.
P S. Perlmutter et al., Astrophys. J. 517 (1999) 565.
Z F. Zwicky, Helvetica Physica Acta 6 (1933) 110.
V V. C. Rubin and Jr. W. K. Ford, Astrophys. J. 159 (1970) 379.
clw E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D 57 (1998) 4686.
zws I. Zlatev, L. M. Wang and P. J. Steinhardt, Phys. Rev. Lett. 82 (1999) 896.
Avelino P. P. Avelino, L. Losano and J.J. Rodrigues, Phys. Lett. B 699 (2011) 10.
DSS M. P. Dabrowski, T. Stachowiak, and M. Szydlowski, Phys. Rev. D 68 (2003) 103519.
RS0 R. Sengupta, B. C. Paul and P. Paul, Pramana – J. Phys. 96 (2022) 114.
Bil N. Bilic et al., JCAP 08 (2019) 034.
RS1 R. Sengupta, P. Paul, B. C. Paul, S. Ray, Int. Jour. of Mod. Phys. D 28 (2019) 1941010.
NOT S. Nojiri, S. D. Odintsov and P. V. Tretyakov, Phys. Lett. B 651 (2007) 224.
NOG S. Nojiri, S. D. Odintsov and O. G. Gorbunova, J. Phys. A 39 (2006) 6627.
SKK M. Szydlowski, A. Kurek and A. Krawiec, Phys. Lett. B 642 (2006) 171.
HL D. Huterer and E. V. Linder, Phys. Rev. D 75 (2007) 023519.
DGP G. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B 485 (2000) 208.
Sahni V. Sahni and Y. Shtanov, JCAP 11 (2003) 014.
Subenoy S. Chakraborty, A. Banerjee and T. Bandyopadhyay, arXiv:0707.0199.
RS L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 4690.
OOS S. D. Odintsov, V. K. Oikonomou and E. N. Saridakis, Annals of Physics 363 (2015) 141.
RS2 S. K. Tripathy, B. Mishra, S. Ray and R. Sengupta, Chinese Journal of Physics 71 (2021) 610.
pal S. Pal, S. Bharadwaj and S. Kar, Phys.Lett. B 609 (2005) 194.
Penrose R. Penrose, AIP Conference Proceedings 1446 (2012) 233.
EM G F. R. Ellis and R. Maartens, Class. Quant. Grav. 21 (2004) 223.
E2 G F. R. Ellis, J. Murugan and C. G. Tsagas, Class. Quant. Grav. 21 (2004) 233.
E3 D. J. Mulryne, R. Tavakol, J. E. Lidsey and G. F. R. Ellis, Phys. Rev. D 71 (2005) 123512.
M1 S. Mukherjee, B. C. Paul, S. D. Maharaj and A. Beesham, arXiv:gr-qc/0505103 (2005).
Starobinsky A. A. Starobinsky, Phys. Letts. B 91 (1980) 99.
M2 S. Mukherjee, B. C. Paul, N. Dadhich, S. D. Maharaj and A. Beesham, Class. Quant. Grav. 23 (2006) 6927.
Tanwi A. Banerjee, T. Bandyopadhyay and S. Chakraborty, Grav. Cosmol. 13 (2007) 290.
BiplabPaik B. Paik, M. Y. Khlopov, M. Kalam and S. Ray, Physics of the Dark Universe 32 (2021) 100823.
BCP1 B. C. Paul and S. Ghosh, Gen. Rel. and Grav. 42 (2010) 795.
BCP2 B. C. Paul, S. D. Maharaj and A. Beesham, arXiv:2008.00169.
BH M. Bordemann, J. Hoppe, Phys. Lett. B 317 (1993) 315.
Gori V. Gorini, A. Kamenshchik and U. Moschella, Phys. Rev. D 67 (2003) 063509.
U U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, Mon. Not. Roy. Astron. Soc. 344 (2003) 1057.
Bento M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D 66 (2002) 043507.
AP A. R. Amani and B. Pourhassan, Int. Jour. of Theoretical Phys. 52 (2013) 1309.
P2 H. Saadat and B. Pourhassan, Int. Jour. of Theoretical Phys. 52 (2013) 3712.
P3 H. Saadat and B. Pourhassan, Int. Jour. of Theoretical Phys. 53 (2014) 1168.
P4 A. R. Amani and B. Pourhassan, Int. Jour. of Geom. Methods in Mod. Phys. 11 (2014) 1450065.
ud U. Debnath, A. Banerjee, and S. Chakraborty, Class. Quant. Grav. 21 (2004) 5609.
wu Y-B. Wu et al., Mod. Phys. Lett. A 30 (2015) 1550005.
RS4 S. Ray et al., Int. Jour. of Mod. Phys. D 30 (2021) 2150093.
P5 H. Saadat and B. Pourhassan, Astrophys. Space Sci. 343 (2013) 783.
P6 H. Saadat and B. Pourhassan, Astrophys. Space Sci. 344 (2013) 237.
P7 B. Pourhassan, Int. Jour. of Mod. Phys. D 22 (2013) 1350061.
P8 J. Sadeghi, B. Pourhassan, M. Khurshudyan and H. Farahani, Int. Jour. of Theoretical Phys. 53 (2014) 911.
P9 E.O. Kahya, B. Pourhassan and S. Uraz, Phys. Rev. D 92 (2015) 103511.
PK B. Pourhassan and E.O. Kahya, Advances in High Energy Physics 2014 (2014) 231452.
KKPM E.O. Kahya, M. Khurshudyan, B. Pourhassan and R. Myrzakulov, A. Pasqua, Euro. Phys. Jour. C 75 (2015) 43.
PK2 B. Pourhassan and E.O. Kahya, Results in Phys. 4 (2014) 101.
KP E. O. Kahya and B. Pourhassan, Astrophys. Space Sci. 353 (2014) 677.
P10 J. Sadeghi, H. Farahani, B. Pourhassan, Eur. Phys. J. Plus 130 (2015) 84.
P11 B. Pourhassan, Canadian Jour. of Phys. 94 (2016) 659.
P12 B. Pourhassan, H. Farahani, S. Upadhyay, New Astronomy 86 (2021) 101569.
Biswas M. Biswas, S. Maity and U. Debnath, Jour. of Holography Applications in Physics, 1(1) (2021) 71.
Zhu M. Zhu and Y. Zheng, JHEP 11 (2021) 163.
SD S. Dutta, S. Mukerji and S. Chakraborty, Advances in High Energy Physics, 2016 Article ID 7404218 (2016).
Salehi A. Salehi, Phys. Rev. D 94 (2016) 123519.
BP B. Pourhassan, Physics of the Dark Universe 13 (2016) 132.
KP2 E. O. Kahya and B. Pourhassan, Mod. Phys. Lett. A Vol. 30, 13 (2015) 1550070.
B1 B.C. Paul, P. Thakur and S. Ghose, Mon. Not. Royal Astron. Society 407 (2010) 15.
B2 B.C. Paul, P. Thakur and S. Ghose, Mon. Not. Royal Astron. Society 413 (2011) 686.
Sahni02 V. Sahni et al., J. Exp. Theor. Phys. Lett. 77 (2003) 201.
X S. K. J. Pacif, S. Arora and P. K. Sahoo, Phys. Dark Universe 32 (2021) 100804.
Y A. Al Mamon and K. Bamba, Eur. Phys. J. C 78 (2018) 862.
Riess Riess et al., ApJ 826 (2016) 56.
Debulac Delubac et al., A&A 574 (2015) A59.
|
http://arxiv.org/abs/2307.01144v1
|
20230703163420
|
Spin-momentum locking breakdown on plasmonic metasurfaces
|
[
"Fernando Lorén",
"Cyriaque Genet",
"Luis Martín-Moreno"
] |
physics.optics
|
[
"physics.optics",
"cond-mat.mes-hall"
] |
[email protected]
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, 50009 Zaragoza, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, 50009 Zaragoza, Spain
University of Strasbourg and CNRS, CESQ & ISIS (UMR 7006), 8, allée G. Monge, 67000 Strasbourg, France
[email protected]
Instituto de Nanociencia y Materiales de Aragón (INMA), CSIC-Universidad de Zaragoza, 50009 Zaragoza, Spain
Departamento de Física de la Materia Condensada, Universidad de Zaragoza, 50009 Zaragoza, Spain
We present a scattering formalism to analyze the spin-momentum locking in structured holey plasmonic metasurfaces. It is valid for any unit cell, with arbitrary positions and orientations of the holes. The emergence of spin-momentum locking is found to originate in the unit-cell configuration. Additionally, we find that there are several breakdown terms spoiling the perfect spin-momentum locking polarization. We prove that this breakdown also appears in systems with global symmetries of translation and rotation of the whole lattice, like the Kagome lattice. Finally, we present the excitation of surface plasmon polaritons as the paramount example of the spin-momentum locking breakdown.
Spin-momentum locking breakdown on plasmonic metasurfaces
Fernando Lorén, Cyriaque Genet and Luis Martín-Moreno
August 1, 2023
=========================================================
§ INTRODUCTION
Metasurfaces based on plasmonic arrays have been demonstrated to have a plethora of applications <cit.> such as sensing <cit.>, imaging <cit.>, or telecommunications <cit.>. In particular, geometric phase metasurfaces (GPMs) have gained significant attention in recent years due to their ability to manipulate the polarization of light waves in a controllable manner <cit.>. One important property of these metasurfaces is that they can exhibit spin-momentum locking (SML), which refers to the coupling between the polarization and the momentum of the involved light waves <cit.>.
Despite the evidenced applicability of these plasmonic GPMs and numerous numerical studies, no rigorous first-principles theoretical analysis had been developed. There have been studies for continuously space-variant structures <cit.> and for structures with translation and rotation symmetries of the whole lattice under stringent conditions for the direction of the electric field <cit.>. Recently, we have applied a scattering formalism to study holey plasmonic GPMs that present a chiral arrangement in the unit cell <cit.>.
This article presents a general analysis of the SML on GPMs, extending our previous study to lattices that present full translation and rotation symmetry. In particular, we apply it to the Kagome lattice, which has been considered as a platform for GPMs <cit.> and also studied due to its relevance in antiferromagnets <cit.>. Our results provide a comprehensive understanding of the SML mechanism on GPMs and have important implications for designing and optimizing these metasurfaces. Based on this general formalism, we demonstrate that the appearance of the SML breakdown is ubiquitous for any system, revealing the interplay between the SML and the linear character of the surface plasmon polaritons (SPPs). The SML breakdown appears in systems with and without global rotation symmetries, both of which will be considered below.
§ THEORETICAL FORMALISM
The general derivation of the scattering formalism used in this paper is provided in the Supplemental Material of <cit.>. In this section, we present the essential elements required to comprehend the relevant terms of the formalism, along with the article's results.
We consider a general plasmonic metasurface, that is, a metal slab characterized by a periodically repeated unit cell in which an arbitrary number of elements (N) is distributed. A huge variety of shapes can be considered <cit.>, yet we will focus on one of the simplest ones, rectangular dimples, which corresponds to studying our metasurfaces in reflection. Analyzing them in transmission, if we had considered holes, would lead to the same main results. Each dimple has a short side a, a long side b, and depth d. Furthermore, each dimple is defined by its position (r⃗_α = (x_α, y_α)^T) and the angle with respect to the u⃗_x direction (θ_α), where α is the index associated with each dimple.
An electromagnetic (EM) plane wave impinges on our metasurface with an in-plane wavevector k⃗^in = k^in_x u⃗_x + k^in_y u⃗_y and an incident polarization σ_in, and our goal is to compute the reflection coefficients into the different Bragg orders (see Figure <ref>). For this purpose, we employ the coupled-mode method (CMM), which has been extensively used in the study of EM properties in metallic dimple arrays <cit.>. The CMM expands the EM fields in plane waves in the free space regions and waveguide modes inside the dimples, and finds the electric field amplitudes by properly matching the EM fields at the interfaces.
The reciprocal lattice vectors that define our unit cell in the Fourier space are G⃗_1 and G⃗_2. The Bragg modes are characterized by an in-plane wavevector k⃗_m = k⃗^in + m_1 G⃗_1 + m_2 G⃗_2 and a polarization σ. We will combine the integers m_1 and m_2 into a single index: m = (m_1, m_2), for notational simplicity.
We can treat the metallic structure using the surface impedance boundary conditions (SIBC) approximation (see Appendices <ref>, <ref> and <ref>). However, here we use the perfect electric conductor (PEC) approximation because it is enough to describe the systems qualitatively <cit.>.
It is convenient to express the polarization of each Bragg mode on the circular polarization (CP) basis to study the SML provided by our metasurface. We represent the reflection coefficients as spinors to contain both spin components: 𝐫_m = (r_m^+, r_m^-)^T, where ± denote the right- and left-handed polarization (or spin), each of them defined within the plane perpendicular to the wavevector associated to the Bragg mode m. This representation is chosen because the spin of a plane wave is conserved upon reflection by a mirror <cit.> (while the helicity changes sign).
The reflection coefficients in the CP basis with respect to the propagation directions satisfy the following equations
𝐫_m = - δ_m0 𝐢_0 + C_m0 Y_0 𝐢_0 - ∑_m' C_mm' Y_m' 𝐫_m'.
The first term is the specular reflection, being 𝐢_0 the amplitude of the incident plane wave and δ_m0 the Kronecker delta. C_mm' are the geometric couplings <cit.>, which are 2 × 2 matrices operating in polarization space. They couple different Bragg modes (m' with m) via scattering with the plasmonic metasurface and encode the geometry of the unit cell through the overlaps between the Bragg and the waveguide modes.
Y_m' are also 2 × 2 matrices representing the modal admittances. They relate the in-plane magnetic field to the electric one and, in the CP basis, can be written as Y_m' = Y̅_m' 1 + Δ_m' σ_x, where 1 and σ_x are the 2 × 2 unit matrix and the Pauli matrix that swaps spin states, respectively. In terms of the linear p (transverse magnetic) - s (transverse electric) polarized basis, Y̅_m'≡ (Y_m' p + Y_m' s)/2 and Δ_m'≡ (Y_m' p - Y_m' s)/2. For a plane wave with frequency ω and in-plane wavevector k_m' = |k⃗_m'| propagating in a uniform medium with dielectric constant ϵ, the modal admittances are Y_m'p = ϵ / q_m'z and Y_m's = q_m'z, where q_m'z= √(ϵ - q_m'^2) (q_m' = c k_m' / ω and c is the speed of light). Notice that Δ_0 = 0 at normal incidence, while both Y̅_m' and Δ_m' diverge at the Rayleigh points (i.e., whenever a diffractive order becomes tangent to the metal-dielectric interface).
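A minimal numerical sketch (our own illustration) of these admittances makes both limits explicit: Δ_m' vanishes at normal incidence, while Y̅_m' and Δ_m' diverge as the mode approaches the Rayleigh condition q_m' → √ε:

```python
import numpy as np

def admittances(q, eps=1.0):
    """Modal admittances for a plane wave with normalized in-plane momentum q."""
    qz = np.sqrt(eps - q**2 + 0j)       # normalized z-momentum (complex beyond cutoff)
    Yp = eps / qz                       # p (TM) admittance
    Ys = qz                             # s (TE) admittance
    return (Yp + Ys)/2, (Yp - Ys)/2     # Ybar, Delta

print(admittances(0.0))      # Delta = 0 at normal incidence
print(admittances(0.999))    # both Ybar and Delta grow large near the Rayleigh point
```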
The geometric couplings allow us to explore the SML emergence because they provide the coupling between two different Bragg modes and their corresponding CP components. They can be written as
C_mm' = R^k(m) ← z C_mm'^z R^z ← k(m').
The interaction between the Bragg modes is mediated by the dimples, so the in-plane EM fields are the ones playing a role in the couplings. Therefore, the origin of the SML resides in the properties of the geometric couplings in the CP basis with respect to the u⃗_z direction, C_mm'^z. However, each Bragg mode is transversal, so its polarization is defined with respect to the propagation direction. Then, we need the R's to encapsulate the change of basis between the u⃗_z direction and the propagation direction.
The expression is just R^k(m)← z = 1/2[ (√(q_mz^2 + q_m^2)/q_mz + 1) 1 + (√(q_mz^2 + q_m^2)/q_mz - 1) σ_x ]. So, σ_x also appears in C_mm', swapping the spin states as in the modal admittances.
On the other hand, the expression for C_mm'^z is:
C_m m'^z = C' ∑_α=0^N-1 S_mα S_m' α^*,
where C' is the dimple cross-section, which depends on the dimple area and depth, and the impedance of the waveguide mode; and S_mα is a geometrical factor that measures how well a given EM plane wave overlaps with the fundamental mode in the dimple (details in Appendix <ref> and <cit.>).
Both σ_x appearances (in Y_m' and C_mm') contribute to the mixing of the spin components of the Bragg modes, reducing the SML contrast and producing what we coined spin-momentum locking breakdown in <cit.>. We have shown that the SML breakdown terms are ubiquitous to any configuration, independently of whether it hosts global rotation symmetries or not. The paramount example of the relevance of the SML breakdown is the excitation of SPPs, because both Δ_m' and the factor (√(q_mz^2 + q_m^2)/q_mz - 1) appearing in R rise and become as large as Y̅_m' and (√(q_mz^2 + q_m^2)/q_mz + 1).
In the succeeding sections, we will describe two different, although related, structures: without and with global rotation symmetries. For both, we will present the SML mechanism derived from their geometric couplings and the SML breakdown effects.
§ SPATIALLY ROTATED DIMPLES ALONG U⃗_X DIRECTION
We consider a rectangular unit cell of N=3 dimples evenly spaced along the u⃗_x direction of the unit cell, with L being the distance between the centers of the two nearest dimples in both the x- and y-directions. We consider that θ_α varies linearly with α: θ_α = 2 π n_w α/ N, where the winding number n_w defines the number of complete 2π rotations along the unit cell. The system is depicted in Figure <ref>a, where the winding number is n_w = 1. The case presented in <cit.> is similar, and the appearance of SML breakdown was already demonstrated there. The choice of N=3 and n_w = 1 is based on the system considered in the next section, the Kagome lattice, whose unit cell can be seen as three clusters of three dimples each, with winding numbers of n_w=1 as well. Another reason for considering N=3 is that the rotation steps of 2π/3 are very far from the adiabatic and continuous condition required to apply the Berry phase formalism, which was conceived to analyze adiabatic and continuous deformations of a closed spatial path.
Notice that although the dimples perform a step-wise rotation along the unit cell, the whole lattice does not support global rotation symmetry.
For this case, the reciprocal lattice vectors are: G⃗_1 = 2π / (N L ) u⃗_x and G⃗_2 = 2π / L u⃗_y. Considering m_2 = m _2' = 0 is enough to explore the underlying physics because there is no inversion symmetry breaking along the u⃗_y direction <cit.>. Thus, we consider m = m_1, m' = m_1' and k_y^m = k_y^m' = 0. Besides, the small-dimple approximation simplifies the overlapping integrals by considering the dimples much smaller than the wavelength. Then, C_mm'^z reads
C_{mm'}^z = C ∑_{α=0}^{2} e^{i 2πα (m' - m) / N}\begin{pmatrix} 1 & e^{-i 2π 2 n_w α / N} \\ e^{i 2π 2 n_w α / N} & 1 \end{pmatrix}
= C N \left(δ_{m, m' + n_0 N} 1 + ∑_{s = ±}δ_{m, m' + n_0 N - 2n_w s} σ_s \right),
where n_0 is any integer, σ_± are Pauli matrices that increase and decrease spin, respectively, and C = 4 a b C' / (π ^2 A_uc), being A_uc the area of the unit cell.
The SML mechanism is derived exactly from Eq. <ref>. The first term corresponds to the spin-preserving processes and the associated Bragg law is k_x^out = k_x^in + n_0 G^0, with G^0 = 2π/L. Two Bragg modes with a difference in indices proportional to N can be coupled if spin is preserved. The second term describes the spin-flipping processes and the associated Bragg law is k_x^out = k_x^in + n_0 G^0 ∓ k_g, where k_g = 2π 2n_w / (NL) is the geometric momentum. Two Bragg modes with a difference in indices proportional to N ± 2 n_w can be coupled if spin is changed to ∓ 1, which is exactly the spin-to-momentum conversion of the SML.
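These selection rules can be checked directly by carrying out the sum over the three dimples numerically; a small sketch (our own illustration, not part of the original formalism) evaluating the normalized couplings C^z_{m0}/(CN) acting on an incident spin +:

```python
import numpy as np

N, n_w = 3, 1

def c_z(m, mp=0):
    """Normalized geometric coupling C^z_{m m'} / (C N) for the N=3 chain."""
    out = np.zeros((2, 2), dtype=complex)
    for alpha in range(N):
        phase = np.exp(1j*2*np.pi*alpha*(mp - m)/N)
        g = np.exp(1j*2*np.pi*2*n_w*alpha/N)
        out += phase * np.array([[1.0, np.conj(g)], [g, 1.0]])
    return out / N

for m in range(-3, 4):
    spin_out = c_z(m) @ np.array([1.0, 0.0])   # incident spin +
    print(m, np.round(np.abs(spin_out), 3))
# spin is preserved for m = 0, ±3 and flipped for m = -1, 2, as stated above
```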
To illustrate this, we take an incident wave with spin + ≡ (1,0)^T and represent both spin components of the normalized amplitudes of the geometric couplings in the CP basis. That is, 𝐜_m_1,0≡ (c_m_1, 0^+,c_m_1, 0^-)^T = C^z_m 0· (1,0)^T / (CN).
In Figure <ref>b, we represent |c_m_1,0^±|. The SML is evident. Spin is preserved for m_1 = 0, ± 3, which are multiples of N; and spin is flipped for m_1 = 2,-1, which are 2 n_w and 2 n_w - N respectively. Hence, the exact SML mechanism arises from the geometric couplings with respect to the u⃗_z direction, C_mm'^z.
When computing the full EM system (reflection coefficients), breakdown terms appear in both geometric couplings and modal admittances. Additionally, there is the contribution from the specular reflection. As we want to study the interaction of the light with the dimple lattice, we define Δ𝐫_m = 𝐫_m + δ_m0 𝐢_0, which removes the specular reflection from the zero order for a better observation of the SML breakdown.
In Figure <ref>c we represent |Δ r^±_m_1,0| for an incoming plane wave impinging normally to the metasurface with spin + and energy ω = 3 eV. The consequences of the SML breakdown terms are already noticeable: all the Bragg modes are a combination of both CP states, and the perfect SML does not hold anymore but is recognizable. Since at that frequency SPP resonances are not excited, the general behavior is still similar to the perfect SML.
Note that we are studying a plasmonic metasurface and the breakdown is maximum when a plasmonic resonance is excited <cit.>. Thus, we show the reflection coefficients at a plasmonic resonance in Figure <ref>d. We represent |Δ r^±_m_1,0| at a SPP resonance associated with the Bragg modes m_1 = ± 3. We use an incoming plane wave impinging normally on the metasurface with spin + and energy ω = 2.69157 eV. The consequences of the SML breakdown terms are now predominant: |Δ r^±_±3,0| are very large and both spin components are similar, which is characteristic of the linear p polarized character of the SPP. Moreover, the perfect SML behavior cannot be recognized because of the SML breakdown, with both spin components of all the Bragg modes spoiled and mixed.
§ KAGOME LATTICE
In this section, we present the main result of the article: the appearance of the SML breakdown in a system with combined translation and rotation symmetry of the whole lattice. This is the staggered (or √(3)×√(3)) Kagome lattice (KL) <cit.>. The reciprocal lattice vectors of the KL are: G⃗_1 = 2 π / (3L) u⃗_x and G⃗_2 = π / (3L) (- u⃗_x + √(3)u⃗_y). We will analyze its geometric couplings, as well as the reflection coefficients.
This symmetry is important because it has been used in other works <cit.> to study the appearance of SML via group theory arguments, although restricted to waves with an electric field perpendicular to the surface and at normal incidence.
Figure <ref>a shows a schematic representation of the considered KL. The unit cell is defined by the dashed lines and is composed of N=9 dimples, defined by the positions of their centers and their angles with respect to the u⃗_x direction (see Table <ref> in Appendix <ref>). These nine dimples can be subdivided into three similar clusters {α} = {{0,1,2},{3,4,5},{6,7,8}}. The dimples in each cluster are distributed forming an equilateral triangle, with angles that are rotated step-wise with a winding number of n_w = 1.
Each triangular cluster has the same number of dimples and same winding number as the rectangular unit cell of the previous section. However, they have different spatial distribution. Consequently, the involved Bragg modes in the KL host similar, but different, coupling processes.
The geometric couplings in the circular polarization basis with respect to the u⃗_z in the PEC and small-dimple approximations are,
C_{mm'}^z = C ∑_{α=0}^{8} e^{i (k⃗_{m'} - k⃗_m)· r⃗_α}\begin{pmatrix} c_{++} & c_{+-} e^{- i 2 θ_α} \\ c_{-+} e^{i 2 θ_α} & c_{--} \end{pmatrix}
= C A^{mm'}\begin{pmatrix} c_{++} δ_{m_1 + m_2, m_1' + m_2' + 3 n_0} & - c_{+-} δ_{m_1 + m_2, m_1' + m_2' + 3 n_0 - 2 n_w} \\ -c_{-+} δ_{m_1 + m_2, m_1' + m_2' + 3 n_0 + 2 n_w} & c_{--} δ_{m_1 + m_2, m_1' + m_2' + 3 n_0} \end{pmatrix},
where we have defined c_σσ' = (k⃗_m ·σ⃗) (σ⃗' ·k⃗_m') / (k_m k_m'), with σ⃗ = u⃗_x + i σu⃗_y and σ = ±. These c_σσ' are the projections of the Bragg modes m and m' onto the circular polarizations σ and σ', respectively. The Kronecker deltas provide the selection rules between these Bragg modes, n_0 being an integer. Besides, depending on the Bragg modes to be coupled, the coupling amplitude is different: |A^mm'| = N if both Δ_1 and Δ_2 are even, and |A^mm'| = N/3 in the remaining cases, where Δ_1/2 = m_1/2'-m_1/2. This is inferred from the sum over the dimples in the unit cell, in the first line of Equation <ref>.
Equation <ref> rules two different processes. One process (given by the diagonal elements of C^z_mm') conserves spin. The corresponding Bragg law, called standard Bragg law <cit.> is k⃗^out = k⃗^in + m_1 G⃗_1 + m_2 G⃗_2 such that m_1 + m_2 = 3 n_0 (notice that the incident plane wave corresponds to m_1' = m_2' = 0) . The other process flips spin (off-diagonal elements of C^z_mm'). The corresponding Bragg law, called spin-orbit Bragg law <cit.>, satisfies another condition: m_1 + m_2 = 3 n_0 ∓ 2 n_w, which is exactly the SML mechanism.
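The two Bragg laws can be enumerated explicitly; a small sketch (our own) classifying the lowest diffraction orders for an incident wave with m_1' = m_2' = 0 and n_w = 1:

```python
n_w = 1

def channels(m1, m2, n_max=3):
    """Which scattering channels are allowed for the diffraction order (m1, m2)?"""
    allowed = []
    for n0 in range(-n_max, n_max + 1):
        if m1 + m2 == 3*n0:                      # standard Bragg law: spin preserved
            allowed.append('spin-preserving')
        if m1 + m2 == 3*n0 + 2*n_w:              # spin-orbit Bragg law: + flips to -
            allowed.append('spin-flip (+ to -)')
        if m1 + m2 == 3*n0 - 2*n_w:              # spin-orbit Bragg law: - flips to +
            allowed.append('spin-flip (- to +)')
    return allowed

for m1 in range(-3, 4):
    print((m1, 0), channels(m1, 0))
# (0,0) and (±3,0) preserve spin; (2,0) and (-1,0) flip + to -; (1,0) and (-2,0) flip - to +
```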
Figure <ref>b shows the SML mechanism derived from the geometric couplings. We represent 𝐜_m_1,m_2 = C^z_m 0· (1,0)^T / (CN), where (1,0)^T is the spinor for the spin +. Although we have considered both m_1 and m_2 in the calculation, we take m_2 = 0 for a simpler representation. We observe the feature of the coupling amplitudes A^mm' of the different processes. It is easy to observe that the SML mechanism that we described above is satisfied.
Once we have shown how the SML arises from the geometric couplings for the KL, we look at Δ𝐫_m. In Figure <ref>c we represent |Δ r^±_m_1,0| for an incoming plane wave with spin +, energy ω = 3 eV and normal to the metasurface. Since SPPs are not excited at that frequency, the general behavior is similar to the perfect SML, although we already see some signatures of the breakdown. The amplitude relation between the different modes is no longer exactly satisfied, and we also observe small amplitudes of modes that should be zero if SML were exact.
Finally, in Figure <ref>d, we show the reflection coefficients when a plasmonic resonance is excited. We represent |Δ r^±_m_1,0| at a SPP resonance associated with the Bragg modes m_1 = ± 3. We use an incoming plane wave with spin +, energy ω = 2.69367 eV and impinging normally on the metasurface. The SML breakdown terms have acquired a governing relevance. |Δ r^±_±3,0| are very large and both spin components are similar, which is characteristic of the linear p polarized character of the SPP. From these resonantly excited modes, successive couplings with other modes can occur. In consequence, we can no longer recognize the expected SML, because both spin components of all the Bragg modes are spoiled and mixed.
The physical interpretation is as follows: the EM fields carry CP light perpendicular to the propagation direction of the plane waves. However, the system has a particular symmetry perpendicular to the planar metasurface (the u⃗_z direction). This mismatch means that when the CP light is projected onto the planar surface it becomes elliptical (a combination of the two CP states), and the SML is then spoiled.
§ CONCLUSION
We have shown that even a system with combined translation and rotation symmetry of the whole lattice suffers spin-momentum locking breakdown. The physical interpretation lies in the elliptical projection onto the planar metasurface of the circularly polarized light. Therefore, together with the results obtained in <cit.>, this shows that any system, with or without global lattice symmetries, presents breakdown of the SML. Nonetheless, we stress that the breakdown terms are often small, so the SML is a useful concept. However, in some cases such as the plasmonic resonances, breakdown terms become very relevant. Plasmon resonances are, thus, the paramount example of SML breakdown.
Despite the occurrence of this breakdown, it presents an opportunity to optimize the system in order to minimize it. Additionally, the results presented in this work could renew other application perspectives, such as optovalleytronic systems <cit.>, non-linear hybrid metasurfaces <cit.>, and topology-based high-resolution sensors <cit.>.
§ ACKNOWLEDGEMENTS
We acknowledge Project PID2020-115221GB-C41, financed by MCIN/AEI/10.13039/501100011033, and the Aragon Government through Project Q-MAD.
This work is part of the Interdisciplinary Thematic Institute QMat of the University of Strasbourg, CNRS, and Inserm. It was supported by the following programs: IdEx Unistra (ANR-10-IDEX-0002), SFRI STRATUS project (ANR-20-SFRI-0012), and USIAS (ANR-10-IDEX-0002-02), under the framework of the French Investments for the Future Program.
§ DETAILS OF THE THEORETICAL FORMALISM
Here, we extend the calculations presented in the main text and introduce the required quantities such as C' and the overlapping integrals.
We present the formalism within the surface impedance boundary conditions (SIBC) approximation. The SIBC approximation provides a more accurate derivation because it considers the real dielectric constant of the metal ϵ_M (ω), via the Lorentz-Drude model <cit.>, and also the penetration of the EM fields into the metal through the surface impedance z_s = 1/ √(ϵ_M). Yet, we consider z_s = 1/ √(ϵ_M + 1), which is a phenomenological correction that leads to the exact dispersion relation of surface plasmon polaritons (SPPs) in a metal-vacuum interface. The reflection coefficients are now
f^+_m 𝐫_m = - f_0^- δ_m0 𝐢_0 + C_m0 Y_0 𝐢_0 - ∑_m' C_mm' Y_m' 𝐫_m',
where the SIBC signatures are encapsulated in the geometric couplings and in the quantities f_m^±, which are 2×2 matrices in the CP basis with respect to the propagation of the m-th Bragg mode, that depend on the surface impedance z_s such that:
f_m^± = \frac{1}{2}\begin{pmatrix} f_{mp}^±+f_{ms}^± & f_{mp}^±-f_{ms}^± \\ f_{mp}^±-f_{ms}^± & f_{mp}^±+f_{ms}^± \end{pmatrix},
with f_mσ^± = 1 ± z_s Y_mσ and σ = { p,s}.
The dependence of the geometric couplings on the metal approximation is encapsulated in the constant C':
C'_SIBC = 1/Yf^+ f^- (1+Φ)/f^+ - f^- Φ ,
whereas
C'_PEC = 1/Y1+Φ/1-Φ,
where Y is the modal admittance of the fundamental waveguide mode, f^± = 1 ± z_s Y, Φ = - e^i2 k_z^w d, and k_z^w is the propagation constant of the fundamental waveguide mode along the z-direction. For a rectangular dimple with long side b, filled with a material of dielectric constant ϵ_d, k_z^w = √(ϵ_d (ω/c)^2 - k_w^2), with k_w = π / b.
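For illustration, the two expressions for C' can be evaluated numerically. The following Python sketch transcribes them directly; the explicit form Y = k_z^w/(ω/c) assumed for the modal admittance, the unit conventions, and the function name are our own choices for the example and are not taken from the paper.

import numpy as np

def c_prime(omega_eV, d, b, eps_d=1.0, eps_M=None):
    """Return (C'_PEC, C'_SIBC) for the fundamental waveguide mode of a dimple."""
    hbar_eV_s = 6.582119569e-16            # reduced Planck constant in eV*s
    c = 299792458.0                        # speed of light in m/s
    k0 = omega_eV / hbar_eV_s / c          # omega / c
    k_w = np.pi / b
    k_zw = np.sqrt(eps_d * k0**2 - k_w**2 + 0j)   # propagation constant k_z^w
    Y = k_zw / k0                          # assumed form of the modal admittance
    Phi = -np.exp(2j * k_zw * d)
    C_pec = (1.0 / Y) * (1.0 + Phi) / (1.0 - Phi)
    if eps_M is None:
        return C_pec, None
    z_s = 1.0 / np.sqrt(eps_M + 1.0)       # phenomenological surface impedance
    f_p, f_m = 1.0 + z_s * Y, 1.0 - z_s * Y
    C_sibc = (1.0 / Y) * f_p * f_m * (1.0 + Phi) / (f_p - f_m * Phi)
    return C_pec, C_sibc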
As stated in the main text, the geometric couplings depend on the overlapping integrals S_mσα between the Bragg modes (characterized by m and σ) and the waveguide modes (characterized by the dimple index α). A general expression for the overlapping integrals is intricate because of their dependence on the in-plane momenta and on the size of the dimples (it can be found in <cit.>). However, in the small-dimple approximation, for which the dimple size is smaller than the wavelength, they read
S_m σα = √(a b/2 A_uc)4/π v_m σα e^-i k⃗_m r⃗_α,
where A_uc is the area of the unit cell, σ is the polarization of the considered Bragg mode, v_m p α = (k_x^m cosθ_α + k_y^m sinθ_α) / k_m, and v_m s α = (-k_y^m cosθ_α + k_x^m sinθ_α) / k_m, with k_x^m and k_y^m the x and y components of the in-plane momentum k⃗_m, respectively.
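As a concrete illustration, the small-dimple expression can be coded directly; the Python sketch below simply transcribes the formula above (the function signature and variable names are our own, chosen for the example).

import numpy as np

def overlap_integral(kx_m, ky_m, sigma, a, b, A_uc, r_alpha, theta_alpha):
    """Small-dimple overlapping integral S_{m,sigma,alpha} as written above."""
    k_m = np.hypot(kx_m, ky_m)
    if sigma == "p":
        v = (kx_m * np.cos(theta_alpha) + ky_m * np.sin(theta_alpha)) / k_m
    else:  # sigma == "s"
        v = (-ky_m * np.cos(theta_alpha) + kx_m * np.sin(theta_alpha)) / k_m
    prefactor = np.sqrt(a * b / (2.0 * A_uc)) * 4.0 / np.pi
    phase = np.exp(-1j * (kx_m * r_alpha[0] + ky_m * r_alpha[1]))
    return prefactor * v * phase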
With these expressions, one can readily obtain the geometric couplings for both systems presented in the main text (see Equations <ref> and <ref>).
§ KAGOME LATTICE ELEMENTS
In Tab. <ref>, we present the defining quantities for all the dimples comprising the analyzed Kagome lattice. We label each dimple with an index α and show its center position and its angle.
§ SIBC APPROXIMATION IN THE KL
In this section, we expand on the SML breakdown cases studied in the main text for the Kagome lattice, now including the effect of the SIBC approximation and of finite-size dimples. This is shown in Figure <ref>, where we consider a representative case of non-resonant excitation and another case of resonant plasmonic excitation. In both cases, we plot Δ r_m and sweep m_1. The effect of the SIBC approximation is apparent at first glance: the zero order is larger than the rest (except when an SPP is excited and the resonant modes dominate). The SML breakdown is evident in both panels, although the underlying SML can still be noticed, for instance in the orders m_1 = -1, 2 of Figure <ref>a, where the spin - component is larger than the spin + one. Moreover, in Figure <ref>b we observe for |Δ r_±2^±| the same strong enhancement presented in the main text for |Δ r_±3^±|. Therefore, in the SIBC approximation the SML becomes less evident because of the metal absorption.
In this case and below, we have kept m_2 = 0 not only for the representation but also for the simulation. This does not affect the physical behavior because the G⃗_1 direction presents a breaking of the inversion symmetry <cit.>.
§ ANALYSIS OF THE INCIDENT MOMENTUM IN THE KL
We have focused on the KL by analyzing its SML, the breakdown terms, and their dependence on whether or not a plasmonic resonance is excited. For the latter analysis, we varied the energy and kept normal incidence. However, different SPP resonances can also be excited by varying the incident momentum. This section shows how the reflection coefficients behave when the incident momentum is moved away from normal incidence.
Figure <ref> shows the absolute value of both spin components of two reflection coefficients, r_3,0 and r_2,0, as a function of the incident momentum along the x direction, k_x^in. We have chosen the representative values k_y^in = 0, ω = 1.79 eV, and σ_in = +. Figures <ref>a and <ref>b show that there is only one spin component for each mode, in excellent agreement with the SML features derived from the geometric couplings C^z_mm'. The three small peaks in each subfigure correspond to plasmonic resonances which, given that the breakdown terms have been neglected, preserve the SML. However, when the full calculation is performed, considering all SML breakdown terms, both spin components are non-negligible and the SML is spoiled (see Figures <ref>c and <ref>d). Moreover, when a plasmonic resonance is associated with the Bragg mode represented by the reflection coefficient, the latter is enhanced, as was also seen in the |Δ𝐫_m| plots of the main text.
As expected, the SML breaks down when a plasmonic resonance is excited, because the SPPs are linearly p polarized. However, this breakdown persists even when k_x^in is increased away from resonance. The reason is that, for larger k_x^in, the Bragg modes associated with these reflection coefficients (r_3,0 and r_2,0) are evanescent. Since both breakdown sources (the modal admittances and the change-of-basis matrices) depend on the momentum q_mz of the corresponding Bragg mode along the u⃗_z direction, it follows that the evanescent modes introduce a strong breakdown as well.
§ ANALYSIS OF THE APPROXIMATIONS IN THE KL
The results presented in the main text are computed in the PEC and small-dimple approximations. On the other hand, in Figure <ref> we showed what happens if we calculate the same quantities but in the SIBC approximation and with finite-size dimples. A global comparison is still lacking. For this reason, in Figure <ref>, we display the five possibilities: neglecting the SML breakdown terms (blue), PEC and small-dimple (red), PEC and finite-size (yellow), SIBC and small-dimple (purple), and SIBC and finite-size (green).
Throughout the main text and the rest of the appendices, we have dealt with two of the five approximations detailed in Figure <ref>. In Figures <ref>c, <ref>d, <ref>c, <ref>d, <ref>c and <ref>d we considered the PEC and small-dimple approximations, which we refer to as the "full calculation". In Figure <ref>, we used the SIBC and finite-size approximations. Figure <ref> therefore compares these two and adds the remaining combinations: neglecting the SML breakdown terms, PEC with finite-size dimples, and SIBC with small dimples.
The effects of the different approximations can be seen in Figure <ref>, which shows both spin +/- components of the reflection coefficients. Blue dots represent the case in which the SML breakdown terms are neglected; as a result, some modes are exactly zero (not visible). This approximation is equivalent to the behavior of the geometric couplings C^z_mm'. The remaining approximations represent different levels of SML breakdown. The smallest breakdown is obtained when the metal is treated as a PEC and the dimples are very small, whereas the largest appears when the real metal and finite-size dimples are considered. A general pattern also emerges: the effect of the dimple size is less relevant than that of the metal approximation. That is, choosing small or finite-size dimples produces only a small deviation in the reflection coefficients, whereas a larger difference appears between the PEC and SIBC approximations.
Note that we have stayed away from any plasmonic resonance for this comparison because the plasmonic resonance locations depend on the considered metal approximations.
|
http://arxiv.org/abs/2307.00500v1 | 20230702072029 | CQLite: Communication-Efficient Multi-Robot Exploration Using Coverage-biased Distributed Q-Learning | ["Ehsan Latif", "Ramviyas Parasuraman"] | cs.RO | ["cs.RO", "cs.MA"] |
CQLite: Communication-Efficient Multi-Robot Exploration Using Coverage-biased Distributed Q-Learning
Ehsan Latif Ramviyas Parasuraman
School of Computing, University of Georgia, Athens, GA 30602, USA
Author emails: {ehsan.latif;ramviyas}@uga.edu
=====================================================================================================================================================
Frontier exploration and reinforcement learning have historically been used to solve the problem of enabling many mobile robots to autonomously and cooperatively explore complex surroundings. These methods need to keep an internal global map for navigation, but they do not take into consideration the high costs of communication and information sharing between robots. This study offers CQLite, a novel distributed Q-learning technique designed to minimize data communication overhead between robots while achieving rapid convergence and thorough coverage in multi-robot exploration. The proposed CQLite method uses ad hoc map merging, and selectively shares updated Q-values at recently identified frontiers to significantly reduce communication costs. The theoretical analysis of CQLite's convergence and efficiency, together with extensive numerical verification on simulated indoor maps utilizing several robots, demonstrates the method's novelty. With over 2x reductions in computation and communication alongside improved mapping performance, CQLite outperformed cutting-edge multi-robot exploration techniques like Rapidly Exploring Random Trees and Deep Reinforcement Learning.
Multi-Robot, Exploration, Communication
§ INTRODUCTION
Map-based coverage and exploration is a significant problem of interest in the robotics and multi-robot systems (MRS) community <cit.>. In this problem, robots continuously explore to obtain the full environmental map in a new bounded environment without prior information. It can be helpful in various applications, including search and rescue, domestic service, survey and operations, field robotics, etc. Autonomous exploration and surveillance solutions can also demonstrate the adaptability of the MRS since robots can carry out these missions in different and uncharted areas.
Recent works have been influential in realizing an efficient exploration objective. For example, information-based methods (e.g., <cit.>) typically use the Shannon entropy to describe the uncertainty of the environmental map and construct the optimization problems such that the robot's control variable (e.g., velocity) is continuously optimized during the exploration process. On the other hand, frontier-based methods (e.g., <cit.>) involve deciding the robot's next move (or path) by searching the frontier points on the border of free and unknown points. Often, these methods only produce approximate solutions due to optimization.
Integrating learning with planning solutions is promising, especially for robot exploration <cit.>. In the reinforcement learning (RL) paradigm, robots can continuously improve competence and adapt to the dynamics of natural surroundings by observing the results of navigational choices made in the actual world <cit.>. On the other hand, cooperation among robots in an MRS can help achieve a complex mission through simple distributed approaches <cit.>.
This paper explores the intersection between learning and cooperation, designs a combined solution to achieve efficient map exploration, and provides theoretical support for fast convergence and time complexity. We leverage the benefits of learning-based paradigms for joint exploration. We aim to create a distributed algorithm that gains knowledge through robot-robot information sharing while minimizing communication and computing overheads.
Specifically, we utilize a distributed Q-learning methodology with a coverage-biased reward function and a lightweight communication and information-fusion strategy. In our approach, we reduce communication complexity by sharing only the current state information, i.e., the updated Q-value and the newly explored frontier, instead of the complete Q-table as done in <cit.>.
Fig. <ref> provides an overview of the proposed method implemented in the Robot Operating System (ROS) framework.
The main contributions of this paper are summarized below.
* We propose a novel distributed coverage-biased Q-learning approach (CQLite) for efficient multi-robot map exploration with limited data exchanges.
* We substantiate the potential of our method with theoretical guarantees and extensive simulation experiments. We evaluate the performance of our approach against two state-of-the-art (SOTA) multi-robot exploration methods: Rapidly-exploring Random Trees (RRT) for Optimized Exploration <cit.> and Deep Reinforcement Learning (DRL) for Voronoi-based Exploration <cit.>.
* We open source[<https://github.com/herolab-uga/cqlite.git>] the CQLite as a ROS package for use and further development by the robotics community.
The key idea behind CQLite is to use a coverage-biased reward function to perform efficient exploration while sharing limited information among robots in a distributed fashion. Our method achieves fast convergence with the best coverage performance and reduced communication and update costs compared to the baselines.
Video of sample simulation experiments and real-world demonstrations are also available at <https://youtu.be/n3unL1nuieQ>.
§ RELATED WORK
Map exploration approaches broadly fall into frontier-based and learning-based coverage planning. A robot can be greedily pushed in an occupancy grid map to the closest boundaries <cit.> or to the most uncertain (or informative) regions <cit.>.
In frontier-based strategies, robots expand coverage into unexplored regions by choosing their next waypoints on the frontiers of the explored map. For instance, in <cit.>, the multi-robot map exploration objective is integrated into an optimization framework incorporating Rapidly-exploring Random Trees (RRTs) to increase the effectiveness and efficiency of exploration. However, such frontier-based approaches are constrained by the computational expense of the optimization methods and by the possibility of non-optimal outcomes resulting from the stochastic nature of RRTs.
Researchers have also presented communication-efficient solutions for exploration in multi-robot systems. For instance, Zhang et al. <cit.> introduced MR-TopoMap, based on a topological map, in which each robot explores its surroundings independently while sporadically exchanging topological maps when communication is possible. However, path planning over a topological map can yield sub-optimal paths, and when robots start exploring from the same position they tend to explore the same regions, making it difficult to divide the map into topological partitions.
Corah et al. <cit.> use information-based distributed planning that accounts for communication restrictions. However, the planner's finite-horizon nature can lead to suboptimal exploration paths: it does not reason beyond the given horizon, which makes it harder for the system to exploit knowledge about the future and may prevent robots from efficiently discovering key regions of interest.
More recently, Gao et al. <cit.> reduced inter-robot communication costs by utilizing a mission-based protocol and centralized planning, where the former can actively disconnect robots so that they proceed with distributed (independent) exploration and the latter helps them achieve rendezvous to reconnect and share information. However, computing the super-frontier information is computationally expensive, and the active disconnection strategy may prevent the robots from sharing other critical data during the mission.
A body of research concentrates on Reinforcement Learning (RL) and Q-learning for multi-robot tasks, modifying the learning mechanism in low communication scenarios for better navigation and exploration <cit.> and utilizing deep reinforcement learning to achieve optimality in robotic exploration <cit.>.
However, this method calls for frequent map merging, which raises the cost of updates.
A Deep RL (DRL) approach for cooperative multi-robot exploration using Voronoi cells was proposed in <cit.>. Despite its intriguing concept, it was constrained by training difficulties and sub-optimal solution tendencies.
Further, while DRL has shown promise in some problem spaces, it frequently offers less-than-ideal solutions outside those contexts and cannot guarantee convergence over infinite horizons.
In <cit.>, Q-learning is extended and an RL scheme for multi-robot cooperative tasks is proposed: when robots on the map cannot communicate with one another, the standard reward is transformed into an "internal reward", which allows the robots to keep learning and to collaborate under such challenging circumstances. Another study <cit.> suggests that robots can exploit the knowledge acquired through learning more effectively by combining RL with deep neural networks. Compared to classical RL, this algorithm helps the robot find the best way out of the map more quickly, but it requires merging maps at every time step, which increases the update cost.
An RL-based map navigation technique is proposed in <cit.> to address the unknown-map navigation problem for autonomous ground vehicles. First, image data of the unknown map is collected using a quadrotor's bottom-mounted camera; a virtual map is then created in the simulation environment using image processing; finally, an enhanced Q-learning algorithm is proposed to address the issue that the original greedy strategy repeatedly forces the robot to linger in previously visited states.
Although DRL and Q-learning approaches are popular and solve specific research questions, they tend to provide sub-optimal solutions outside their task space (or domain) and cannot guarantee convergence over infinite horizons. DRL approaches address traditional map exploration but come with high computation, communication, and update costs. To address the slow learning speed and non-convergence of traditional Q-learning, many researchers have improved RL in different respects.
In <cit.>, a Voronoi-based approach using DRL for cooperative multi-robot exploration is proposed. By splitting the environment into Voronoi cells and allocating a robot to each cell, the authors aim to increase exploration efficiency. However, given the complexity and unknown nature of the environment, this strategy is limited by the difficulty of training deep RL models and by the possibility of converging to sub-optimal solutions.
Although some of these methods do not accelerate the convergence of Q-learning, they provide useful ideas.
Another topic related to cooperative exploration is cooperative Simultaneous Localization and Mapping (SLAM). We briefly comment on this literature from the perspective of communication efficiency.
For example, Liu et al. <cit.> proposed a multi-agent SLAM approach that uses efficient communication to reduce bandwidth consumption but lags in computational efficiency. In contrast, the authors in <cit.> proposed a lifelong localization and mapping framework that adapts to changing environments but cannot optimize the communication and computational cost for mapping. Bernreiter et al. <cit.> used spectral graph analysis to enable robots to collaborate on mapping tasks but didn't discuss the computational cost of graph formation and optimization, a key challenge in real-world applications.
The cooperative RL approach proposed in <cit.> faces limitations in terms of computational complexity, which grows with the size of the state space, and training RL agents to collaborate on SLAM tasks can be challenging. These communication and computation limitations have been carefully considered in the design of CQLite, which ensures practicality in real-world applications by sharing limited data in an ad hoc manner and by employing efficient Q-learning to determine the exploration strategy.
To address these gaps in the literature, CQLite combines an efficient information-transfer mechanism with distributed Q-learning and a coverage-biased reward function, achieving communication- and computation-efficient multi-robot cooperation for map exploration tasks.
CQLite departs from RRT (frontier-based) and DRL (learning-based) in two ways: its exploration strategy reduces recurrent frontier visits to avoid mapping overlap, and its Q-learning update strategy improves communication efficiency by sharing and utilizing only the most recently calculated Q-values. Additionally, in both RRT and DRL, robots share locally explored maps at every iteration and apply map merging, which leads to high computational complexity. We reduce this overhead by sharing maps and applying map merging only in an ad hoc manner.
By incorporating these novelties, our proposed CQLite method addresses the limitations of the above approaches, even in cases of limited multi-robot connectivity.
It is worth noting that the objectives of SLAM and exploration approaches are fundamentally different. The SLAM problem focuses on accurately building and merging the map, while the exploration problem focuses on using the available map to determine waypoints to maximize coverage area. In our work, we use an existing map merging method[<https://wiki.ros.org/multirobot_map_merge>] from the literature to perform multi-robot SLAM. At the same time, our proposed CQLite is designed to maximize exploration with low communication and computation costs.
§ PROPOSED APPROACH
In the problem considered here, multiple robots are deployed at random starting locations in an unknown environment.
Following the standard map exploration strategy, the robots navigate towards frontier positions detected from local sensing information. To do this efficiently, a robot must decide which frontier to navigate to after leaving its currently explored region, with the goal of reducing the number of steps taken and the amount of data exchanged with connected robots while considerably enhancing the effectiveness of each robot's exploration. Robots share only the updated Q-value and the newly explored frontier with other robots. Each robot keeps track of its local and shared frontiers to avoid re-exploration. Robots continue to build local maps and share them only when requested by other robots, e.g., when a frontier has already been explored. A robot that cannot find new frontiers merges its local map with maps received from peers to build a global map using a feature-similarity-based map merging technique <cit.>. Each robot's action decisions are based on the shared information and its Q-learning computation.
The whole procedure concerning robot i can be visualized in Fig. <ref>.
§.§ Q-learning
The robot's interaction with the environment is modeled as a Markov decision process. A robot's state is (x, y, θ, active/inactive) in a global frame.
Robots are localized and initialized in a global frame, and their positions are known with respect to a virtually defined bounded region, which can be expanded based on exploration requirements.
We treat frontier positions, obtained via efficient frontier detection <cit.>, as the states for exploration. A robot transitions from state s_t ∈ S to state s_t+1 by taking an action a_t ∈ A based on its state at time t. The action a_t required to reach s_t+1 from s_t can be determined using a discrete-time Hopfield function <cit.>.
The transition probability is defined as T: S × A × S → [0, 1], and the robot receives a reward for each action through a task-specific reward function R: S × A × S → R. The goal is for the robot to learn which action to take in each state so as to maximize the reward accumulated over the entire interaction process.
In Q-learning, all possible state-action pairs are represented in a Q-table whose values are updated through iterative learning. The robot then chooses the best action for each state based on the values in the table. This approach is frequently used in path planning, chess, card games, and other tasks.
Assumptions: For simplicity, we assume a flat ground terrain environment for exploration.
* The robots have an omnidirectional sensory system that can detect the boundary of an obstruction within the maximum sensing range r_s and provides a description of the open space around the robot.
* Each robot has a communication range r_c >> r_s that it can use to broadcast the data stored in its memory. The robot can constantly receive information about its relative position from a neighbor robot inside the r_c communication range.
* Robots are connected through a wireless communication channel and are assumed to form a connected graph throughout exploration, which is practical to achieve in a multi-robot application. Nevertheless, the proposed solution is distributed and ensures maximum coverage even under partial disconnectivity, at the cost of additional exploration and re-exploration time.
Here, we introduce CQLite as a distributed method for robot i, which is at state s_t at time t and independently selects the next state s_t+1 to explore.
The action-selection step of Q-learning seeks the action a that maximizes the Q-value for a given state s, i.e.,
a^* = _a Q_i(s,a),
where a^* is the optimal action for a given state s.
The Q-learning algorithm updates the Q value as
Q_i,t + 1( s_t,a_t)
= (1 - α )Q_i,t( s_t,a_t) + α[ r_i,t + γQ_i,t( s_t+1,a^*)],
where r_i,t is the reward received for taking action a_t in state s_t.
Here, α∈ (0, 1] controls the balance between coverage and delay, and γ is the discount factor that weights present versus future rewards.
This optimization is used in the action-selection step of Q-learning, where the agent selects the action that maximizes its expected future reward.
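For concreteness, a minimal sketch of this action selection and Q-value update might look as follows (Python). The table layout and the treatment of frontiers as both states and actions are our own illustrative choices, with α and γ set to the values used later in the experiments; this is not the released CQLite implementation.

from collections import defaultdict

class QLearner:
    def __init__(self, alpha=0.6, gamma=0.95):
        self.Q = defaultdict(float)        # Q_i(s, a), zero for unvisited pairs
        self.alpha, self.gamma = alpha, gamma

    def best_action(self, s, actions):
        # a* = argmax_a Q_i(s, a) over the currently available actions
        return max(actions, key=lambda a: self.Q[(s, a)])

    def update(self, s, a, r, s_next, next_actions):
        # Q <- (1 - alpha) * Q(s, a) + alpha * [ r + gamma * Q(s', a*) ]
        future = self.Q[(s_next, self.best_action(s_next, next_actions))] if next_actions else 0.0
        target = r + self.gamma * future
        self.Q[(s, a)] = (1.0 - self.alpha) * self.Q[(s, a)] + self.alpha * target
        return self.Q[(s, a)]              # the single value shared with peers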
The objective of CQLite is to achieve maximum coverage in minimum time while avoiding overlapping exploration, which can be formulated as
max_π{P_a^π (t) - λ_i E_t(a|π)} ,
where P_a^π(t) is the probability to cover the unexplored region using for action a using policy π at time t, E_t(a|π) is estimated time to reach the state s_t by taking action a at time t in policy π and λ_i is the cost associated with each step taken by robot i. We have a vector path extracted by containing position waypoints connecting s_t to s_t+1 associated with a <cit.>. For each dimension of path at each control instant t=t_j, we first compute the velocity command as:
v_t_j = K_p · e_t_j + K_I ∑_t=t_0^t_j(e_t_j) ,
where e_t_j = s_t,j - s_t,j-1 represents the instantaneous error between consecutive intermediate states associated with action a (i.e., the feedback) at time t = t_j. K_p and K_I are the proportional and integral gains of the motor controller, which regulate the contributions of the current error and of the error accumulated over time, respectively. These constants are determined from the motion constraints of our differential-drive robots, as discussed by Li et al. <cit.>, and may differ for robots with different physical and motion characteristics; in our case, we set K_P = 2 and K_I = 0.5. Applying simple kinematics, the estimate E_t(a|π) is then:
E_t(a|π) = ∑_j=1^N(e_t_j)^2/v_t_j
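A minimal sketch of this PI velocity command and travel-time estimate, assuming the path is given as a list of waypoints along one dimension and using the gain values quoted above, could be:

import numpy as np

def estimate_travel_time(waypoints, K_p=2.0, K_i=0.5):
    """Sum of (e_{t_j})^2 / v_{t_j} along one dimension of the waypoint path."""
    errors = np.diff(np.asarray(waypoints, dtype=float))   # e_{t_j}
    total_time, accumulated = 0.0, 0.0
    for e in errors:
        accumulated += e
        v = K_p * e + K_i * accumulated                    # PI velocity command
        if v != 0.0:
            total_time += e**2 / v
    return total_time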
To avoid re-exploring an already explored region when considering state s_t, we determine P(s_t ∩ ES_t) as:
P(s_t ∩ ES_t) = ∑_j=1^mP(s_t ∩ es_j)/m
where es_j ∈ ES_t, m is the number of explored states in ES_t, and the overlap probability with each explored state in ES_t is determined as:
P(s_t ∩ es_j) =
1,   if dist(s_t, es_j) ≤ r_i,s
0,   if dist(s_t, es_j) > r_i,s
At each discrete time step t, robot i acquires an observation s_t from the environment, selects a corresponding action a_t, and then receives feedback from the environment in the form of a reward r_i,t = R(s_t,a_t), given by:
r_i,t =
-λ_i,   if s_t ∈ ES_t
λ_i - Q_i,t + ρ(1 - P(s_t ∩ ES_t)) + σ r_i,c,   if s_t ∉ ES_t
where P(s_t ∩ ES_t) is the probability of overlap between the current state s_t and the states ES_t already explored by robot i and the other robots, ρ is a scaling factor that controls the importance of minimizing this overlap, r_i,c is the communication range, and σ is a scaling factor that determines the importance of maximizing the communication range. σ depends on the robot's sensing capabilities and makes the reward function modular for heterogeneous robots with different sensing capabilities.
The state is then updated to s_t+1. The goal of the RL is to select the policy π that maximizes the discounted sum of future rewards, i.e., Q_π(s_1) = ∑_t=1^τ γ^t R(s_t,a_t), in accordance with the Bellman optimality principle.
The reward function in Eq. (<ref>) yields a negative reward whenever the agent revisits an explored state; otherwise, the reward is computed from the step cost, the Q-value, the overlap probability, and the scaling factors.
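The following Python sketch illustrates this coverage-biased reward; states are treated as coordinate tuples, and the values of ρ, σ, and r_i,c are example choices rather than values prescribed by the paper (λ_i = 2 follows the experimental section).

import math

def overlap_probability(s_t, explored_states, r_s):
    """P(s_t ∩ ES_t): fraction of explored states within sensing radius r_s."""
    if not explored_states:
        return 0.0
    hits = sum(1 for es in explored_states if math.dist(s_t, es) <= r_s)
    return hits / len(explored_states)

def reward(s_t, explored_states, q_value, r_s, lam=2.0, rho=1.0, sigma=1.0, r_c=10.0):
    if s_t in explored_states:                      # s_t ∈ ES_t: revisit penalty
        return -lam
    p_overlap = overlap_probability(s_t, explored_states, r_s)
    return lam - q_value + rho * (1.0 - p_overlap) + sigma * r_c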
§.§ Multi-Robot Lite Cooperation
We reduce the communication overhead among the individual exploration-capable robots through a distributed approach that allows each robot to make independent decisions based on local information, with little interaction with other robots. In our lite version of Q-learning, only the current state and the updated Q-value are communicated among nearby robots to encourage cooperation. When another robot receives this information, it updates the corresponding Q-value in its own Q-table and updates its local map. To replicate a limited network range, we use a discovery mechanism based on the distance between (simulated) robots, within which each robot i shares only its current position and its Q-value for each action direction, and the current state is marked as explored to avoid repetitive exploration.
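A sketch of what such a "lite" cooperation message and its handling could look like is given below; the message fields mirror the quantities the text says are shared, while the class and method names are our own illustration rather than the actual ROS interface.

from dataclasses import dataclass

@dataclass
class LiteUpdate:
    robot_id: int
    state: tuple        # sender's current state (frontier position)
    action: int         # action the shared Q-value refers to
    q_value: float      # single updated Q-value, not the whole table
    frontier: tuple     # newly explored frontier

class CooperationLayer:
    def __init__(self, q_table, explored_states):
        self.q_table = q_table             # this robot's local Q-table
        self.explored = explored_states    # set of frontiers known to be explored

    def on_receive(self, msg: LiteUpdate):
        # Overwrite only the received entry instead of merging a full Q-table
        self.q_table[(msg.state, msg.action)] = msg.q_value
        self.explored.add(msg.frontier)    # avoid re-exploring this frontier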
§.§ Exploration Strategy
After searching the map and accumulating experience, the robots build a global Q-table over each cell and action. The Q-table is then turned into a weighted graph G = (𝒮, ℰ, 𝒞), where 𝒮 = {s_1, s_2, ..., s_n} denotes the set of states and ℰ ⊆ |𝒮| × |𝒮| is the set of edges, whose elements indicate whether a path exists between the center points of each pair of states. It is assumed that robots do not exchange nodes during exploration and that the Voronoi boundaries are fixed. Furthermore, 𝒞 is the weight matrix containing the edge metric costs.
The primary goal of constructing this reduced graph and identifying significant states is to optimally disperse the robots over the coverage region by minimizing an appropriate cost function. Because robots may move at different speeds, we formulate the cost in terms of traveling time as
t_(s_p_i,s_q)= d_(s_p_i,s_q)/v_i,
where v_i is the i-th robot's speed and d_(s_p_i,s_q) ∈ 𝒞 is the Euclidean distance between the i-th robot's current state p_i and state q. Given the optimal path from state p_i to a state s, each robot's overall traveling time is the sum of the trip times (costs) along that path. The shortest path between each pair of states is computed using the A* algorithm, and the total time τ along a path 𝒫 = {p_1, p_2, ..., p_n} is
τ_(S_p,S_q) = t_(s_p,s_p_1) +t_(s_p_1,s_p_2)+ ... + t_(s_p_n,s_q) .
After determining the shortest travel time between each pair of states, the field is partitioned into M Voronoi subgraphs g_r_i, i ∈ {1, 2, ..., M}, to distribute the work proportionally among the M robots. Following Lloyd's algorithm, the Voronoi cell g_r_i of the i-th robot is the partition of the area defined as:
g_r_i = {s_q ∈ S | τ_(s_p_i,s_q)≤τ_(s_p_j,s_q), ∀ i≠ j} .
where j indexes the other connected robots.
Using this Voronoi partition, the i-th robot is responsible for covering the states in its sub-graph g_r_i. The total cost is then calculated as
λ_i,(p,g_r)=
∑_j=1^m∑_q∈ g_r_iτ_(s_p_j,s_q)ϕ_q ,
where ϕ_q is the priority value associated with state s_q. As the map is turned into a graph, higher priority values are assigned to target states, while lower priority values are assigned to states that are far from the current state or already explored. The total travel time (cost) is therefore minimized, and an optimal solution is obtained, only when the distance d_(s_p_i,s_q) between robot i and the target state s_q converges to zero. Algorithm <ref> provides the pseudocode of CQLite for efficient map exploration, implemented in a distributed manner on each robot i.
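To illustrate the partitioning step, the following sketch computes travel times over the state graph (using Dijkstra's algorithm in place of A* for brevity) and assigns each state to the robot that reaches it fastest, as in the Voronoi partition above; the data structures and names are our own assumptions.

import heapq

def travel_times(graph, source, speed):
    """Shortest travel times t = d / v_i from `source` to every state in `graph`."""
    times, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue
        for v, dist in graph[u]:           # graph: {state: [(neighbor, distance), ...]}
            t_v = t + dist / speed
            if t_v < times.get(v, float("inf")):
                times[v] = t_v
                heapq.heappush(heap, (t_v, v))
    return times

def voronoi_partition(graph, robot_states, robot_speeds):
    per_robot = [travel_times(graph, s, v) for s, v in zip(robot_states, robot_speeds)]
    cells = {i: [] for i in range(len(robot_states))}
    for q in graph:
        owner = min(range(len(robot_states)), key=lambda i: per_robot[i].get(q, float("inf")))
        cells[owner].append(q)             # state q is assigned to sub-graph g_{r_owner}
    return cells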
§.§ Overall Procedure
In the traditional Q-learning process, the state-action space becomes too large for complex map environments, which results in an unmanageably high number of iterations and a long learning process. To avoid this issue, we propose a multi-robot (lite) cooperation strategy based on distributed Q-learning. The precise stages of this procedure, summarized in the sketch after the list below, are:
* Given its sensing range, a robot is only allowed to perform Q-learning on the visible and unexplored frontiers in the explorable region.
* After exploring a frontier, a robot requests a shared map and merges with the local map to find a new frontier.
* Multiple robots use Q-learning (see Sec. <ref>) to explore these small maps at the same time. Because the sub-map is much simpler than the target map, it only needs a few iterations to complete the learning.
* Each robot updates its Q-table, Q_i at every iteration, based on local Q-value update Eq. (<ref>) and received Q-values from connected robots using the lite cooperation (see Sec. <ref>).
* Finally, based on Q_i, the robot applies optimization to find the policy for the next action (see Sec. <ref>) and efficiently achieves the target map's exploration.
* Robot terminates its exploration objective if all the frontiers are explored.
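An end-to-end sketch of this per-robot loop, composing the earlier illustrative pieces (QLearner, reward, CooperationLayer), is given below; detect_frontiers, navigate, broadcast, and request_map_and_merge stand in for the robot's actual perception, navigation, and communication layers, so this is an outline rather than the released ROS implementation.

def explore(robot_id, learner, coop, detect_frontiers, navigate, broadcast,
            request_map_and_merge, r_s):
    state = None
    while True:
        frontiers = detect_frontiers()                        # within sensing range
        candidates = [f for f in frontiers if f not in coop.explored]
        if not candidates:
            if not request_map_and_merge():                   # ad hoc map merging
                break                                         # no frontiers left: done
            continue
        next_state = learner.best_action(state, candidates)   # greedy frontier choice
        navigate(next_state)
        r = reward(next_state, coop.explored, learner.Q[(state, next_state)], r_s)
        q_new = learner.update(state, next_state, r, next_state, candidates)
        coop.explored.add(next_state)
        broadcast(robot_id, state, next_state, q_new)         # lite cooperation message
        state = next_state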
§.§ Convergence Analysis
We analyze the convergence of the target Q-value update function of Eq. (<ref>). We denote the error ratio δ_t = MSE(Q_t)/E_t(a|π), where MSE(Q_t) is the mean squared error of the Q-table at time t and E_t(a|π) is the average number of steps needed to cover the region by taking action a at time t under policy π.
Theorem 1 (Convergence of Q-values): Using Eq. (<ref>) for Q-value updates, then if 0≤δ_t≤ 1, with probability 1-e, we have the estimated time to reach a given state as:
E_t(a|π) ≤ α_t E_1(a|π) + √(ln(1/e)∑_i=0^t-1ψ_i^2(δ_t-i:t)/2)
Here, ψ_i(δ_t-i:t) = ∏_j=t-i^t-1(j+γδ_j)/∏_j=t-i^t j, α_t = ∏_j=1^t-1(j+γδ_j)/∏_j=2^t j, and γ = 0.95.
Our analysis follows the standard (synchronous) Q-learning argument. In contrast to conventional Q-learning, we replace the current Q_t in the target Q_t(s_t, a_t) with an independent Q-function Q'(s, a), and note that if Q'_t(s,a) = Q^*_source, then 0 ≤ δ_t ≤ 1.
First, we break down the update role into:
Q_t( s_t,a_t)
= (t-1/t) Q_t-1(s,a) + 1/t( r_t + γ max_a' Q'_t-1(s',a')
+ γ max_a' Q^*_t-1(s',a') - γ max_a' Q^*_t-1(s',a') )
Let ε_t(s,a) = Q_t(s,a) - Q^*(s,a) and ξ(s') = γ max_a'(Q^*_t-1(s',a')). Recalling the definition of δ_t, we then have ε_t(s,a)
≤t-1/tε_t-1(s,a) + 1/t( ξ(s') - E_s'ξ(s')) + 1/tγδ_t E_t-1
Since ε_t(s,a) ≤ E_t, applying maximization and unrolling the recursion on E yields:
E_t ≤t-1+ γδ_t/tE_t-1 + 1/t(ξ(s') - E_s'ξ(s'))
≤∏_j=1^t-1(j+γδ_j)/∏_j=2^tj E_1 + ∑_i=1^t-1∏_j=t-i^t-1(j+γδ_j)/∏_j=t-i^tj
×(ξ(s') - E_s'ξ(s'))
= α_t E_1 + ∑_i=1^t-1ψ_i(δ) (ξ(s') - E_s'ξ(s'))
According to the weighted Hoeffding inequality <cit.>, with probability 1-e, we can prove Eq. (<ref>) of Theorem 1.
This convergence result demonstrates the influence of the error ratio on the convergence rate. In other words, learning will go more quickly for our chosen Q value update function. Even though CQLite shares only updated Q-value, it still achieves the required convergence and provides an optimal strategy for robots to explore the map efficiently.
Time Complexity: Assume that the grid factor is k_g (resolution of the grid map on which the grid is divided) and that the target sub-map is k× l in size. The grid map's size is k_g k× k_g l, and the total number of points is k_g^2kl. The operations to find Q-values must be carried out cyclically k_g^2kl times.
The size of CQLite's state space can be represented by kl; however, since CQLite does not perform a merging and searching strategy at every iteration, the length of the Q-value table remains significantly smaller than kl throughout the training process, and the computational complexity of the algorithm is considerably less than O(kl).
§ EXPERIMENTAL VALIDATION
Turtlebot3 robots are used to carry out the exploration plan, implemented using the ROS framework.
The open-source openslam-gmapping[<https://openslam-org.github.io/gmapping.html>] technique of the ROS gmapping package, which is based on odometry data and a particle filter, is used to create 2D maps. The local maps created by each robot are combined to create the global map; feature-based map merging[<http://wiki.ros.org/multirobot_map_merge>] is employed when required. Map merging requires frame conversion between the local map frames, so the coordinate transformations between the robots must be calibrated before combining local maps. In the current work, the global frame is taken to be one robot's frame, and the relative positions and orientations of the robots are initialized to a known state.
The ROS move_base[<https://github.com/ros-planning/navigation>] package allows the robot to move toward the goal point while safely avoiding obstacles and other robots. The Dijkstra algorithm for global path planning and the Dynamic Window Approach (DWA) for local dynamic obstacle avoidance are both implemented in this package. In this study, the units of time and distance are seconds and meters, respectively.
Simulation Setup:
A closed simulation environment based on the ROS Gazebo simulator with two indoor template environments is used: the Gazebo's house world (≈ 250m^2 area) and the Amazon AWS bookstore world (≈ 100m^2 area). The robots may quickly finish the map exploration in a closed environment. Each robot has a laser scanner to gather data about its surroundings. The robot's trajectory is determined based on the fusion of wheel odometry and laser scan information.
The following parameters are used in the experiments in the simulated environment. The laser scanner's range and r_i,s are set to 15 m and 1 m, respectively. Additionally, the robot's maximum linear and angular speeds in the simulation are set to 0.5 ms^-1 and π/4 rads^-1, respectively. The global detector's growth factor η and the local detector's growth factor η_1 in the RRT detector are set to 5 m and 3 m, respectively. The weight parameters are set to α = 0.6, γ = 0.95, and λ_i = 2 per 1 m step. Each experiment was run for ten trials, and average results are reported.
We evaluate the performance in the following three scenarios to validate the robustness and scalability of the proposed solution: 1) 3 robots in the house world, 2) 3 robots in the bookstore world, and 3) 6 robots in the bookstore world.
§.§ Evaluation Metrics
The proposed CQLite and the methods put forward by RRT <cit.> and DRL <cit.> are compared in our experiments.
We use the below metrics for a comprehensive evaluation:
* Mapping Time: The amount of time spent mapping is a gauge of the efficiency of the exploration process;
* Path Length: The combined length of all robots' trajectories until exploration converges. The total trajectory length gives an idea of the robots' energy usage while indirectly reflecting the effectiveness of their exploration;
* Exploration Percentage: The percentage of the generated map with time elapsed;
* Overlap Percentage: The percentage of the overlap of the explored map with time elapsed;
* Map SSIM: Structural similarity index measure of generated maps compared with ground truth map to measure map correctness;
* CPU Utilization: The maximum % consumption of the processor of a robot throughout the trajectory;
* Memory Consumption (RAM): The maximum occupied memory by the robot throughout the trajectory;
* COM payload: The size of the data communicated by a robot averaged over iterations.
§ RESULTS AND DISCUSSION
We have reported each approach's average performance after ten trials in each condition to reduce the measurement noise and analyze the statistical details.
A sample of the mapping outcomes of the compared approaches, together with the trajectories followed by three robots in the simulated environment, is shown in Fig. <ref>; the generated maps also illustrate map correctness. The outcomes are reported in terms of average mapping time, distance traveled, and mapping efficiency. Mapping efficiency is determined by comparison with the original map, and the reported percentages are normalized by the Gazebo world dimensions.
Table <ref> provides a comparative analysis of different methods on all the performance metrics and the statistical data from the results. It also lists the theoretical (algorithmic) computational complexity.
Fig. <ref> shows the comparison of the approaches on the three key performance metrics: computation, communication, and exploration. Fig. <ref> shows a zoomed-in view of the mapping process at three closely spaced time instances.
The proposed CQLite reliably outperforms other strategies on the key performance metrics. CQLite covers a larger area in less time, improving mapping efficiency by 10% while traveling 22 fewer meters than RRT in the experiment. In three-robot scenarios, CQLite was more effective than DRL and RRT, with 9% and 8% shorter mapping times, respectively. Its path length was also less than DRL's by about 38%. The advantages became even more apparent when the trial involved six robots. While the mapping time was around 26% faster than DRL and 7% faster than RRT, the path length was about 38% shorter than with DRL.
CQLite had an exploration percentage that was 4% greater than DRL in the three-robot scenario. This advantage persisted in the six-robot case, where CQLite's exploration percentage was almost 4% higher than DRL's while maintaining the lowest overlap percentage. The stability and effectiveness of CQLite in multi-robot exploration tasks are highlighted by these results from various experiments.
Communication-wise, CQLite's strategy is more effective: contrary to RRT and DRL, which exchange locally explored maps continually, CQLite showed a significant reduction of more than 80% in the communication payload (average data size) shared between the robots.
Notably, CQLite continues to explore at a constant rate even after reaching 60% coverage, in contrast to RRT, which slows down. This dominance also carries over to the three-robot bookstore world scenario, where CQLite outperformed DRL and RRT in terms of reduced mapping time and shorter travel distances.
The results validate the practicality of CQLite, which surpasses DRL and RRT on most of the performance metrics in all scenarios. Further demonstrating its efficacy and applicability to resource-constrained robots, CQLite maintained lower RAM, CPU, and communication payload usage. CQLite also handles a range of multi-robot exploration scenarios while offering improved map quality, as indicated by its higher Map SSIM scores, and is particularly appropriate in situations with significant communication and resource constraints.
The proposed CQLite covers the largest possible area in less time than RRT and is comparable to DRL. Specifically, CQLite achieves 10% higher mapping efficiency while traveling 22 m less than RRT. CQLite consumes approximately half the memory and less than half the communication payload size of DRL. However, the memory consumption of CQLite is still comparable to that of RRT, as the RRT approach applies pruning techniques to reduce memory consumption. We also compared the communication and computation overhead of CQLite with RRT and DRL throughout the exploration. CQLite shows a few spikes of high CPU utilization for map merging when other maps are received ad hoc; in contrast, RRT shows continuously high CPU utilization as it keeps applying map merging and randomly growing exploration trees. Similarly, DRL keeps optimizing its policy and merging maps during exploration.
Regarding communication, RRT and DRL keep sharing locally explored maps, whereas CQLite shares a map only when requested, which appears as a few peaks in the communication cost plot of Fig. <ref>. Furthermore, our results show that the exploration percentage of CQLite is comparable with DRL and significantly better than RRT: CQLite achieved 8% and 4% higher exploration percentages than RRT and DRL, respectively. After 60% exploration, RRT becomes slow, while DRL and CQLite continue exploring at the same rate, ending at 91% and 95% exploration, respectively.
Overall, the proposed CQLite shows significant potential to outperform the state-of-the-art techniques and creates promising avenues to research further. It applies well to resource-limited and communication-limited applications.
In the scenario of three robots in the bookstore world, the proposed CQLite algorithm consistently beats the RRT and DRL techniques across the evaluation metrics. CQLite has faster exploration capabilities, as evidenced by its mapping time being roughly 9% and 8% less than DRL and RRT, respectively. Additionally, CQLite travels a path length about 38% shorter than DRL, indicating higher robot movement efficiency. While the overlap percentage achieved by CQLite is about 40% less than RRT, its exploration percentage is about 4% higher than DRL, indicating more thorough exploration with fewer overlapped areas. Additionally, CQLite shows superior performance in Map SSIM, with a 25% improvement over RRT, and exhibits better efficiency in computational resources, with approximately 48% and 33% less CPU and RAM utilization, respectively, compared to DRL. Compared to DRL, the communication payload is decreased by around 78%.
CQLite continues to outperform the other two approaches when evaluated with six robots in the bookstore environment. The reduced mapping time, roughly 26% faster than DRL and 7% faster than RRT, demonstrates the scalability of the proposed solution in a multi-robot scenario. The path length traveled is also reduced, about 38% less than DRL, which points to an improved exploration strategy. CQLite's exploration percentage is around 4% higher than DRL's while keeping the lowest overlap percentage, a decrease of about 46% compared to RRT. Better map quality is indicated by CQLite's higher Map SSIM score, an improvement of about 27% over RRT. With reductions of about 45%, 56%, and 85%, respectively, compared to DRL, the CPU utilization, RAM usage, and communication payload remain the lowest among the examined approaches, further supporting the effectiveness and scalability of CQLite in multi-robot exploration.
A strong proof of CQLite's robustness, efficiency, and applicability comes from the performance results of CQLite in the three and six-robot bookstore world scenarios. CQLite is resilient in managing a variety of multi-robot exploration scenarios, as evidenced by its ability to surpass RRT and DRL in terms of mapping time regularly, path length traveled, exploration percentage, and overlap percentage. The effectiveness of CQLite is further demonstrated by the large reductions in CPU use, RAM consumption, and communication payload compared to competing approaches, making it a more computationally and communication-friendly solution for practical applications. CQLite's capacity to scale to handle scenarios with three and six robots demonstrates its usefulness for tackling exploration jobs with various team sizes.
The experimental results demonstrate the generalizability and practicality of CQLite for real-world applications. While displaying performance similar to DRL, CQLite surpasses RRT in mapping effectiveness, area coverage, and travel time. In addition, CQLite uses less memory and has a smaller communication payload than DRL. Although CQLite uses about the same amount of RAM as RRT, it is more efficient overall because of its lower communication and processing overhead. Further supporting its efficacy, CQLite's exploration percentage is noticeably higher than RRT's and marginally superior to DRL's. These experimental findings, together with the comparisons on the three main performance indicators (computation, communication, and exploration), justify the CQLite technique for practical applications, particularly when communication and resource availability are constrained. The positive results of CQLite also open fresh directions for further study.
One of the limitations of the proposed CQLite is that it relies on wireless communication, which can be intermittent or harsh in specific real-world situations. In such scenarios, a communication-aware strategy can be integrated with our approach to tolerate changes in communication channels.
§.§ Real Robot Demonstration
Two Turtlebot3 robots are used for real-world map exploration using the ROS Noetic framework. Robots can share information about the odometry and map output of their respective SLAM by subscribing to specific topic messages in ROS. Experiments are performed in a small-scale lab setting of 10m^2.
We tested CQLite under two scenarios: 1) a simple setup where one robot can cover the entire map without the other robot needing to move; 2) a complex setup where obstacles obstruct both robots in their initial positions, necessitating both robots' contributions (see Fig. <ref>). In both scenarios, the robots successfully explored the entire map, as can be seen in this video: <https://youtu.be/n3unL1nuieQ>. We believe this provides further evidence for the practicality of the proposed exploration approach.
§ CONCLUSION
This paper proposed CQLite, a distributed Q-learning strategy for multi-robot exploration designed to avoid the excessive communication and computation complexity and cost of learning-based systems. By sharing only updated Q-values and mapping data over the network, CQLite reduces communication and processing overhead and performs well in practical settings. Experimental results showed that it ensures comprehensive coverage, quick convergence, and lower computing costs compared to the popular RRT and DRL techniques; the same mapping efficiency was attained with only half the CPU load and 80% less communication overhead. In the future, we will examine the generality of CQLite's reward to cope with various multi-robot applications.
§ APPENDIX
Proposition 1 (Q-table Update Efficiency): By sharing and appending only the updated Q-values and the newly discovered frontiers to the local Q-table, the CQLite exploration method reduces the communication and computation cost of exploration to 1/n of the cost of the SOTA approaches, where n is the total number of possible states (the size of the Q-table).
The CQLite approach reduces the size of the Q-table and the amount of data that needs to be transmitted between robots by sharing and appending only the updated Q-values and recently found frontiers to the Q-table.
We prove this by comparing the data needs with that of the SOTA, where the full Q-table is shared between robots.
The shared Q-value for a given state-action combination (i,j) in the Q-table will be Q_i,j.
In contrast to the SOTA, which updates every value at each iteration, CQLite updates a given Q-value only once during the whole exploration.
Compared to sharing and updating the whole Q-table, the communication and computing costs are thus reduced to 1/n of their original value.
The update cost of CQLite for a Q-table of size n is n, whereas the update cost of the SOTA is n^2; hence, the cost-reduction relation can be written as:
C_i,CQLite = 1/n C_i,SOTA,
where C_i,SOTA is the communication and calculation cost of updating and sharing the whole Q-table in SOTA exploration techniques, and C_i,CQLite is the communication and computation cost of the CQLite Exploration method.
To further determine the effectiveness of the Q-table update in CQLite, the cost of sending the matrix Q over a network can be used to indicate the cost of sharing and updating the whole Q-table and can be stated as follows:
C_i,SOTA = κ·∑_j=1^n|Q_i,j| ,
where |Q_i,j| is the absolute value of the Q-value of state j for robot i, and κ is a constant that denotes the cost of sending one unit of data across the network. The SOTA requires all Q-values for policy determination; hence, all Q-values are shared to update the Q-table at every iteration.
The CQLite Exploration approach reduces the size of the matrix and the quantity of data that needs to be transferred by sharing and appending only the updated Q-values and newly found frontiers. Let Q' be the updated matrix that only includes the new frontiers and updated Q-values. This modified matrix's transmission cost can be expressed as
C_i,CQLite = κ·∑_j=1^n|Q'_i,j| .
Since Q' is a subset of Q, it follows that ∑_j=1^n |Q'_i,j| ≤ ∑_j=1^n |Q_i,j|, and therefore:
C_i,CQLite≤1/n C_i,SOTA
This proves that the CQLite exploration approach is more efficient regarding Q-table updating than the SOTA exploration methods like RRT and DRL.
Proposition 2 (Mapping Efficiency): CQLite performs map sharing and merging with probability P(s_t ∩ ES_t), so the number of merge operations it requires is much smaller than the total number of iterations (≪ iterations) needed by relevant SOTA exploration approaches (e.g., RRT and DRL) for maximum exploration.
Here, P(s_t ∩ ES_t) is the probability of overlap between the current state s_t and the states ES_t already explored by robot i and the other robots, and iterations is the total number of iterations carried out by the algorithm.
The probability of overlap P(s_t ∩ ES_t) between the current state s_t of robot i and the previously explored states ES_t by other robots is used to determine if map sharing and merging will take place in the CQLite Exploration technique. This map merging and sharing aims to reduce the number of iterations and steps the algorithm must perform.
In the CQLite exploration approach, the frequency with which the algorithm updates the map by merging shared maps is:
f_CQLite = P(s_t ∩ ES_t)·iterations
where f_CQLite is the frequency of map merging carried out by the CQLite exploration method, and iterations is the mapping (merge) frequency of SOTA exploration methods such as RRT and DRL.
The probability P(s_t ∩ ES_t) can be expressed using the product rule of conditional probability as follows:
P(s_t ∩ ES_t) = P(ES_t | s_t) · P(s_t)
where P(ES_t | s_t) is the conditional probability of the previously explored states ES_t given the current state s_t, and P(s_t) is the probability of the current state s_t.
P(s_t) can be represented as a uniform distribution over the state space, assuming that the exploration process is a random walk, with:
P(s_t) = 1/n ,
where the state space's overall state count is n.
The frequency of occurrence of the current state s_t in the previously investigated states ES_t can be used to estimate the conditional probability P(ES_t | s_t). If the frequency with which the present state s_t occurs in the previously studied states ES_t is f_s_t, then:
P(ES_t | s_t) = f_s_t/n_e ,
where n_e is the total number of states in the already explored states ES_t.
Substituting the above expressions into the equation for P(s_t ∩ ES_t) gives:
P(s_t ∩ ES_t) = f_s_t/n_e · 1/n = f_s_t/(n n_e)
Over the total number of iterations, CQLite updates the map only a fraction f_s_t/(n n_e) of the time, with f_s_t < n n_e, and n n_e equals the number of iterations in the case where each state is visited at each iteration; hence the number of map merges satisfies f_CQLite ≪ iterations. This derivation shows that CQLite is more efficient, as its update frequency (frequency of map merging) for map sharing and merging is much lower than that of SOTA exploration methods such as RRT and DRL, which merge at every iteration.
Both propositions signify that the CQLite exploration method is more efficient in computation, communication, and mapping operations compared to the state-of-the-art RL-based multi-robot exploration approaches.
|
http://arxiv.org/abs/2307.03149v1
|
20230706172408
|
Joint evolution of a Lorentz-covariant massless scalar field and its point-charge source in one space dimension
|
[
"Lawrence Frolov",
"Samuel Leigh",
"A. Shadi Tahvildar-Zadeh"
] |
math.AP
|
[
"math.AP",
"math-ph",
"math.MP",
"78A35, 35A21, 70S10"
] |
Joint evolution of a Lorentz-covariant massless scalar field and its point-charge source in one space dimension
Lawrence Frolov, Samuel Leigh, A. Shadi Tahvildar-Zadeh
=========================
In this paper we prove that the static solution of the Cauchy problem for a massless real scalar field that is sourced by a point charge in 1+1 dimensions is asymptotically stable under perturbation by compactly-supported incoming radiation. This behavior is due to the process of back-reaction. Taking the approach of Kiessling, we rigorously derive the expression for the force on the particle from the principle of total energy-momentum conservation. We provide a simple, closed form for the self-force resulting from back-reaction, and show that it is restorative, i.e. proportional to negative velocity, and causes the charge to return to rest after the radiation passes through. We establish these results by studying the joint evolution problem for the particle-scalar field system, and proving its global well-posedness and the claimed asymptotic behavior.
§ INTRODUCTION AND STATEMENT OF MAIN RESULTS
§.§ Background
Consider the dynamics of a vibrating point charge and the electromagnetic field it is sourcing. As the particle oscillates it radiates electromagnetic waves which propagate away from the charge at the speed of light. These waves carry both energy and momentum, so to conserve the total energy-momentum of the system, the particle must undergo some dampening through an interaction with its own field. This dampening self-interaction is one example of radiation-reaction, or back-reaction, the process in which charge distributions source fields which then “re-act" on the charges.
As it is well-known, attempting to study the process of back-reaction in the framework of Maxwell-Lorentz electrodynamics leads to a fundamental inconsistency. The Lorentz force needed to calculate the path of the particle requires us to evaluate the electromagnetic field along the particle's path, and yet the field sourced by the particle is undefined precisely on this path! Resolving the radiation-reaction problem has been an open problem for more than a century, and has been worked on by many notable figures such as Poincaré <cit.> and Dirac <cit.>. A gripping account of this endeavor can be found in <cit.>, which also includes an excellent review of the main approach mathematicians have taken to successfully resolve the inconsistency, namely smearing the point charge into a smooth charge distribution. In this approach however, it is not possible to take the smearing away, once it is introduced. The introduction of smearing also complicates the task of keeping the system of equations fully Lorentz-covariant, leading to many of the results obtained thus far being restricted to non-relativistic motions of the particle. (See also <cit.>, Section 2.3, for more recent results in this direction.)
Previous techniques of studying these field-particle systems directly, either without smearing or with smearing that is put in and then taken away, have left something to be desired. Outlined in a recent review <cit.>, they include an infinite bare-mass renormalization which is mathematically ill-defined, or an ad-hoc averaging of the fields in a neighborhood of the charge. A breakthrough occurred in 2019 when, following up on the work of Poincaré <cit.>, Kiessling <cit.> showed that postulating energy-momentum conservation of the field-particle system yields a unique and admissible force law, provided that the field's momentum density is locally integrable around the particle. Although this integrability assumption rules out the classical vacuum law of Maxwell, given by E=D, B=H, it admits others, such as the Bopp-Landé-Thomas-Podolsky (BLTP) vacuum law <cit.>.
In three dimensions, the Maxwell electromagnetic field sourced by a point charge is not only too singular to evaluate along the charge's path, but also its energy and momentum densities are not locally integrable around the source. By contrast BLTP, which is a higher order modification of Maxwell, does have the regularity necessary to derive a unique force law from the assumption of energy-momentum conservation. In the context of BLTP, Kiessling and Tahvildar-Zadeh <cit.> have successfully applied this force law to prove local well-posedness of the joint field-particle dynamics, and Hoang et al. <cit.> proved global existence for the scattering problem of a single charge interacting with a smooth potential. However, the complex form of the BLTP energy-momentum conserving force law has so far resisted a clear analysis of the asymptotic behavior of the particle.
This paper studies the dynamics of a relativistic point charge coupled to a massless scalar field on flat 1+1 dimensional space-time. Working in one space dimension provides us with the regularity needed to derive the energy-momentum conserving force law without making any higher order modifications to the theory, thus allowing a much simpler analysis to be performed. We choose to study scalar charges because electromagnetism is not viable in one space dimension. Our model closely resembles the one studied in <cit.>, with the exception that our dynamics are fully relativistic. To isolate the effects of the interaction between the scalar charge and its own field, we will be focusing on the case of a single particle perturbed by
scalar radiation.
§.§ Main Results
Taking Kiessling's approach, we show that the 2-force which acts on a single charged particle at z=(z^0,z^1)∈ℝ^1,1 is given by
F^μ(z^0,z^1)=-[n_ν T^μν_S (z^0,x^1)]_x^1=z^1,
where n_μ is the unit covector that is annihilated by the particle's two-velocity, T_S^μν is the energy-momentum tensor of the field, and [·]_x^1=z^1 denotes the jump in space at z^1. The force law given by equation (<ref>) is derived from the principle of energy-momentum conservation
∂_ν T_p^μν+∂_ν T_S^μν=0,
where T_p^μν is the energy-momentum tensor of the particle, which is concentrated on the particle's world-line, and the derivatives are taken weakly, to account for singularities in the two energy tensors.
Guided by Weyl's “agens theory" of matter <cit.>, according to which the world-lines of matter particles are simply the locus of singularities of the underlying spacetime and/or the fields defined on that spacetime, we are led to the study of a joint evolution problem in which the path of the particle z(τ) appears as a jump discontinuity in the derivatives of the scalar field U, with the jumps showing up inside the force term in the equation for z̈. The equation of motion for a scalar field with a point charge source is
η^μν∂_μ∂_ν U=a ∫δ^(2)(x-z(τ))dτ
where τ and a are the particle's proper time and scalar charge respectively. We consider a stationary field-particle system which is perturbed by some incoming scalar “radiation." The joint evolution problem corresponding to this is given by the initial value problem
{[ η^μν∂_μ∂_ν U = a∫δ^(2)(x-z(τ))dτ; U(0,x^1) = -a/2|x^1|+V_0(x^1); ∂_0 U(0,x^1) = V_1(x^1), ].
dz^μ/dτ=u^μ
dp^μ/dτ=F^μ,
where F is as in (<ref>). We work in the fixed Lorentz frame where the particle remained stationary at the origin for all time x^0<0. The motion of the charge is perturbed by incoming radiation, which is represented by V_0, V_1 in the initial data for U.
We now state the first main result of our paper:
For any set of particle parameters with positive bare mass and non-zero real scalar charge, and for any set of small, smooth functions V_0(x^1), V_1(x^1) compactly supported away from the origin, the joint initial value problem given by (<ref>) and (<ref>) admits a unique, global-in-time solution.
To prove this, we explicitly compute the forces in equation (<ref>), and from that we extract a closed form for the self-force that scalar point charges exert.
We show that the scalar self-force is restorative and proportional in magnitude to the particle's velocity:
d p^1/dτ=F^1_self+ F^1_ext
where
F^1_self =-a^2/2u^1,
while F_ext consists of the standard external force terms. We will now state the second result of our paper.
Let z, U satisfy the joint IVP given by (<ref>) and (<ref>) for a single scalar particle, where V_0 and V_1 satisfy the same conditions as in (<ref>). Then for every ϵ>0, there exists a T_ϵ>0 such that |u^1(x^0)|<ϵ for all x^0>T_ϵ.
In other words, a stationary charged particle perturbed by some compactly supported radiation asymptotically returns to rest. In section 2 we derive the equations of motion for a scalar field from the principle of stationary action, and derive the force law from the assumption of energy conservation. In section 3 we present a proof of the global well-posedness result for the joint evolution of a single scalar particle and its field. In section 4 we study the asymptotics of our joint evolution, and provide a proof of the asymptotic stability of the stationary solution.
§ DERIVATION OF EQUATIONS OF MOTION
§.§ Field Equations From Principle of Stationary Action
We derive the equations of motion for our scalar field using the principle of stationary action. The action for a point charge coupled to a massless scalar field U defined on 1+1 dimensional flat space-time[We work with signature η_μν=diag(1,-1).] is given by
S[U,z]=∫ℒ√(-η) dx^2,
where the Lagrangian density ℒ is defined via
ℒ(x):= 1/√(-η)∫ -(m̃-aU(z))√(η_μνż^μż^ν)δ^(2)(x-z)dθ
+ 1/2η^μν∂_μ U ∂_ν U(x).
Here z^μ, m̃, and a represent the particle's space-time position, bare mass, and scalar charge respectively, while θ is an arbitrary parameterization of the particle's worldline.
Extremizing the action by taking variations with respect to U returns
η^μν∂_μ∂_ν U= a ∫√(η_μνż^μż^ν)δ^(2)(x-z)dθ= a∫δ^(2)(x-z(τ))dτ,
where τ is the particle's proper time. Equation (<ref>) shows that the point charge is acting as a singularity in the derivatives of the scalar field U. Given a world-line for our particle along with some specified initial data for U, we could solve for the evolution of U by solving the associated initial value problem. However, we are interested in studying the joint evolution problem of our field-particle system. So the motion of the charge must be in accordance with all the scalar forces that act on it. We may naively derive the forces acting on our charges by taking variations of the action with respect to z^μ. This leads to a familiar law of motion that is inconsistent with the field equations, namely:
d p^μ/d τ=-a∂^μ U(z),
where p is the dynamical momentum defined by
p^μ :=(m̃-aU(z))u^μ, u^μ:=dz^μ/dτ.
The inconsistency arises since the force law (<ref>) derived from the principle of stationary action requires one to evaluate the first derivatives of the scalar field along the particle's world-line. However, the field equations derived from the same principle imply that the first derivatives of the field are undefined precisely along this world-line. So there can be no joint evolution of the field-particle system which extremizes the action.
In non-rigorous settings, one is typically taught to ignore the ill-defined self-interaction terms. But the dynamics given by ignoring singular self-forces is off-shell. In particular, it does not conserve the total energy-momentum of the system, as we will show later. The problems we face here are not unique to scalar point particles, and we take inspiration from the work of Kiessling on force laws for electromagnetic particles in three space dimensions <cit.> to derive a rigorous force law for our scalar point charges.
We conclude this section by defining the dynamical mass m which depends on the field U evaluated along the particle's world-line
m(x^0):=p^μ/u^μ=m̃-aU(x^0,z^1(x^0)),
where
z^1(x^0):=z^1(τ^-1(x^0)),
and τ^-1(x^0) is the unique value for which z^0(τ^-1(x^0))=x^0.
The dynamical mass and momentum will frequently appear throughout our calculations, taking the place of mass and momentum wherever one would expect the latter two to appear.
§.§ Force Law From Conservation of Energy-Momentum
In this section we will derive our force law from the assumption of the local conservation of energy-momentum.
From the action (<ref>) we derive the following energy density-momentum density-stress tensor (energy tensor, for short) for our system:
T^μν(x)= T_p^μν(x)+T_S^μν(x),
where
T_p^μν:=m(x^0)∫ u^μ u^νδ^(2)(x-z(τ))dτ, T_S^μν:=(∂^μ U)(∂^ν U)-1/2η^μν(∂_α U)(∂^α U).
The total energy tensor is singular along the particle's world-line, so we will take conservation of energy-momentum in the weak sense of distributions to derive our force law.
Suppose U is such that
∀ x^0≥ 0, lim_ϵ→ 0∫_z^1(x^0)-ϵ^z^1(x^0)+ϵT^0ν_S(x^0,x^1) dx^1=0.
(This requires T_S^0 ν be locally integrable[While this condition does not hold for fields sourced by scalar point charges in three space dimensions, it holds in one space dimension.] around z.) Then, assuming conservation of energy-momentum in the weak sense of
∫_Ω∂_μ T^μ 0dV=0=∫_Ω∂_μ T^μ 1dV,
for all tubular regions Ω around the particle's world-line, yields the unique force law
d p^ν/d τ=-[n_μ T^μν_S(z^0,x^1)]_x^1=z^1,
where n_μ=(-u^1,u^0) is the space-like unit covector annihilated by u^μ.
Fix ν. By the definition of weak derivatives,
∫_Ω∂_μ T^μν dV=∫_∂Ω T^αν N_α dS,
where N are the unit normal vectors to our boundary ∂Ω and dS is the surface element induced by η. The boundary ∂Ω consists of four parts. Two space-like curves given by
T̅_1={x∈ℝ^1,1| z^1(T_1)-ϵ<x^1<z^1(T_1)+ϵ, x^0=T_1 }
T̅_2={x∈ℝ^1,1| z^1(T_2)-ϵ<x^1<z^1(T_2)+ϵ, x^0=T_2 }
and two time-like curves given by
C̅_1={x∈ℝ^1,1| T_1<x^0<T_2, x^1=z^1(x^0)-ϵ}
C̅_2={x∈ℝ^1,1| T_1<x^0<T_2, x^1=z^1(x^0)+ϵ}.
The unit normal covectors for these boundaries are given by
N^T_1_μ=-[ 1; 0 ], N^T_2_μ=[ 1; 0 ], N^C_1_μ=-[ -∂ z^1/∂τ; ∂ z^0/∂τ ],
N^C_2_μ=[ -∂ z^1/∂τ; ∂ z^0/∂τ ].
Notice that N_μ^C_2=-N_μ^C_1=n_μ is the unit covector annihilated by u^μ, up to a sign.
Since T_p^μν has no support on the boundaries C̅_1 and C̅_2 their contribution to the surface integral vanishes, and we are left with
∫_∂Ω T^αν_p N_α dS=∫_z^1(T_2)-ϵ^z^1(T_2)+ϵT^0 ν_p dx^1-∫_z^1(T_1)-ϵ^z^1(T_1)+ϵT^0 ν_p dx^1.
Recall that
T^0 ν_p= m(x^0) ∫ u^0 u^νδ^(2)(x-z(τ))dτ=p^νδ(x^1-z^1).
Plugging this into (<ref>), we arrive at
∫_∂Ω T^αν_p N_α dS=p^ν(T_2)-p^ν (T_1)=∫_τ_1^τ_2d p^ν/d τdτ.
where z^0(τ_i)=T_i. So our assumption of weak conservation of energy-momentum grants us the following equation
∫_τ_1^τ_2d p^ν/d τdτ=-∫_∂Ω T_S^αν N_α dS,
which states that the total change in the particle's momentum is equal to the total energy flux of the field through a tube around the world-line of the particle. The L.H.S of this equation has no dependence on the width of our tubular region ϵ, so it follows that the R.H.S should also have no dependence on ϵ. Taking the limit as ϵ goes to zero returns
∫_τ_1^τ_2d p^ν/d τdτ=-lim_ϵ→ 0∫_∂Ω T_S^αν N_α dS.
By our assumptions two of the boundary contributions vanish:
lim_ϵ→ 0∫_T̅_2 T^αν_S N_α dS=lim_ϵ→ 0∫_z^1(T_2)-ϵ^z^1(T_2)+ϵT^0ν_S dx^1=0,
and
lim_ϵ→ 0∫_T̅_1 T^αν_S N_α dS=lim_ϵ→ 0∫_z^1(T_1)-ϵ^z^1(T_1)+ϵ-T^0ν_S dx^1=0.
When we parameterize the remaining time-like boundaries by τ, we obtain
∫_C̅_2 T^αν_S N^C̅_2_α dS=∫_τ_1^τ_2N_α^C_2T^αν_S(z^0(τ),z^1(τ)+ϵ) dτ,
and
∫_C̅_1 T^αν_S N^C̅_1_α dS=-∫_τ_1^τ_2N_α^C_2T^αν_S (z^0(τ),z^1(τ)-ϵ) dτ.
Plugging these back into equation (<ref>) returns
∫_τ_1^τ_2d p^ν/d τdτ=-∫_τ_1^τ_2[n_μ T^μν_S (z^0,x^1)]_x^1=z^1 dτ.
Since this holds for all τ_1, τ_2, Equation (<ref>) immediately follows.
§ WELL-POSEDNESS OF JOINT EVOLUTION
In this section we present the global well-posedness result for a scalar particle perturbed by some incoming radiation. We work in the fixed Lorentz frame where the scalar charge was stationary and remained at the origin for all time x^0<0. The incoming radiation will appear as compactly supported initial data in the IVP for U.
§.§ Joint IVP Set Up
We begin by setting up the joint initial value problem for our field-particle system. The Cauchy problem for the scalar field U is given by
{[ η^μν∂_μ∂_ν U = a∫δ^(2)(x-z(τ))dτ; U(0,x^1) = -a/2|x^1|+V_0(x^1); ∂_0 U(0,x^1) = V_1(x^1), ].
where V_0 and V_1 are compactly supported away from the origin, and represent the incoming radiation. -a/2|x^1| is included in the initial data to represent the field that was sourced by the stationary charge. Since our evolution equation is linear, it is natural to split the Cauchy problem into three parts by setting U=V+U_stat+U_source where
{[ η^μν∂_μ∂_ν V = 0; V(0,x^1) = V_0(x^1); ∂_0 V(0,x^1) = V_1(x^1) ]. {[ η^μν∂_μ∂_ν U_stat = 0; U_stat(0,x^1) = -a/2|x^1|; ∂_0 U_stat(0,x^1) = 0 ].
{[ η^μν∂_μ∂_ν U_source = ∫ aδ^(2)(x-z(τ))dτ; U_source(0,x^1) = 0; ∂_0 U_source(0,x^1) = 0 ].
The solutions to (<ref>) are given by d'Alembert's formula while the solution to (<ref>) is given by Duhamel's principle. U_source can be written as an integral equation which depends on the trajectory of the charge. From these solutions it is easy to calculate the R.H.S of our force law:
dp^1/dτ= -[n_μ T_S^μ 1(x^0,x^1)]_x^1=z^1=-a^2/2u^1+a∂_1 V(x^0,z^1).
(See <cit.> for a similar calculation.)
The force law derived from the conservation of energy is similar to the one derived from the principle of stationary action. However, the singular “self-force" term has been determined rather than ignored, and the expression for it guarantees the conservation of the system's total energy-momentum.
Written in terms of the dynamical mass and momentum, the initial value problem for the charge's trajectory is
d z^1/dx^0 = p^1/√(m^2+(p^1)^2),
d p^1/d x^0 = -a^2/2p^1/√(m^2 +(p^1)^2)+am/√(m^2+(p^1)^2)∂_1 V(x^0,z^1),
with
z^1(0)=0, p^1(0)=0.
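As a purely numerical illustration of the reduced dynamics (not part of the paper's analysis), the following Python sketch integrates the system above with a forward-Euler step. It assumes V_1 = 0, uses a small Gaussian bump for V_0 as a smooth stand-in for the compactly supported radiation, and uses the simplification U_stat(x^0, z^1(x^0)) = -a x^0/2, valid while |z^1(x^0)| ≤ x^0; all parameter values are arbitrary. The output shows the velocity being kicked when the pulse arrives near x^0 ≈ 5 and then decaying back towards rest.

```python
import numpy as np

# Particle parameters (arbitrary choices).
m_bare, a = 1.0, 0.8

# Incoming radiation: V_1 = 0 and a small Gaussian bump for V_0 (a smooth
# stand-in for the compactly supported data assumed in the text).
eps, x_c, sigma = 0.05, 5.0, 0.5
V0  = lambda x: eps * np.exp(-(x - x_c) ** 2 / (2.0 * sigma ** 2))
dV0 = lambda x: -eps * (x - x_c) / sigma ** 2 * np.exp(-(x - x_c) ** 2 / (2.0 * sigma ** 2))

def V(t, x):        # d'Alembert solution of the homogeneous problem with data (V_0, 0)
    return 0.5 * (V0(x + t) + V0(x - t))

def dV_dx(t, x):
    return 0.5 * (dV0(x + t) + dV0(x - t))

# Forward-Euler integration of (z^1, p^1, W); along the path U_stat(t, z) = -a t / 2.
dt, T = 1.0e-3, 30.0
z = p = W = 0.0
for k in range(int(T / dt) + 1):
    t = k * dt
    m = m_bare + 0.5 * a ** 2 * t - a * V(t, z) - a * W   # dynamical mass
    gamma = np.sqrt(m ** 2 + p ** 2)
    v = p / gamma                                          # dz^1/dx^0
    if k % int(5.0 / dt) == 0:
        print(f"x^0 = {t:5.1f}   z^1 = {z:+.4f}   velocity = {v:+.6f}")
    force = -0.5 * a ** 2 * v + a * (m / gamma) * dV_dx(t, z)
    z += dt * v
    p += dt * force
    W += dt * 0.5 * a * (m / gamma)
```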
We will now state the main result of this paper.
For any set of particle parameters {m̃>0,a∈ℝ∖{0}}, and for any set of sufficiently small smooth functions V_0(x^1), V_1(x^1) compactly supported away from x^1=0, the joint initial value problem given by (<ref>), (<ref>), and (<ref>) admits a unique global-in-time solution.
The smallness condition taken on the initial data is ||V_0||_L^∞ +1/2||V_1||_L^1≤m̃/|a|, and is placed to ensure that the mass m(x^0) is bounded from below.
§.§ Proof of Well-Posedness
Strategy for the proof: Imagine that instead of solving for the joint evolution of the particle and the field, we solve for the dynamics of one when the other is given. For a given charge trajectory one can solve for U by plugging the trajectory into equation (<ref>). Conversely, given the dynamics of the field one can solve for a test charge's trajectory via (<ref>). Consider what happens if one were to recursively define a sequence of trajectories and field solutions by using the ith trajectory to solve for the (i+1)th field solution via (<ref>), and then using the (i+1)th field to solve for the (i+1)th trajectory via (<ref>), ad infinitum. If this sequence converges, it would converge to a trajectory which sources the same field solution that guides it. This is the key idea behind our proof, and we will see that this process does converge to a unique joint evolution.
In this section we provide a proof of the well-posedness of the joint IVP. We will do this by transforming our joint IVP into a set of integral equations. The solution to (<ref>) can be written in the form of an integral equation using Duhamel's principle
U_source(x^0,x^1) =1/2∫_0^x^0∫_x^1-(x^0-t)^x^1+(x^0-t)a∫δ^(2)(x-z(τ))dτ ds dt
=1/2∫_0^x^0a/u^0(t)χ_[x^1-(x^0-t),x^1+(x^0-t)](z^1(t))dt.
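For concreteness, the integral above can be evaluated numerically from a sampled trajectory. The following Python sketch (our own illustration, not from the paper) does this with a Riemann sum and checks it against the closed form U_source(x^0,x^1) = (a/2)max(0, x^0-|x^1|) for a charge at rest at the origin.

```python
import numpy as np

def u_source(x0, x1, t_grid, z1, u0, a):
    # Riemann-sum evaluation of
    #   U_source(x0, x1) = (a/2) * ∫_0^{x0} (1/u^0(t)) * 1{|x1 - z^1(t)| <= x0 - t} dt
    # on a uniform time grid.
    dt = t_grid[1] - t_grid[0]
    inside = (t_grid <= x0) & (np.abs(x1 - z1) <= (x0 - t_grid))
    return 0.5 * a * np.sum(inside / u0) * dt

# Sanity check with a charge at rest at the origin (z^1 ≡ 0, u^0 ≡ 1):
# the integral reduces to (a/2) * max(0, x0 - |x1|).
a = 0.8
t_grid = np.arange(0.0, 10.0, 1.0e-4)
z1 = np.zeros_like(t_grid)
u0 = np.ones_like(t_grid)

x0, x1 = 6.0, 2.5
print(u_source(x0, x1, t_grid, z1, u0, a), 0.5 * a * max(0.0, x0 - abs(x1)))
```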
Since our system's dynamics depend only on the field evaluated along the world-line of the particle, it suffices to consider
U_source(x^0,z^1(x^0))
:=a/2∫_0^x^01/u^0(t) dt=a/2∫_0^x^0m(t)/√(m^2(t)+p^2(t)) dt.
Let U_stat and V be the solutions to (<ref>). Given the conditions of (<ref>), there exists a unique, global in time solution to the following set of integral equations
Q(x^0) = ∫_0^x^0p/√(m^2 + p^2)dt
p(x^0) = ∫_0^x^0-a^2/2p/√(m^2 +p^2)+am/√(m^2+p^2)∂_1 V(t,Q(t)) dt
W(x^0) = ∫_0^x^0a/2m/√(m^2+p^2) dt,
where m(t,Q(t),W(t)):=m̃-aU_stat(t,Q(t))-aV(t,Q(t))-aW(t).
We treat this as a fixed-point problem.
Introduce q(·)=[ Q(·); p (·); W (·) ] so that we may write (<ref>), (<ref>), and (<ref>) as
q(x^0)=∫_0^x^0 f(q(t),t)dt:= F_x^0 (q(·)).
We seek to prove the existence and uniqueness of a fixed point for the function F_*(q(·)), which maps a given curve in ℝ^3 to another curve in ℝ^3. Notice that any such fixed point will have a bounded first derivative in all three components. Define L_k,γ(ℝ_≥ 0) to be the normed set of Lipschitz continuous functions l(·):ℝ_≥ 0→ℝ with Lipschitz constant k, and which satisfy l(0)=0. We equip this space with the weighted L^∞ norm
||l(·)||_γ=sup_t≥ 0e^-γ t|l(t)|<∞.
We extend this definition for maps from ℝ_≥ 0→ℝ^n by defining
L_k⃗,γ(ℝ_≥0)=∏_i=1^n L_k_i, γ(ℝ_≥0)
(Cartesian product), where k⃗=(k_1,k_2, ... k_n).
For each fixed k⃗, the collection {L_k⃗,γ}_γ>0 is an uncountable family of metric spaces whose members share the same elements but differ in their equipped metric.
Fixing both k⃗ and γ>0, we have that
L_k⃗,γ is a complete subset of a Banach space.
Corollary (<ref>) in the appendix.
Letting K be an upper bound for a^2/2 +||a∂_1 V_0||_L^∞+||aV_1||_L^∞, and k⃗=(1,K,|a|/2), it becomes clear that our desired fixed point should, if it exists, reside in L_k⃗,γ.
F_*(q(·)):L_k⃗,γ→ L_k⃗,γ is a well defined mapping.
Let q=[ Q(·); p(·); W(·) ]∈ L_k⃗,γ, and write F_*(q(·))=[ P(*); ρ(*); S(*) ]. Notice
∀θ, θ' ≥ 0, |ρ(θ)-ρ(θ')| =|∫_θ^θ'-a^2/2p/√(m^2 +p^2)+am/√(m^2+p^2)∂_1 V(t,Q(t)) dt|
≤ K|θ-θ'|.
So ρ(*)∈ L_K,γ(ℝ_≥ 0). The proofs for P(*) and S(*) are analogous.
In order to prove the existence of a unique fixed point of F_*(q(·)), we need to show it is a contraction mapping on L_k⃗,γ. Towards this, we prove two lemmas.
For |Q(t)|≤ t and |W(t)|≤|a|/2t, the quantity m_V:=m̃-|a|(||V_0||_L^∞+1/2||V_1||_L^1) is a lower bound for m(Q(t),W(t)), and is positive by our smallness assumption on V_0, V_1.
(<ref>). Since |Q(t)|≤ t, (<ref>) lets us compute that
U_stat(t,Q(t))=-a/2t.
Thus,
m(W(t),Q(t)) =m̃-aV(Q(t))+a^2/2t -aW(t)
≥m̃-aV(Q(t))≥m̃-|a| ||V||_L^∞≥ m_V.
Let Y_γ be a complete metric space equipped with the metric induced by ||·||_γ, such that F_*(q(·)):Y_γ→ Y_γ is a well defined mapping. Suppose there exists an L<γ such that for all q_1(·),q_2(·) ∈ Y_γ
||f(q_2(τ),τ)-f(q_1(τ),τ)||_X ≤ L ||q_2(τ)-q_1(τ)||_X.
Then F_*(q(·)):Y_γ→ Y_γ is a contraction mapping.
Theorem (<ref>) in the appendix.
There exists an L<∞ independent of γ such that for all q_1(·), q_2(·) ∈ L_k⃗,γ,
||f(q_2(t),t)-f(q_1(t),t)||≤ L||q_2(t)-q_1(t)||.
Fix t≥ 0. We omit arguments of t for ease of notation, and write q_1=[ Q; p; W ], q_2=[ P; ρ; S ]. By the triangle inequality we have
||f(q_2)-f(q_1)||≤
≤∑_i=1^3 |f_i[ Q; p; W ]-f_i[ P; p; W ]|+|f_i[ P; p; W ]-f_i[ P; ρ; W ]|+|f_i[ P; ρ; W ]-f_i[ P; ρ; S ]|,
where f_i denotes the components of f. There are nine terms that we wish to bound. For the sake of brevity, we will restrict ourselves to the least trivial bound. By the mean value theorem there exists an O between Q and P such that
|f_2[ Q; p; W ]-f_2[ P; p; W ]|=|Q-P||D_1f_2[ O; p; W ]|.
We wish to bound D_1 f_2. Recall
f_2[ O; p; W ]=-a^2/2p/√(m^2 +p^2)+am/√(m^2+p^2)∂_1 V(O).
Since O is between Q and P, m(O,W) is bounded from below by m_V. Also, |dm(Q,W)/dQ(O,W)|=|a∂_1V(O)|, which is bounded from above by K. Let K' be the Lipschitz constant of ∂_1 V. Tedious calculations then yield the following bound
|D_1f_2[ O; p; W ]|≤a^2/2K/m_V +K^2/m_V+|a|K'=:M_1,2.
Performing a similar analysis on the other 8 terms yields a bound of the form
||f(q_2)-f(q_1)|| ≤ |Q-P|∑_j=1^3 M_1,j+|p-ρ|∑_j=1^3 M_2,j+|W-S|∑_j=1^3 M_3,j
≤ ||q_2-q_1||√((∑_j=1^3 M_1,j)^2+(∑_j=1^3 M_2,j)^2+(∑_j=1^3 M_3,j)^2).
Applying Lemma (<ref>) with γ>L, we find that there exists a unique q∈ L_k⃗,γ(ℝ_≥ 0) satisfying
q(x^0)=F_x^0 (q(·))=∫_0^x^0 f(q(t),t)dt
for all x^0≥0. This concludes our proof of Theorem (<ref>).
Letting q(x^0)=[ Q(x^0); p(x^0); W(x^0) ], and defining the function W_source(x^0,x^1) via
W_source(x^0,x^1)=∫_0^x^0a/2√(1-(dQ/dx^0)^2)χ_[x^1-(x^0-t),x^1+(x^0-t)](Q(t))dt,
we find that Q(x^0), p(x^0), and U(x^0,x^1):=V(x^0,x^1)+U_stat(x^0,x^1)+W_source(x^0,x^1) are unique global solutions to the joint IVP given by equations (<ref>), (<ref>), and (<ref>). This concludes our proof of Theorem (<ref>).
§ ASYMPTOTIC STABILITY OF SCALAR PARTICLE
The force law that describes the charge's dynamics given by (<ref>) is very similar to the one given by the principle of stationary action (<ref>), with the exception of a well-defined self-force. We derived this from the assumption that the total energy-momentum of our system is conserved, so it is not unexpected that the self-force is restorative, in magnitude proportional to the particle's velocity but with the opposite sign. An animation of a scalar particle's sourced field (https://sites.math.rutgers.edu/~laf230/Motion_of_Perturbed_Scalar_Particle_V2.gif) showcases how, as the particle's motion is disturbed, so too is the field that it is sourcing. These disturbances in the sourced field carry energy, so to preserve the total energy of the system, the field disturbance's energy must have been sourced by the particle's kinetic energy. This is interpreted as an effective “self-force” which was conjectured to sap away the particle's kinetic energy until all motion ceases.
We will show that, as expected, in the case of a scalar particle perturbed by some incoming radiation, the self-force causes the charge to asymptotically return towards rest.
§.§ Asymptotic behavior of self-force
Take V_0, V_1 sufficiently small, smooth, and compactly supported away from the origin so that the radiation may propagate towards the particle.
Although the form of (<ref>) was beneficial for proving well-posedness, we will find it best to convert the equation to one for du^1/dx^0 to better study the charge's motion. Chain rule gives us
m du^1/dx^0=-a^2/2u^1+au^1 dV(x^0,z^1(x^0))/dx^0+a/u^0∂_1V(x^0,z^1).
The terms in (<ref>) can be split up into two categories, those which depend on the external radiation and those which do not. We write
md u^1/dx^0=F_self+ F_ext,
where
F_self =-a^2/2u^1,
and
F_ext =au^1d V(x^0,z^1(x^0))/dx^0+a/u^0∂_1 V.
Since the incoming radiation was compactly supported and is propagating at the speed of light, we expect it to perturb the motion of the particle and then propagate away. Eventually we expect only the self-forces to remain, and it is clear that the particle will asymptotically tend towards rest as long as the dynamical mass does not grow too quickly.
We begin by proving that the external radiation does eventually propagate away from the particle.
Let z^μ, p^μ, and U satisfy the joint IVP given by (<ref>), (<ref>), and (<ref>) with the same conditions as (<ref>). Let v=dz^1/dx^0. Then
∫_0^∞ 1-|v(t)| dt =∞.
Suppose not. Then
∫_0^∞ 1-|v(t)| dt= D <∞.
Recall that
v=p^1/√(m^2 +(p^1)^2).
We proceed by proving estimates about p^1 and m which will contradict the rate of growth of |v|. The first will be the rough linear estimate for p^1. From (<ref>) we have
d p^1/d x^0=-a^2/2v +a√(1-v^2)∂_1 V.
So |p^1(x^0)|≤ Rx^0 where R≥a^2/2+||a∂_1 V_0||_L^∞+||aV_1||_L^∞. To estimate m(t), recall
m(x^0)= m̃-aV(x^0,z^1(x^0))+a^2/2x^0 -aU_source(x^0,z^1(x^0)).
We bounded m from below in the previous section by showing that |aU_source(x^0,z^1(x^0))|≤a^2/2x^0. Using our contradiction hypothesis, we will now prove a stronger bound. For x^0 ≥ 0 we have
|aU_source(x^0,z^1(x^0))| = a^2/2∫_0^x^0√(1-v(t)^2) dt
≤a^2/2(√(∫_0^x^0 1-|v(t)| dt))·(√(∫_0^x^0 1+|v(t)| dt))
≤a^2/2√(D)√(2x^0).
By the conditions on the radiation we see that the linear term will dominate m(x^0). So, there exists a time T>0, and a slope M>0 such that for all x^0>T we have that m(x^0)≥ Mx^0. Using our estimates for p^1 and m, we can conclude with a final estimate for v. For all t>T
|v(t)|=|p^1|/√(m^2 +(p^1)^2)≤Rt/√((Mt)^2 +(Rt)^2)=R/√(M^2+R^2)<1.
But this implies that 1-|v(t)| is bounded from below, so
∫_0^∞ 1-|v(t)| dt =∞.
as desired.
All external force terms in F_ext will vanish in finite time.
It suffices to show that ∂_1V, ∂_0V evaluated along the particle's world-line vanish after finite time. By d'Alembert's formula we have
∂_1V(x^0,z^1(x^0))=1/2[V_0'(z^1(x^0)+x^0)+V_0'(z^1(x^0)-x^0)+V_1(z^1(x^0)+x^0)-V_1(z^1(x^0)-x^0)].
∂_0V(x^0,z^1(x^0))=1/2[V_0'(z^1(x^0)+x^0)-V_0'(z^1(x^0)-x^0)+V_1(z^1(x^0)+x^0)+V_1(z^1(x^0)-x^0)].
Lemma (<ref>) implies that the quantities z^1(x^0)± x^0 become arbitrarily large in magnitude, so each term in (<ref>) will vanish once z^1(x^0)± x^0 leave the supports of V_0, V_1.
We conclude this section by showing that the perturbed charge asymptotically returns towards rest.
For every ϵ>0, there exists a T_ϵ>0 such that |u^1(x^0)|<ϵ for all x^0>T_ϵ.
By Corollary (<ref>), there exists a time T such that for all x^0>T, u^1(x^0) satisfies
du^1/dx^0=-a^2u^1/2m.
If at some time T'>T we have that u^1=0, then it will remain 0 for all time afterwards and the theorem holds. Suppose |u^1|>0 for all time after T. Dividing both sides of (<ref>) by u^1 and integrating returns
ln(|u^1(x^0)|)=ln(|u^1(T)|)-a^2/2∫_T^x^01/mdt.
Recall that m is strictly positive and increases at most linearly, thus there exists an A such that for all x^0>T>0, m<Ax^0. It follows that
ln(|u^1(x^0)/u^1(T)|)<-a^2/2A(ln(x^0/T)).
So
|u^1(x^0)|< |u^1(T)|(x^0/T)^-a^2/2A.
Since a^2/2A>0, we see that u^1 decays to zero asymptotically.
§ SUMMARY AND OUTLOOK
In this paper we studied the joint evolution of a point particle coupled to a scalar field in 1+1 dimensions. A rigorous derivation for the force on a charged particle was given from the assumption of conservation of energy-momentum, and from this we calculated the self-force which results from back-reaction. The scalar self-force that charged particles experience is restorative, proportional to negative velocity, and causes the charge to asymptotically return towards rest after being perturbed by radiation. Written explicitly:
md u^1/dx^0=F_self+ F_ext,
where
F_self =-a^2/2u^1,
and a denotes the scalar charge.
We also proved the well-posedness for the joint evolution problem.
Although our study was meant to act as a toy model, it has produced some fruitful results. They indicate that for universes in which fields coupled to point particles evolve with a certain level of regularity, the joint evolution problem of fields and particles is well-posed under the condition that their dynamics preserve the total energy-momentum. The Maxwell-Lorentz theory of point charges in 3+1 dimensions does not give rise to the level of regularity sufficient for such joint evolutions to exist. BLTP, a higher-order modification to Maxwell-Lorentz theory has found success in 3+1 dimensions because of the regularity of its field's evolution, and we have successfully recreated these results in 1+1 dimensions. However, the complexity of the BLTP electromagnetic self-force in 3+1 dimensions does not admit a clear way to prove that the process of back-reaction will asymptotically return a perturbed particle to rest. Working in 1+1 dimensions has afforded us a much simpler set of dynamics, and we were able to prove this conjecture in the case of a scalar particle. The mathematical simplicity we are given in this lower dimensional space makes our results approachable and easily understood, and many of the physically intuitive arguments still carry over to higher-dimensional counterparts.
There are multiple avenues that we will take to further investigate the process of back-reaction. Firstly, we would like to generalize our results from the case of particles coupled to massless scalar fields to the case of massive scalar fields. We believe that our results for the massive field case will provide researchers with a clearer picture of back-reaction in higher order modifications of 3+1 dimensional field theories. We also plan to pursue a project investigating gravitational back-reaction to see whether a gravitational self-force arises when deriving the force law from the principle of energy conservation.
§ WELL-POSEDNESS OF INTEGRAL EQUATIONS
In this section we will provide brief proofs of well-known results regarding the well-posedness of integral equations. We will do this with the help of Banach fixed point theorem.
Let X be a Banach space, and let C_γ^0(ℝ_≥ 0,X) be the normed set of continuous maps from ℝ_≥ 0→ X such that for all x(·) ∈ C_γ^0(ℝ_≥ 0,X) we have
||x(·)||_γ=sup_t≥ 0e^-γ t||x(t)||_X<∞.
Then C_γ^0(ℝ_≥ 0,X) is a Banach space.
Let f:X×ℝ_≥ 0→ X. Define
F_t (q(·)|x_0):=x_0+∫_0^t f(q(τ),τ)dτ.
Let Y_γ be a complete subset of C_γ^0(ℝ_≥ 0,X) such that F_*(x(·)|x_0):Y_γ→ Y_γ is a well defined mapping. Suppose there exists an L<γ such that for all q_1(·),q_2(·) ∈ Y_γ
||f(q_2(τ),τ)-f(q_1(τ),τ)||_X ≤ L ||q_2(τ)-q_1(τ)||_X.
Then F_*(x(·)|x_0) restricted to Y_γ is a contraction mapping on a complete metric space. Thus, Y_γ contains a unique fixed point q(·) which satisfies
q(t)=F_t(q(·)|x_0)=x_0 +∫_0^t f(q(τ),τ)dτ.
for all t≥ 0.
We wish to show that for all x_1(·),x_2(·) ∈ Y_γ,
||F_*(x_2(·)|x_0)-F_*(x_1(·)|x_0)||_γ≤L/γ ||x_2(·)-x_1(·)||_γ.
By definition
||F_*(x_2(·)|x_0)-F_*(x_1(·)|x_0)||_γ =sup_t≥ 0e^-γ t||∫_0^t f(x_2(τ),τ)-f(x_1(τ),τ)dτ||_X
≤sup_t≥ 0e^-γ t∫_0^t ||f(x_2(τ),τ)-f(x_1(τ),τ)||_X dτ
≤sup_t≥ 0e^-γ t∫_0^t L ||x_2(τ)-x_1(τ)||_Xdτ
=sup_t≥ 0∫_0^t e^-γ(t-τ)e^-γτL||x_2(τ)-x_1(τ)||_Xdτ
≤sup_t≥ 0∫_0^t e^-γ(t-τ)L||x_2(·)-x_1(·)||_γ dτ
≤L/γ||x_2(·)-x_1(·)||_γ.
In cases where f:ℝ×ℝ_≥ 0→ℝ is bounded it is useful to consider L_k,γ(ℝ_≥ 0)⊂ C_γ^0(ℝ_≥ 0,ℝ), the normed set of all Lipschitz continuous functions l(·) with Lipschitz constant k and l(0)=0.
For all γ>0,
L_k,γ(ℝ_≥ 0) is a complete subset of C_γ^0(ℝ_≥ 0,ℝ).
Let {l_n(·)} be a Cauchy sequence in L_k,γ(ℝ_≥ 0). This has a limit l(·) in C_γ^0(ℝ_≥ 0,ℝ). Also, we have that for every t, t' ≥ 0
k|t-t'| -|l_n(t)-l_n(t')|≥ 0.
The convergence is pointwise, so taking the limit of both sides as n goes to infinity yields
k|t-t'| -|l(t)-l(t')|≥ 0.
Since this holds for all t,t', we conclude that l(·)∈ L_k,γ(ℝ_≥ 0), as desired.
For all γ>0, k⃗∈ℝ^n, it follows that L_k⃗,γ(ℝ_≥ 0):=∏_i=1^n L_k_i,γ(ℝ_≥ 0) is a complete subset of C_γ^0(ℝ_≥ 0,ℝ^n).
plain
10
SEA
Aditya Agashe, Ethan Lee, and A. Shadi Tahvildar-Zadeh.
On the joint evolution problem for a scalar field and its
singularity.
Involve, (to appear), 2023.
Bopp1
Fritz Bopp.
Eine lineare Theorie des Elektrons.
Annalen der Physik, 430:345 – 384, 03 2006.
Bopp2
Fritz Bopp.
Lineare Theorie des Elektrons. II.
Annalen der Physik, 434:573 – 608, 03 2006.
Dirac
Paul Adrien Maurice Dirac.
Classical theory of radiating electrons .
Royal Society, 160:148–169, 1938.
Hoang
Vu Hoang, Maria Radosz, Angel Harb, Aaron DeLeon, and Alan Baza.
Radiation reaction in higher-order electrodynamics.
Journal of Mathematical Physics, 62(7):072901, 2021.
Kiessling
Michael K.-H. Kiessling.
Force on a point charge source of the classical electromagnetic
field.
Phys. Rev. D, 100:065012, 2019.
KiesslingE
Michael K.-H. Kiessling.
Erratum: Force on a point charge source of the classical
electromagnetic field [Phys. Rev. D 100, 065012 (2019)].
Phys. Rev. D, 101:109901, 05 2020.
KTZ
Michael K.-H. Kiessling and A. Shadi Tahvildar-Zadeh.
Bopp-Landé-Thomas-Podolsky electrodynamics as initial value
problem.
In preparation, 2023.
Ko20
A. I. Komech and E. A. Kopylova.
Attractors of nonlinear Hamiltonian partial differential equations.
Russian Mathematical Surveys, 75(1):1, feb 2020.
Lande
Alfred Landé.
Finite Self-Energies in Radiation Theory. Part I.
Phys. Rev., 60:121–127, 07 1941.
Thomas
Alfred Landé and Llewellyn H. Thomas.
Finite Self-Energies in Radiation Theory. Part II.
Phys. Rev., 60:514–523, 10 1941.
Podolsky
Boris Podolsky.
A Generalized Electrodynamics Part I—Non-Quantum.
Phys. Rev., 62:68–71, 07 1942.
Poincare
Henri Poincaré.
Sur la dynamique de l’électron.
Rendiconti del Circolo Matematico di Palermo, 21:129–175,
1906.
Poisson
Eric Poisson, Adam Pound, and Ian Vega.
The motion of point particles in curved spacetime.
Living Rev. Relativ., 14, 2011.
spohn_2004
Herbert Spohn.
Dynamics of Charged Particles and their Radiation Field.
Cambridge University Press, 2004.
Weyl
Hermann Weyl.
Feld und Materie.
Annalen der Physik, 65:541–563, 1921.
|
http://arxiv.org/abs/2307.02056v1
|
20230705064705
|
Energy Consumption of Electric Vehicles: Effect of Lateral Dynamics
|
[
"Simran Kumari",
"Susenjit Ghosh",
"Ashish R. Hota",
"Siddhartha Mukhopadhyay"
] |
eess.SY
|
[
"eess.SY",
"cs.SY"
] |
Energy Consumption of Electric Vehicles: Effect of Lateral Dynamics
Simran Kumari, Susenjit Ghosh, Ashish R. Hota, Siddhartha Mukhopadhyay
Authors are with the Department of Electrical Engineering, Indian Institute of Technology (IIT) Kharagpur, India. Email: [email protected], [email protected], [email protected] and [email protected].
This work is supported by the Ministry of Education (MoE), Govt. of India under the Prime Minister Research Fellowship (PMRF) scheme and partially supported by the project HEV of SRIC IIT Kharagpur, jointly funded by Tata Motors Limited (TML) and Govt. of India under the UAY scheme.
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Current research on energy related problems such as eco-routing, eco-driving and range prediction for electric vehicles (EVs) primarily considers the effect of longitudinal dynamics on EV energy consumption. However, real-world driving includes longitudinal as well as lateral motion. Therefore, it is important to understand the effects of lateral dynamics on battery energy consumption. This paper conducts an analysis of the stated effect and validates its significance through simulations. Specifically, this study demonstrates that inclusion of the effect of lateral dynamics can improve accuracy and reliability of solutions in eco-routing, eco-driving and range prediction applications.
Energy flow, Lateral dynamics, Energy aware driving, Electric vehicle
§ INTRODUCTION
EV technology has seen a boom in recent years <cit.>. However, state-of-art technology does not adequately address the issue of range anxiety among EV drivers. Various reasons leading to this issue are route, traffic, driver and vehicle powertrain <cit.>. There have been several works related to EV energy consumption such as energy consumption prediction <cit.>, <cit.>, <cit.>, eco-driving <cit.>,<cit.>,<cit.>, eco-routing <cit.>,<cit.> and range prediction <cit.>,<cit.>, <cit.> in order to deal with range anxiety issue. EV energy consumption model is a fundamental block for these applications.
State-of-art works use different approaches for developing the same. Earlier work <cit.> presents quasi-steady backward power-based energy consumption model and computes regenerative braking efficiency. Additionally, <cit.> presents a dynamic model namely integrated battery-electric vehicle model which include battery dynamics, motor dynamics as well as vehicle longitudinal dynamics. Reference <cit.> presents analysis of energy optimal driving for conventional as well as electric vehicles from optimal control perspective. It utilizes Pontryagin’s Minimum Principle (PMP) to obtain velocity profile which minimizes wheel to distance and tank to distance energy consumption. In <cit.>, authors study motion control problems such as cruise distance maximization and travel time minimization utilizing electric vehicle power consumption model for an EV. Additionally, <cit.> models eco-driving problem as battery energy consumption minimization problem over road segments. Furthermore given predicted drive cycle and current state of charge (SOC), <cit.> utilizes unscented kalman filter (UKF) to predict SOC profile through quasi-static power consumption model. Above works utilize longitudinal vehicle dynamics for modelling EV energy consumption. However, longitudinal vehicle dynamics does not capture realistic on-road driving scenario due to various factors such as road geometry, driver intention and traffic behaviour. Realistic driving includes coupled longitudinal and lateral motion. Therefore, it is necessary to consider the effect of steering along with accelerator and brake pedal actuators to obtain more accurate estimate of energy consumption.
A few recent works have attempted to capture the effect of steering action on energy consumption. For a given turning radius, <cit.> estimates the lateral force for different values of longitudinal velocity and finds the energy-optimal velocity for the turning maneuver. Based on the terminal optimal velocity values, a transition velocity profile between straight and curved road segments is calculated through a dynamic programming approach. In <cit.>, the authors evaluate the maximum cornering speed at which tire force does not saturate and use this in hybrid electric vehicle (HEV) energy management. However, these works do not incorporate varying-curvature roads and lane changes, which are common in real-life driving situations. In order to address this research gap, we present an analysis of the effect of lateral dynamics on the energy consumption of a rear-wheel-driven (RWD) EV. Our analysis indicates that state-of-the-art energy consumption models based on longitudinal motion underestimate energy consumption in EVs, and motivates the inclusion of lateral dynamics in such models.
§ ENERGY FLOW IN AN ELECTRIC VEHICLE
It is important to understand the flow of energy from energy source to maneuver in order to analyse the effect of lateral dynamics on energy consumption. An EV powertrain generally consists of battery as energy storage, motor as propulsion source followed by fixed gear differential with its axles attached to wheel. The flow of energy for a RWD EV maneuvering from time t_0 to t_f is presented below.
§.§ Energy consumed in maneuver
Different quantities associated with the EV dynamics during a maneuver are shown in Fig. <ref>. The distances of the center of gravity (COG) from the front and rear axles are denoted by a and b respectively. Additionally, l denotes half of the track width of the EV, and the EV front wheels are steered by an angle δ. The longitudinal velocity, lateral velocity and yaw rate of the COG of the EV are denoted by v_x, v_y and ψ respectively. Dependence on time is suppressed for better readability. Similarly, F_x and F_y denote the resultant longitudinal and lateral forces at the COG of the EV, and M_z denotes the resultant yaw moment about it. The force and moment are generated by forces acting at the wheel-road contact. When an EV is performing a coupled longitudinal and lateral maneuver, the utilized energy is given as:
E_w,m =∫_t_0^t_fF_xv_x dt+∫_t_0^t_fF_yv_y dt+∫_t_0^t_fM_zψ dt.
Here, E_w,m is energy utilized in maneuver. Assuming that camber angle of each wheel is zero, the relationship between forces and moment at vehicle level with wheel forces are:
F_x= (F_wlr^x+F_wrr^x)-(F_wlf^y+F_wrf^y)sinδ
-(F_wlf^x+F_wrf^x)cosδ-F_a,
F_y= (F_wlf^y+F_wrf^y)cosδ+(F_wlr^y+F_wrr^y)
-(F_wlf^x+F_wrf^x)sinδ,
M_z= (-F_wlr^y-F_wrr^y)b+(F_wlf^ylsinδ+F_wlf^yacosδ)
+(F_wrf^ylsinδ+F_wrf^yacosδ)
+(F_wrf^xlcosδ-F_wrf^yasinδ)
+(F_wlf^xlcosδ-F_wlf^yasinδ).
Here, F_w*#^x and F_w*#^y are wheel longitudinal and lateral forces at *# wheel where *∈{l:left,r:right} and #∈{f:front, r:rear}. Throughout the paper, subscripts *∈{l:left,r:right} and #∈{f:front, r:rear} are used. Similar to forces, wheel velocities are also related to vehicle velocities and the relationship is given as:
v_wlf^x =(v_x-ψl)cosδ+(v_y+ψa)sinδ,
v_wlf^y =-(v_x-ψl)sinδ+(v_y+ψa)cosδ,
v_wrf^x =(v_x+ψl)cosδ+(v_y+ψa)sinδ,
v_wrf^y =-(v_x+ψl)sinδ+(v_y+ψa)cosδ,
v_wlr^x =v_x-ψl, v_wlr^y=v_y-ψb,
v_wrr^x =v_x+ψl, v_wrr^y=v_y-ψb,
where, v_w*#^x and v_w*#^y are longitudinal and lateral components of wheel velocity at wheel *#.
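To make the kinematic relations above concrete, the following Python sketch (our own illustration; all numerical values are made up) evaluates the wheel velocity components from the vehicle states, with the front wheels steered by δ.

```python
import numpy as np

def wheel_velocities(v_x, v_y, yaw_rate, delta, a, b, l):
    # Longitudinal/lateral velocity components at each wheel, following the
    # rigid-body relations in the text (front wheels steered by delta).
    c, s = np.cos(delta), np.sin(delta)
    return {
        "lf": ((v_x - yaw_rate * l) * c + (v_y + yaw_rate * a) * s,
               -(v_x - yaw_rate * l) * s + (v_y + yaw_rate * a) * c),
        "rf": ((v_x + yaw_rate * l) * c + (v_y + yaw_rate * a) * s,
               -(v_x + yaw_rate * l) * s + (v_y + yaw_rate * a) * c),
        "lr": (v_x - yaw_rate * l, v_y - yaw_rate * b),
        "rr": (v_x + yaw_rate * l, v_y - yaw_rate * b),
    }

# Hypothetical vehicle states and geometry (a, b: COG-to-axle distances; l: half track width).
vels = wheel_velocities(v_x=15.0, v_y=0.3, yaw_rate=0.1,
                        delta=np.deg2rad(3.0), a=1.2, b=1.4, l=0.8)
for name, (vwx, vwy) in vels.items():
    print(f"{name}: v_w^x = {vwx:7.3f} m/s, v_w^y = {vwy:7.3f} m/s")
```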
§.§ Energy consumed in wheel translation motion
Brake torque T_b*# and rolling resistance moment T_R*# resist wheel rotational motion. However, axle torque T_w*# assists in wheel rotation during acceleration and resists the rotational motion during deceleration in the presence of regenerative braking. Therefore, the input energy consumed by a wheel is
E_w*#,in =∫_t_0^t_f(T_w*#-T_b*#-T_R*#)ω_w*# dt.
Some part of wheel input energy is used in rotating wheel E_rot,w*# and wheel longitudinal maneuver E_w*#,out^x while rest is lost due to friction E_loss,w*#. Energy consumed in wheel longitudinal translation motion is given as:
E_w*#,out^x =E_w*#,in-E_rot,w*#-E_loss,w*#
=∫_t_0^t_f(F_w*#^xv_w*#^x)dt,where,
E_rot,w*# =∫_t_0^t_f(T_w*#-T_b*#-T_R*#-F_w*#^xr_w)ω_w*# dt
E_loss,w*# =∫_t_0^t_fF_w*#^x(r_wω_w*#-v_w*#^x) dt.
Here, ω_w*# and r_w denote wheel rotational speed and radius of wheel (same for all the wheels).
Wheel lateral forces, generated at tyre-road contact due to steering action, resist wheel motion in order to align with wheels in steered direction and thus dissipate energy. Therefore, lateral forces also contribute to wheel output energy along with longitudinal forces. The output energy of individual wheel is:
E_w*#,out= E_w*#,out^x+E_w*#,out^y
= ∫_t_0^t_f(F_w*#^xv_w*#^x+F_w*#^yv_w*#^y) dt
where, E_w*#,out^y is energy consumed in wheel lateral translation motion. For RWD vehicles, E_w*#,out^x is positive for rear wheels which contribute to wheel traction energy. The value of E_w*#,out^x is negative for front wheels indicating dissipation of energy due to resisting wheel longitudinal forces during acceleration as well as deceleration. This causes decrease in traction energy and leads to a lower value of resultant traction energy. In absence of regeneration E_rot,w*# and E_w*#,out^x are lost as thermal energy during braking. However in presence of regenerative braking, energy is not totally lost as thermal energy and some part is recovered as battery energy.
Through simplification of equations, it can be shown that energy consumed in maneuver is same as sum of energy consumed in translation motion of all wheels. Thus,
E_w,m =E_wlr,out+E_wrr,out+E_wlf,out+E_wrf,out.
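The energy bookkeeping above amounts to numerically integrating power over time. A minimal Python sketch of such an evaluation (our own illustration, with arbitrary constant signals) is given below; summing the wheel output energies over the four wheels reproduces the maneuver energy whenever the forces and velocities obey the rigid-body relations of the previous subsection.

```python
import numpy as np

def integrate(y, t):
    # Trapezoidal integration of the sampled signal y(t).
    return float(((y[1:] + y[:-1]) * np.diff(t)).sum() / 2.0)

def maneuver_energy(t, F_x, v_x, F_y, v_y, M_z, yaw_rate):
    # E_w,m = ∫ F_x v_x dt + ∫ F_y v_y dt + ∫ M_z ψ dt
    return integrate(F_x * v_x, t) + integrate(F_y * v_y, t) + integrate(M_z * yaw_rate, t)

def wheel_output_energy(t, Fw_x, vw_x, Fw_y, vw_y):
    # E_w*#,out = ∫ (F_w^x v_w^x + F_w^y v_w^y) dt for a single wheel.
    return integrate(Fw_x * vw_x + Fw_y * vw_y, t)

# Minimal usage with made-up, constant signals over 10 s.
t = np.linspace(0.0, 10.0, 1001)
ones = np.ones_like(t)
E = maneuver_energy(t, F_x=800.0 * ones, v_x=15.0 * ones,
                    F_y=50.0 * ones, v_y=0.2 * ones,
                    M_z=30.0 * ones, yaw_rate=0.05 * ones)
print(f"E_w,m ≈ {E / 1000.0:.1f} kJ")
```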
§.§ Input energy from powertrain to wheel
Axle torque contributes to input energy of wheel from powertrain E_w*#,in^p and is given as:
E_w*#,in^p=∫_t_0^t_f(T_w*#ω_w*#) dt,
where T_wlf=T_wrf=0. Input energy for individual wheels are given below:
E_w*#,in=E_w*#,in^p+∫_t_0^t_f (-T_b*#-T_R*#)ω_w*# dt.
For RWD EV, differential output energy, denoted by E_d,out, is distributed into rear left and right wheels as:
E_d,out =E_wlr,in^p+E_wrr,in^p.
§.§ Energy flow from battery to differential
Motor output energy is input to differential and is related to differential output energy as E_d,out=η_diffE_m,out. Here, η_diff and E_m,out are differential efficiency and motor output energy respectively. Motor output energy is given as:
E_m,out =∫_t_0^t_f T_m(t)ω_m(t) dt
=E_m,in-∫_t_0^t_fP_m,loss(T_m(t),ω_m(t)) dt,
where E_m,in = η_invE_b,out, T_m≥ 0
E_b,out/η_inv, T_m ≤ 0.
Here, T_m, ω_m, η_inv, P_m,loss, E_m,in and E_b,out are motor torque, motor speed, efficiency of inverter, motor power loss, motor input energy and battery output energy respectively.
With V_b and I_b as terminal voltage and current flowing through battery, output energy of battery is:
E_b,out= ∫_t_0^t_f V_b(t)I_b(t) dt.
It is evident from above energy flow analysis that energy from battery is utilized in rotating wheel and to overcome friction loss, front wheels slippage loss (due to component of resistive wheel longitudinal forces) and cornering loss (due to component of resistive wheel lateral forces). Therefore, overall energy demand for an EV trying to achieve specific longitudinal maneuver over a duration increases in presence of lateral maneuver. Similar analysis can be applied to front wheel driven (FWD) EV.
§.§ Energy Flow Simulation Results
The presented flow of energy can be visualized graphically in terms of power flow from source to maneuver.[The analysis up to the next section is carried out for a specific driver behaviour. Results for other driver behaviours would qualitatively remain the same.] To analyze the energy flow, a simulation study is conducted using Matlab-Simulink <cit.>. An EV model, consisting of powertrain and planar dynamics, is simulated to track the FTP-75 drive cycle for a given driver behaviour. The planar maneuver followed by the EV is characterized by longitudinal speed, inertial Y-coordinate and yaw rate. Fig. <ref> shows a snapshot of the maneuver for the duration 90-130 s.
Power flow profile for this small duration of the EV maneuver in absence and presence of regeneration braking is shown in Fig. <ref> and <ref> respectively.
Positive battery power indicates situation when driver is pressing accelerator pedal and energy flows from battery to maneuver. In this case, power corresponding to input energy consumed by wheels is smaller in magnitude compared to battery power due to efficiency factor of motor and differential. Resisting front wheel longitudinal forces causes decrease in magnitude of wheel traction power and leads to resultant traction power with smaller magnitude. The power consumed in maneuver is same as resultant traction power in case of longitudinal motion. However, there is a decrease in power consumed during lateral maneuver due to resistive front wheel lateral forces. Power consumed in wheel rotation and dissipated in tyre-road contact friction loss is at most 2% of power supplied by battery and is negligible. However, the magnitude of power consumed in wheel rotation is not negligible during braking. Instead negative value of this power and power consumed in maneuver, in Fig. <ref>, indicate that brake mechanism dissipates vehicle maneuver energy and wheel rotational energy as thermal energy. In case of regeneration, negative wheel input power and battery power, shown in Fig. <ref>, indicates some part of wheel rotation and maneuver energy flows back to battery and rest is dissipated as thermal energy during braking.
Fig. <ref> shows the variation in power requirement of an EV trying to achieve the longitudinal maneuver specified in Fig <ref> in absence and presence of specified lane change maneuver. The plot in the top panel corresponds to EV without regeneration and the plot in the bottom panel corresponds to EV with regeneration. It can be observed that power demand increases in presence of lateral maneuver. There is a decrease in longitudinal speed during lateral maneuver due to resisting wheel lateral forces. Driver presses accelerator pedal more to sustain target speed and battery power demand increases as a result. It can also be observed that power demand is less in case of regeneration. It is evident from the discussion that EV lateral dynamics has an impact on battery energy consumption. Therefore, the next section analyzes significance of this impact on energy consumption of an EV driving over long range.
§ EFFECT OF LATERAL DYNAMICS ON EV RANGE
To analyze the significance of the effect of lateral dynamics on energy consumption, an EV model with a given driver behavior is simulated to track standard drive cycles in two different cases. The two cases are: first, a longitudinal maneuver, and second, a longitudinal maneuver with a lane change approximately every 1 km. It can be observed from Fig. <ref> that the energy consumed for the HWFET drive cycle in the latter case is higher than in the former for an EV without regeneration braking. The corresponding motor torque profile has a large peak at every lane change, indicating an increase in energy demand for the EV maneuver. Regeneration braking provides the EV with the facility of energy recuperation. Therefore, the energy consumed in the presence of regeneration braking is less than without it. The opportunity for recuperation in the latter case is less than in the former: since cornering forces already assist in braking, the amount of negative torque required for braking is smaller. Similar observations are obtained for other drive cycles such as NEDC and FTP-75.
Various parameters relating energy consumption corresponding to different drive cycles to EV performance are included in Table <ref>-<ref>. Driving range of EV is obtained through extrapolation of consumed battery energy over repetition of drive cycle for a 54.28 kWh battery. It can be observed that there is a significant decrease in driving range when lane changes are included in maneuver. Thus, neglecting lateral dynamics gives an overestimated range.
§ CONCLUSION
An analysis of energy flow in an EV is carried out, and it is observed that there is an increase in demanded energy to achieve a specific longitudinal profile in the presence of a lateral maneuver. It is revealed that state-of-the-art energy consumption models underestimate the energy consumption of an EV during a maneuver and that lateral dynamics has a significant impact on EV performance in real-world driving situations. Therefore, it is essential to include its effect in energy consumption models for different applications such as eco-driving, range prediction, etc. Further analysis to incorporate the effect of various environmental factors such as road friction, slope and wind speed during the planar maneuver of an EV remains a promising direction for future research.
ieeetr
|
http://arxiv.org/abs/2307.01557v1
|
20230704082139
|
Separated RoadTopoFormer
|
[
"Mingjie Lu",
"Yuanxian Huang",
"Ji Liu",
"Jinzhang Peng",
"Lu Tian",
"Ashish Sirasao"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Separated RoadTopoFormer
Mingjie Lu[1], Yuanxian Huang[1], Ji Liu, Jinzhang Peng, Lu Tian, Ashish Sirasao
Advanced Micro Devices, Inc., Beijing, China
(Mingjie.Lu, YuanXian.Huang, Ji.Liu, jinz.peng, lu.tian, ashish.sirasao)@amd.com
===========================================================================================================================================================================================================================
[1]These authors contributed equally to this work.
Understanding driving scenarios is crucial to realizing autonomous driving. Previous works such as map learning and BEV lane detection neglect the connection relationships between lane instances, and traffic element detection tasks usually neglect the relationship with lane lines. To address these issues, a task is presented that includes four sub-tasks: the detection of traffic elements, the detection of lane centerlines, reasoning connection relationships among lanes, and reasoning assignment relationships between lanes and traffic elements. We present Separated RoadTopoFormer to tackle these issues; it is an end-to-end framework that detects lane centerlines and traffic elements while reasoning about the relationships among them. We optimize each module separately to prevent interaction with each other and aggregate them together with little finetuning. For the two detection heads, we adopt a DETR-like architecture to detect objects, and for the relationship heads, we concatenate two instance features from the front detectors and feed them to a classifier to obtain the relationship probability. Our final submission achieves 0.445 OLS, which is competitive in both sub-task and combined scores.
§ INTRODUCTION
In recent years, the availability of public large-scale datasets and benchmarks has greatly facilitated autonomous driving research. Many datasets <cit.> focus only on sensing visible lane lines to keep vehicles on track, or only on detecting traffic signals to obtain traffic information.
However, the separation of tasks leads to a limited understanding of driving scenarios.
For example, a driving vehicle will be confused when it sees a green light but the lane it follows is controlled by another red light.
Based on this limitation, a key aspect of this task <cit.> is to understand the complex driving environment, which is a prerequisite for making reasonable decisions.
On the one hand, this task wants to establish a strong association between traffic elements and lanes. On the other hand, understanding the separations between neighboring lanes is also necessary for guiding the vehicle driving on the desired trajectory. Both topology reasoning tasks are extremely challenging.
This task can be divided into two parts: scene structure perception and reasoning. Scene structure perception aims to find out what and where the traffic elements and lanes are, while reasoning aims to understand the relationships between them. The latter depends heavily on the former, but not the reverse. We therefore optimize each module separately to prevent interactions during training, and finally integrate them by finetuning.
Experiments confirm that this strategy works. We have also made other experimental improvements; please refer to Section <ref>.
§ DATASETS
Road Genome, also known as OpenLane-V2 <cit.>, is the first dataset focusing on topology reasoning in the autonomous driving area. It contains 2.1M instance-level annotations and 1.9M positive topology relationships. This challenge is based on subset_A, which contains 22477 training frames, 4806 val frames, and 4816 test frames. Each frame contains 6 surrounding images with resolution 1550 × 2048 and a front-view image with resolution 2048 × 1550. The final metric is OpenLane-V2 Score (OLS), which is the average of various metrics from different subtasks and is defined to describe the overall performance of the primary task: OLS = 1/4[DET_l + DET_t + f(TOP_ll) + f(TOP_lt)], where f is a scaling function that balances the scale of different metrics.
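For reference, the combined score can be computed in a few lines of Python. The scaling function f is only described here as balancing metric scales, so the square root used below is an assumption, and the sub-scores in the example are made up.

```python
import math

def ols(det_l, det_t, top_ll, top_lt, f=math.sqrt):
    """OpenLane-V2 Score: average of the four sub-task metrics, with the topology
    scores passed through a scaling function f (assumed here to be the square root)."""
    return 0.25 * (det_l + det_t + f(top_ll) + f(top_lt))

print(ols(det_l=0.30, det_t=0.60, top_ll=0.05, top_lt=0.20))  # illustrative sub-scores only
```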
§ METHODS
§.§ Baseline
The official baseline <cit.> provides a simple and easy-to-follow framework that
generates two feature maps from different views, one in BEV (bird's-eye view) and one in PV (perspective view). The former is used to predict lane centerlines (LCs) and the latter is used for traffic element (TE) prediction. The two detection heads adopt similar DETR-like architectures. Two subsequent relationship prediction modules establish pairwise relationships, producing an L × L lane-lane relationship matrix and an L × T lane-traffic-element relationship matrix, where L and T are the numbers of LCs and TEs in the detection results; two MLPs then predict the logits of the two kinds of relationships, respectively.
§.§ Architecture
The design of our algorithm follows Road Genome <cit.>. However, unlike Road Genome, our TE branch and LC branch do not share a common backbone as demonstrated in Figure <ref>. Instead, each branch has an independent backbone network to extract features. This modification allows for independent feature learning and data augmentation for two detection tasks.
Lane centerline detection. Given multi-view images, we first use a shared Swin-small <cit.> backbone to extract features from each view's image. Then, we apply BEVFormer<cit.> to transform the multi-perspective view features into a unified BEV feature. Later, a Deformable DETR-like<cit.> transformer is utilized to extract query-wise information of the 3D lane centerlines based on the BEV feature. Finally, each output query is passed through an LC head to predict the confidence of a line and the coordination of 11 equally spaced 3D points in the centerline. The coordination of each 3D point is normalized according to the detection range.
Traffic element detection. We utilize a separated and independent Swin-small backbone to extract the perspective view feature from the front center image. DINO<cit.> head is employed to detect 2D traffic elements.
Topology prediction. We follow the design of topology prediction in STSU <cit.>. The queries of every pair of objects are concatenated; the concatenated feature passes through an MLP and a sigmoid layer and outputs a relationship confidence. Two objects are considered to have a topology relationship only if the confidence is greater than 0.5. Instead of considering all queries as the baseline does, we only consider queries whose detection confidence exceeds a predefined threshold.
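A minimal PyTorch-style sketch of such a relationship head is shown below; the layer sizes and names are illustrative assumptions, not the actual implementation.

```python
import torch
import torch.nn as nn

class RelationshipHead(nn.Module):
    """Pairwise topology classifier: concatenate two instance queries, score with an MLP."""
    def __init__(self, dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, q_a, q_b):
        # q_a: (L, D) lane queries; q_b: (T, D) lane or traffic-element queries
        pairs = torch.cat([
            q_a.unsqueeze(1).expand(-1, q_b.size(0), -1),
            q_b.unsqueeze(0).expand(q_a.size(0), -1, -1),
        ], dim=-1)                                            # (L, T, 2D) concatenated pairs
        return torch.sigmoid(self.mlp(pairs)).squeeze(-1)     # (L, T) relationship confidences

# Detections are pre-filtered by a confidence threshold; relations are kept if confidence > 0.5.
```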
§.§ Bells and whistles
Hierarchical query. For 3D centerline detection, the locations of the points are critical for the final performance. We design two kinds of queries, point queries and instance queries, to give the queries fed to the transformer decoder better representation ability. Point queries Q_p ∈ℝ^N_p × D and instance queries Q_I ∈ℝ^N × D are first passed through a self-attention module to model the relationships between queries, where N_p is the number of point queries and is set to 11 to match the number of output points, N is the maximum number of centerlines, and D is the embedding dimension. To aggregate the features of both kinds of queries, a point pooling module produces a global feature across the point queries; we use a sum operation for pooling. Finally, the LC query Q_LC is obtained by adding the pooled point feature to each instance query:
Q_pooled = PointPooling(Q_p) = ∑_i=1^N_pQ_p,i, Q_p,i∈ℝ^D
Q_LC,i = Q_I,i + Q_pooled
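The construction above can be sketched in PyTorch as follows; the maximum number of centerlines and the embedding dimension are assumptions, and the preceding self-attention step is omitted for brevity.

```python
import torch
import torch.nn as nn

N_p, N, D = 11, 300, 256                     # point queries; max centerlines and embed dim are assumed
point_queries = nn.Parameter(torch.randn(N_p, D))     # Q_p
instance_queries = nn.Parameter(torch.randn(N, D))    # Q_I

q_pooled = point_queries.sum(dim=0)          # PointPooling: sum over the 11 point queries
lc_queries = instance_queries + q_pooled     # broadcast add -> (N, D) LC queries for the decoder
```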
Intersection-sensitive classification head. The OpenLane-V2 <cit.> dataset contains two kinds of centerline, normal lane centerline and connecting line in intersections, which are evidently different. Unlike normal lane centerlines with obvious
local texture features, connecting lines in the intersection are virtual lines, which are used to describe the relationship among normal lane centerlines. Therefore, we distinguish these two categories in the classification head in the LC head. As shown in Table <ref>, this simple strategy improves the DET_l metric by 2.43%.
Swin backbone and input resolution. Because the input image size of the baseline <cit.> is the original resolution of the image, which is 1550x2048, the batch size can only be set to one on every GPU when training the whole model. However, the backbone of the baseline is ResNet50 <cit.> and utilizes BatchNorm, which is inappropriate when the batch size is set to one. Therefore, we utilize Swin-small <cit.> as our backbone for both LC branch and TE branch, which apply LayerNorm instead of BatchNorm. Besides, to speed up the training and save device memory, we resize the multiview images to 775x1024. For the front view image, we keep its size as its original resolution (2048x1550), because its overhead is affordable. The backbones in both two branches are pre-trained in ImageNet1K <cit.>.
11 points representation. Instead of representing the 3D line as five Bezier control points like STSU <cit.>, we directly model the 3D line as 11 equally spaced keypoints in its skeleton. We found this simple representation is surprisingly better than the Bezier curve. Results are shown in Table <ref>.
DINO TE detector. We use the DINO <cit.> detector head, with 900 queries, instead of the baseline's original Deformable DETR head. As shown in Table <ref>, DINO brings about a 2% gain for traffic element detection.
Geometric clues for relationship prediction between centerlines. The topological relationship between centerlines is not only related to semantic information but also associated with their geometric locations. If the endpoints of the centerlines of two lanes are very close, then there is a high probability that they are topologically related. Therefore, we introduce geometric clues for relationship prediction between centerlines in two aspects. First, we concatenate the LC query with its start point and end point which are predicted by the LC regression head. Second, any two lane centerlines whose start and end points are less than three meters apart will be considered to have a topological relationship, even if their relationship confidence is less than 0.5. Results are shown in Table <ref>.
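One way to realize this geometric refinement is sketched below; reading the three-metre rule as an end-point-to-start-point test is our interpretation of the description, and the tensor shapes are assumptions.

```python
import torch

def refine_lane_topology(rel_conf, start_pts, end_pts, dist_thresh=3.0):
    """Combine MLP confidences with the geometric rule: if the end point of lane i is
    within dist_thresh metres of the start point of lane j, force a connection."""
    # rel_conf: (L, L) confidences; start_pts, end_pts: (L, 3) predicted 3D endpoints in metres
    d = torch.cdist(end_pts, start_pts)           # (L, L) end-to-start distances
    adjacency = (rel_conf > 0.5) | (d < dist_thresh)
    adjacency.fill_diagonal_(False)               # a lane is not its own successor
    return adjacency
```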
Decoupled training and integrated finetuning. Instead of training all modules of the whole network simultaneously, we decouple different modules and train only one of them each time. Specifically, we first independently train the LC module and TE module. Then, two relationship heads are trained with frozen backbones and detection heads. The decoupled training strategy helps us quickly verify an improvement idea for a single module. Meanwhile, this strategy enables each module to perform its own duties and avoids the impact between different tasks. After all modules are trained independently, we finetune the whole network with a smaller learning rate. During finetuning, only four heads are unfrozen, including the LC head, TE head, and two relationship heads. In the decoupled training set, we follow the training setting in Road Genome <cit.>, including the optimizer, the learning rate update schedule, and so on. The learning rate will be adjusted proportionally with the batch size. In the finetuning stage, we set a smaller learning rate, which is a quarter of the decoupled training stage.
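The freezing step of this decoupled schedule might look roughly as follows in PyTorch; the parameter-name matching is purely illustrative and does not reflect the repository's actual module names.

```python
import torch

def freeze_for_relationship_training(model: torch.nn.Module):
    """Freeze backbones and detection heads; leave only the relationship heads trainable."""
    trainable = []
    for name, p in model.named_parameters():
        p.requires_grad = "rel_head" in name      # hypothetical naming convention
        if p.requires_grad:
            trainable.append(p)
    return trainable

# optimizer = torch.optim.AdamW(freeze_for_relationship_training(model), lr=base_lr)
# In the final finetuning stage the four heads are unfrozen and the learning rate is base_lr / 4.
```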
§ FINAL RESULTS
For the final submission, we apply all the aforementioned strategies. The performance on the OpenLane-V2 validation and test sets is reported in Tables <ref> and <ref>, respectively.
|
http://arxiv.org/abs/2307.01730v1
|
20230704135847
|
Precise characterization of nanometer-scale systems using interferometric scattering microscopy and Bayesian analysis
|
[
"Xander M. de Wit",
"Amelia W. Paine",
"Caroline Martin",
"Aaron M. Goldfain",
"Rees F. Garmann",
"Vinothan N. Manoharan"
] |
physics.optics
|
[
"physics.optics"
] |
Precise characterization of nanometer-scale systems using interferometric scattering microscopy and Bayesian analysis
Xander M. de Wit, Amelia W. Paine, Caroline Martin, Aaron M. Goldfain, Rees F. Garmann, Vinothan N. Manoharan
§ INTRODUCTION
Interferometric scattering microscopy (iSCAT) takes advantage of the
interference between elastically scattered light and a weak reference
beam to detect small particles such as biomolecules and
nanospheres <cit.>. The principal advantage of this technique
over fluorescent-based imaging is that it is label-free. It therefore
entails little risk of photobleaching or heating, allowing samples to be
imaged at high frame-rates over long times <cit.>. Furthermore, an interferometric image
encodes information about the size and three-dimensional (3D) position
of the particle that produced it, making iSCAT useful for sensitive,
nanoscale measurements, such as characterizing the mass distribution of
molecular complexes <cit.>, the polymerization of protein
filaments <cit.>, the rates of DNA ejection from
bacteriophages <cit.>, and the kinetics of viral
self-assembly <cit.>.
These measurements rely on algorithms that infer sizes and positions of
nanoparticles or nanoassemblies. The most frequently used algorithms
extract this information primarily from the central spot of the iSCAT
image. For example, quantifying the interferometric contrast of the
central spot of an iSCAT image of a biomolecular assembly yields a
measurement of its mass <cit.>. Also, fitting a Gaussian
function to the intensity profile of the center of an iSCAT image of a
nanoparticle yields a measurement of its two-dimensional (2D) position
to nanometer-scale precision <cit.>. However, such algorithms
discard information such as the interference fringes outside the central
spot, which contain additional information about size and 3D position.
Furthermore, these methods cannot easily make use of prior information
– for example, the expected particle size – and do not easily account
for correlations – for example, between size and position – making it
difficult to quantify uncertainties on the measurements.
An alternative method is to model both the scattering and interference
and fit that model to the data. This forward modeling approach has been
used to analyze interferometric images from an in-line holographic
microscope, which has a different configuration than the iSCAT
microscope but operates on similar principles. For example, fitting a
forward model of Lorenz-Mie scattering, interference, and propagation to
an in-line hologram enables precise characterization and 3D tracking of
microscopic particles <cit.>, with quantified
uncertainties <cit.>. Recently, the forward modeling approach has been
extended to iSCAT. Mahmoodabadi and coworkers <cit.>
developed a forward model of the interferometric point spread function
(iPSF), including the effects of an objective lens, and fit this model
to iSCAT data to extract 3D trajectories of gold nanoparticles. Modeling
interferometric images as point-spread functions is a reasonable
approximation here because the particles are much smaller than the
wavelength of light. More recently, Kashkanova and
coworkers <cit.> applied a Lorenz-Mie scattering
solution, applicable to larger particles, to quantify the intensities of
processed iSCAT images, and He and coworkers <cit.> developed a
general approach based on numerical electromagnetic simulations.
Our aim is to develop a forward modeling approach that yields precise
measurements of specimen size, mass, and 3D position, makes use of
information in both the central spot and surrounding fringes, readily
incorporates prior information, and accurately quantifies uncertainties.
To this end, we use a Bayesian parameter-estimation framework. We infer
the posterior probability density (or “posterior”) of the parameters
in the forward model given the data and prior information, which is
specified as prior probability distributions (or “priors”) on the
parameters. The full posterior describes more than just the best-fit
values; it can also be used to infer the correlations between parameters
and the marginalized uncertainty of each parameter, which accounts for
these correlations.
To implement this approach, we must develop a computationally simple
forward model that can be used with Markov chain Monte Carlo (MCMC)
sampling methods, the typical approach to calculating the posterior in a
Bayesian framework <cit.>. Sampling the posterior with MCMC methods
requires thousands of model evaluations. We must therefore make physical
approximations to limit the computational complexity of the model, so
that sampling takes a reasonable time. Furthermore, the model must be
expressed so as to allow efficient sampling in a multi-dimensional
parameter space. The most efficient MCMC methods are based on Hamiltonian Monte
Carlo (HMC) algorithms <cit.>, which rely on automatic
differentiation to calculate gradients <cit.>. To
leverage these algorithms, we must express our model using a modern,
computational graph library.
To enable HMC-based analysis of iSCAT data, we focus on the Rayleigh
scattering regime, applicable to particles much smaller than the
incident wavelength. This approximation allows us to use the iPSF to
model the iSCAT image, as previously shown by Mahmoodabadi and
coworkers <cit.>. Here, we re-parameterize the
forward model of the iPSF so that it can be used with a computational
graph library and HMC sampler, both implemented in the Python package
<cit.>. Furthermore, we use a much simpler
model for the optical train of the microscope, one that ignores effects
of the objective other than magnification. This choice reduces the
computational cost of each model evaluation. As we show, our approach
can efficiently estimate parameters from iSCAT data along with their
correlations and uncertainties, even when the posterior is multi-modal.
We demonstrate several applications of this approach, including tracking
diffusing nanoparticles in 3D, directly inferring diffusion coefficients
from position data, and characterizing the ejection of DNA from a lambda
phage, a virus that infects E. coli bacteria.
§ MODEL OF THE IPSF
In iSCAT, coherent light illuminates a sample through an objective lens.
The light is scattered by the particles in the sample, and a portion of
the incident beam 𝐄_inc is reflected by the
interface of the coverslip (Fig. <ref>a, b). The scattered
field (𝐄_sca) and reflected field
(𝐄_ref) return through the objective and interfere
to form an image. For a single subwavelength particle, the resulting
interferometric image is a set of concentric bright and dark rings
(Fig. <ref>c) that can be modeled with the iPSF.
§.§ The simplified model
For simplicity, we assume that the interference pattern is translated
one-to-one from the focal plane of the objective lens onto the camera,
an assumption widely used in analysis of data from in-line holographic
microscopy <cit.>. This approximation neglects any
aberrations induced by the coverslip or optical train and assumes the
particle is above the focal plane. Nonetheless, analysis of in-line
holograms shows that the approximation is reasonable if the particle is
at least a few micrometers above the focal plane and the objective has a
high numerical aperture <cit.>. For our purposes, this
approximation enables a more efficient parameterization that allows us
to avoid computationally expensive numerical integrations or
special-function evaluations.
With this approximation, the intensity profile I(x,y) of the
interference pattern is
I(x,y) = |𝐄_ref(z_f)+𝐄_sca(x,y,z_f)|^2
= E_ref^2 + E_sca^2 + 2 E_ref E_scacosϕ_dif.
Here, the coordinate system (x,y,z) has the z-axis aligned with the
optical axis, where z=0 is the top of the coverslip, and z_f is the
position of the focal plane. ϕ_dif is the phase difference
between 𝐄_ref and 𝐄_sca (we have
omitted the arguments (x,y,z_f) for brevity). We normalize by
E_ref^2 and subtract the contribution of the reference beam
to obtain the iPSF as
iPSF≡E_sca^2 + 2 E_ref E_scacosϕ_dif/E_ref^2≈2 E_ref E_scacosϕ_dif/E_ref^2.
We neglect the term E_sca^2 because
E_sca≪ E_ref for weakly scattering systems.
To evaluate this expression at the focal plane, we first consider a
reference beam aligned with the optical axis with a constant intensity
profile. Though the beam never reaches the focal plane physically, we
can treat it as if it originates at the focal plane by including an
additional phase shift of -n_m k z_f, where n_m is the refractive
index of the medium and k=2π/λ is the vacuum
wavevector.
Fresnel reflection from the refractive index mismatch at the
coverslip-sample interface induces an additional phase shift
ϕ_ref. We thus find that at the focal plane
𝐄_ref = E_ref e^i(ϕ_ref - n_m k z_f).
In the Rayleigh approximation, the scattered field from a sphere at a
distance r in the far-field limit is
𝐄_sca(r) =
𝐄_inc2√(2)π^2 α/(λ/n_m)^2 re^i n_m k r√(1+cos^2θ),
where 𝐄_inc is the incident light, α is the
particle polarizability relative to the medium, λ is the
incident wavelength, and θ is the scattering
angle <cit.>. The polarizability is proportional to the
volume of the particle. For a spherical particle,
α= a^3 (n_p^2 - n_m^2/n_p^2+2n_m^2),
where a is the particle radius and n_p is its refractive index.
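For later reference, the polarizability can be evaluated with a one-line helper; the example values are those quoted later in the text for a 120 nm polystyrene sphere in water.

```python
def polarizability(a, n_p, n_m):
    """Polarizability of a small sphere relative to the medium (complex indices allowed)."""
    return a**3 * (n_p**2 - n_m**2) / (n_p**2 + 2 * n_m**2)

alpha = polarizability(a=60e-9, n_p=1.586, n_m=1.33)   # 120 nm polystyrene sphere in water
```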
Because α can be complex, it can lead to an additional phase
difference between the incident and scattered wave. The scattered field
at the focal plane can be evaluated from (<ref>) with r=r_p
≡√((x-x_0)^2+(y-y_0)^2+(z_p-z_f)^2), the distance between the
position on the focal plane (x,y,z_f) and the particle position
(x_0,y_0,z_p). The scattering angle is then cosθ=(z_p-z_f)/r.
Finally, by accounting for the additional phase factor of e^i n_m k
z_p in the incident beam, corresponding to the optical path length
from the coverslip to the particle, we obtain the following expressions
for the terms in (<ref>):
E_ref = E_ref,
E_sca = E_inc2√(2)π^2 |α|/(λ/n_m)^2 r_p√(1+((z_p-z_f)/r_p)^2),
ϕ_dif = (ϕ_ref - n_m k z_f) - (n_m k z_p + arg(α) + n_m k r_p)
§.§ Parameterization
Several parameters in (<ref>) have
equivalent effects on the iPSF and cannot be independently inferred. We
reparameterize to avoid such degeneracies and minimize the correlation
between parameters, which is computationally beneficial for MCMC. We
define the following parameters: the height of the particle with
respect to the focal plane z_p' ≡ z_p - z_f, a lumped amplitude
Ê_0 ≡E_inc/E_ref4√(2)π^3|α|/(λ/n_m)^3,
and a lumped phase
ϕ_0 ≡ -ϕ_ref + 2 n_m k z_f + arg(α)
Because k or n_m can be precisely measured, we do not include them
as parameters. With the new parameterization, (<ref>) can be
expressed in terms of the following quantities:
E_ref = E_ref,
E_sca = E_refÊ_0 1/n_m k r_p√(1+(z_p'/r_p)^2),
ϕ_dif = -(ϕ_0 + n_m k z_p' + n_m k r_p),
with r_p ≡√((x-x_0)^2+(y-y_0)^2+z_p'^2). We substitute these
quantities into (<ref>) and eliminate E_ref
to arrive at
iPSF = 2 Ê_0 1/n_m k r_p√(1+(z_p'/r_p)^2)cos[-(ϕ_0 + n_m k z_p' + n_m k r_p)].
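A minimal NumPy sketch of this simplified iPSF, assuming an aligned and uniform reference beam, is given below; the wavelength, refractive index, and pixel size are taken from elsewhere in the text, while the remaining parameter values are arbitrary illustrations.

```python
import numpy as np

def ipsf(x, y, x0, y0, zp, E0, phi0, wavelength=635e-9, n_m=1.33):
    """Simplified iPSF; x, y are focal-plane coordinates in metres, zp is z_p'."""
    k = 2 * np.pi / wavelength
    rp = np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + zp ** 2)
    amplitude = 2 * E0 / (n_m * k * rp) * np.sqrt(1 + (zp / rp) ** 2)
    phase = -(phi0 + n_m * k * zp + n_m * k * rp)
    return amplitude * np.cos(phase)

# Evaluate on a 100x100 pixel grid with the calibrated 73 nm pixel size.
px = 73e-9
xs = (np.arange(100) - 50) * px
X, Y = np.meshgrid(xs, xs)
img = ipsf(X, Y, x0=0.0, y0=0.0, zp=2e-6, E0=0.3, phi0=0.0)
```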
We find good agreement between this model and the more complex model
developed by Mahmoodabadi and coworkers <cit.>
(Fig. <ref>;
compare with Fig. 3b of Ref. , below the
dashed line).
§.§ Beam misalignment
In many iSCAT experiments, the reference beam is purposely misaligned
with the optical axis to avoid unwanted reflections. The deformation of
the interference pattern caused by misalignment would introduce a
systematic error in the inferred parameters if it were not modeled.
We model the misalignment by considering the spatially varying phase
shift at the focal plane, which we parameterize by two angles:
θ_b, the angle between the beam and the optical axis
(Fig. <ref>a), and φ_b, the rotation of
the beam about the optical axis. We first project each position on the
focal plane (x, y) to a distance along the unit vector
(cosφ_b, sinφ_b):
r_φ = x cosφ_b + y sinφ_b.
The spatially varying phase shift ϕ̂_ma is related to
the perpendicular distance from the equiphase line
r_φsinθ_b (Fig. <ref>a) as
ϕ̂_ma = n_m k (x cosφ_b + y sinφ_b) sinθ_b.
There is also a phase shift of the beam incident on the particle at
(x_0, y_0), such that the relative phase shift is
ϕ_ma = n_m k [(x-x_0) cosφ_b + (y-y_0) sinφ_b] sinθ_b.
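The misalignment phase can be added to the sketch above as an extra term in the phase difference; the sign convention used when combining it is an assumption.

```python
import numpy as np

def misalignment_phase(x, y, x0, y0, theta_b, phi_b, wavelength=635e-9, n_m=1.33):
    """Relative phase shift phi_ma; x, y, x0, y0 in metres, angles in radians."""
    k = 2 * np.pi / wavelength
    return n_m * k * ((x - x0) * np.cos(phi_b) + (y - y0) * np.sin(phi_b)) * np.sin(theta_b)

# With the values inferred in the text (reusing the pixel grid X, Y from the iPSF sketch):
# phi_ma = misalignment_phase(X, Y, 0.0, 0.0, np.deg2rad(5.6), np.deg2rad(52.0))
```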
Other than this phase shift, we assume that there is
no additional effect from the beam misalignment on the scattering angle,
an approximation that is valid if the detector subtends a sufficiently
small angle, as is the case here.
By including this phase shift in our model, we can calculate the iPSF
for any values of θ_b and φ_b, as shown in
Fig. <ref>b. Because these parameters should not
vary within a single experiment, they can either be inferred or directly
measured and then set as constant. Through inference, we find in our
experiments that θ_b≈ 5.6^∘ and
φ_b≈52^∘. The degree of asymmetry in the iPSF depends
on the degree of misalignment of the beam; when the beam is only
slightly misaligned, this effect becomes negligible.
§.§ Beam Gaussianity
Although a typical incident beam is spatially filtered, resulting in a
Gaussian profile, it is reasonable to approximate the beam profile as
uniform when the iPSF is much smaller than the beam width. With strongly
scattering particles, however, the extent of the iPSF can be comparable
to the beam size. This is the case for the larger polystyrene particles
(d_p∼100 nm) that we examine in some of our experiments, though not
for the lambda phage particles. Thus, we correct for beam Gaussianity
for analysis of polystyrene particles but approximate the beam as
uniform for analysis of the smaller phage (see Supplemental Material).
§ BAYESIAN INFERENCE FOR ISCAT
To estimate the free parameters of our model along with their
uncertainties, we first process the raw iSCAT images according to the
scheme in Ref. , then use a Bayesian MCMC
method to fit the model to the processed data. The processing algorithm
estimates the background from the image by filtering, subtracts off the
estimated background, and finally divides the image by the estimated
background. We crop the image to the region with visible fringes to
avoid fitting to areas where the signal-to-noise ratio is low. We model
the noise as independent and Gaussian for each pixel with constant
standard deviation σ_noise, which we include as an
additional free parameter in the model to better estimate the noise
level. When we estimate other parameters, we marginalize over the noise
parameter, incorporating its uncertainty into the uncertainties of the
parameters of interest.
A Bayesian framework requires explicit choices of prior probabilities
for each free parameter. We choose normal distributions for parameters
that can be positive or negative, such as position, and gamma
distributions for parameters that must be positive, such as scattering
amplitude. For the phase factor ϕ_0, we use a uniform distribution
from -π to π. For the misalignment angles, we use truncated
normal distributions to constrain the angles to the appropriate
quadrant. We find that fitting converges with relatively uninformative
priors on all parameters except for the horizontal position (x_0,y_0),
which must be well-constrained. We estimate the iSCAT image center
within about 100 nm, or 2 pixels, with a Hough transform <cit.> and use this estimate as the mean for the prior
on the horizontal position.
The full statistical model for the iPSF_data(x,y) is
Ê_0 ∼Gamma(μ_Ê_0,σ_Ê_0),
ϕ_0 ∼Uniform(-π,π),
x_0 ∼Normal(μ_x_0,σ_x_0),
y_0 ∼Normal(μ_y_0,σ_y_0),
z_p' ∼Gamma(μ_z_p',σ_z_p'),
θ_b ∼Truncated Normal(μ_θ_b,σ_θ_b, 0≤θ_b≤15^∘),
φ_b ∼Truncated Normal(μ_φ_b,σ_φ_b, 0≤φ_b≤90^∘),
σ_noise ∼Gamma(μ_σ_noise,σ_σ_noise),
iPSF_data(x,y) ∼Normal(μ_iPSF(x,y), σ_noise),
where μ_iPSF(x,y) is the iPSF model, (<ref>).
The values used for the priors are provided in Supplemental Table S1.
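A hedged sketch of this model, written against a PyMC-style API (the package actually used is cited but not named in this excerpt, so PyMC is an assumption), is shown below. All prior means and widths are placeholders for the values in Supplemental Table S1, and the sign-splitting treatment of ϕ_0 described in the next subsection is omitted.

```python
import numpy as np
import pymc as pm

data = np.load("ipsf_frame.npy")        # processed iSCAT frame (placeholder path)
x0_hough = y0_hough = 0.0               # centre estimate from the Hough transform (placeholder)
wavelength, n_m = 635e-9, 1.33
k = 2 * np.pi / wavelength
px = 73e-9
xs = (np.arange(data.shape[0]) - data.shape[0] // 2) * px
X, Y = np.meshgrid(xs, xs)

with pm.Model() as ipsf_model:
    E0 = pm.Gamma("E0", mu=0.5, sigma=0.5)
    phi0 = pm.Uniform("phi0", lower=-np.pi, upper=np.pi)
    x0 = pm.Normal("x0", mu=x0_hough, sigma=100e-9)
    y0 = pm.Normal("y0", mu=y0_hough, sigma=100e-9)
    zp = pm.Gamma("zp", mu=2e-6, sigma=1e-6)
    theta_b = pm.TruncatedNormal("theta_b", mu=0.1, sigma=0.1, lower=0.0, upper=np.deg2rad(15))
    phi_b = pm.TruncatedNormal("phi_b", mu=0.9, sigma=0.5, lower=0.0, upper=np.deg2rad(90))
    noise = pm.Gamma("noise", mu=0.01, sigma=0.01)

    rp = pm.math.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + zp ** 2)
    phi_ma = n_m * k * ((X - x0) * pm.math.cos(phi_b)
                        + (Y - y0) * pm.math.sin(phi_b)) * pm.math.sin(theta_b)
    # Misalignment phase added to the phase difference; the sign convention is assumed.
    mu = (2 * E0 / (n_m * k * rp) * pm.math.sqrt(1 + (zp / rp) ** 2)
          * pm.math.cos(-(phi0 + n_m * k * zp + n_m * k * rp) + phi_ma))

    pm.Normal("iPSF_obs", mu=mu, sigma=noise, observed=data)
    trace = pm.sample()                 # NUTS by default
```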
§.§ Hamiltonian Monte Carlo
To fit this model to data, we use MCMC techniques from the Python
package <cit.> which leverages the tensor-based
library , based on
<cit.>, to calculate
gradients. We primarily employ a No-U-turn (NUTS)
sampler <cit.>, which implements an efficient HMC
technique <cit.>. We use NUTS whenever the phase factor
ϕ_0 is a free parameter. But because the gradient is undefined at
ϕ_0 = ±π, we separately fit the absolute value of ϕ_0 as a
continuous parameter (0 ≤ϕ_0 ≤π) and fit its sign as a
Bernoulli parameter (0 or 1), so that the sampler can move through
the cut-off at 0 or π by flipping the sign. When ϕ_0 is
fixed, however, such as in particle tracking experiments with a
stationary focal plane, the posterior becomes multi-modal with a local
maximum in z_p' every half wavelength. In such posteriors, it is
possible for the NUTS sampler to become stuck in a local maximum and
fail to converge. In these cases, we use a Sequential Monte-Carlo (SMC)
sampler <cit.>, which employs a tempered scheme to efficiently
explore multi-modal posteriors. We find that the SMC sampler
consistently finds the global maximum of the posterior (see
Sec. <ref>).
The computational runtime of the MCMC method depends primarily on the
size of the iSCAT image. For a 100×100 pixel image of a 120 nm
polystyrene particle, for example, fitting on a single modern CPU
(2.2 GHz Intel Core i7) takes approximately 4 min. For a 17×17
pixel image of a 60 nm lambda phage, fitting takes less than 10 s.
§ VALIDATION
§.§ Fitting data from a single particle
To validate the method, we fit our model to iSCAT images of a 120 nm
polystyrene sphere immobilized on a coverslip (see Appendix for
experimental methods). The best-fit image matches the recorded data well
(Fig. <ref>a), although there is a discrepancy between
the model and data for the intensity of the central fringe
(Fig. <ref>b), which likely arises because the
point-scatterer approximation becomes less accurate for particles of
this size.
The sampling approach yields a detailed view of the uncertainties and
the correlations of the parameter estimates
(Fig. <ref>c), where correlations between parameters
manifest as diagonal joint distributions in the pair plots. We find that
the phase ϕ_0 and the axial position z'_p are strongly correlated
with each other and, to a lesser degree, with the scattering amplitude
Ê_0. We expect some correlation because changes in axial
position affect the amplitude of the scattered wave reaching the
detector as well as the optical path length between the scattered and
reference beam, which affects the phase. These parameters are not
completely degenerate because the axial position also affects the
distance between fringes and their relative amplitudes. Thus the axial
position can be independently inferred from the phase and amplitude,
though its uncertainty is affected by correlations with these variables.
We also note some correlation between the misalignment angles and the
horizontal position of the particle, which arises because both affect
the location of the central fringe on the detector.
By marginalizing the posterior over all parameters except those of
interest, we can quantify both the precision and accuracy of the
technique. To quantify the localization precision, we examine the widths
of the marginal distributions for x, y, and z'_p (plots along the
diagonal in Fig. <ref>c). We find an uncertainty of less
than 10 nm in both the lateral and axial directions. To quantify the
accuracy of the amplitude measurement, we examine the marginal
distribution for Ê_0. Following (<ref>) and
(<ref>), we estimate Ê_0≈ 0.7, assuming Fresnel
reflection at the coverslip and n_m = 1.33, n_p = 1.586, λ =
635 nm, and a=60 nm. The value inferred from the data is Ê_0 =
0.303 ± 0.004. The agreement is reasonable, considering the
assumptions involved. In practice, one measures the particle size from
the inferred amplitude by calibrating the amplitude against a particle
of known size. We compare to a calculated value instead of a calibrated
one because we are interested in assessing the validity of the results.
§.§ Fitting data at varying focal-plane location
We fit the model to data from the same immobilized 120 nm polystyrene
sphere across a large range of axial distances by translating the
objective upward to move the focal plane at a rate of 10 nm per frame
(Supplemental Video 1). The resulting fits match the data well
(Fig. <ref>), though the data contain some
additional modulation that might be due to fringe noise from
out-of-focus particles in the sample.
We find that as we sweep the focus toward the particle, the inferred
distance between the particle and focal plane decreases linearly, as
expected (Fig. <ref>a). The absolute axial
position of the particle, obtained by adding the inferred distance to
the displacement of the focal plane, remains largely constant, also as
expected (Fig. <ref>b). We do see some axially
dependent fluctuations larger than the parameter uncertainty (inset of
Fig. <ref>b). This systematic error in the
axial position is about 40 nm over the total distance of 1800 nm, a 2%
variation. We see similar effects in the best-fit scattering amplitude
Ê_0 (Fig. <ref>c), which remains
largely constant across the focal sweep but includes some variations
with relative amplitude of a few percent, larger than the uncertainty.
In both cases, the systematic errors likely arise from unmodeled optical
effects, such as Mie scattering or additional effects of the objective
lens, both of which could contribute to the variation with z. When the
particle is at least three diameters (here about 360 nm) from the
focal plane, however, we obtain consistent and reasonable parameter
estimates with systematic errors of only a few percent and
non-systematic uncertainties that are much smaller.
§ APPLICATIONS
§.§ 3D particle tracking
Recent work has focused on using iSCAT to track small particles in three
dimensions. The first studies used only the central contrast of the
fringe pattern to axially track the particle <cit.>
while later studies analyzed the entire fringe pattern <cit.>. Here we use both the central spot
and surrounding fringes to track a nanoparticle in 3D. We use the
efficient Bayesian inference framework described above, which allows us
to quantify the uncertainty on the particle position in all three
dimensions. This uncertainty can then be propagated in further analyses,
as we demonstrate by inferring the diffusion coefficient.
We track the Brownian motion of a freely diffusing polystyrene sphere
with a hydrodynamic diameter of 79 ± 14 nm, as characterized by the
manufacturer. The iSCAT images are shown in Supplemental Video 2. We fit
this data using our model with a correction for a Gaussian beam, but
with fixed misalignment angles obtained from calibration
(Sec. 4<ref>). We also fix the phase ϕ_0 to an arbitrary
value, since the focal plane is kept constant throughout the experiment
and we are interested only in the relative displacement of the particle
and not the absolute value of z_p'. Fixing the phase improves the
accuracy of the axial tracking but makes the posterior multimodal in
z_p', with peaks at every half wavelength. To efficiently explore the
multimodal landscape, we use the SMC sampler. We use the posterior
estimate for the 3D position in one frame as a prior for the next frame,
setting the standard deviation of our Gaussian prior for the position to
150 nm to account for particle movement.
From the best-fit (posterior mean) results, we can construct the
trajectory of the sphere in 3D space over a wide range of axial
positions (Fig. <ref>). The uncertainty in the
particle position, which we calculate from the standard deviation of the
marginalized posterior of each inferred coordinate, is 6 nm in both the
lateral and axial directions, on the order of one tenth of a pixel.
We then directly fit a model of a Gaussian random walk to this inferred
trajectory to infer the diffusion coefficient. We consider the
horizontal and vertical trajectories separately. Since the Brownian
motion is itself a statistical process, we can use a Bayesian framework
to infer the diffusion coefficient directly without calculating a
mean-square displacement <cit.>. The advantage of this approach
is that it allows the uncertainty at each point of the trajectory to be
easily propagated to quantify the final uncertainty in the diffusion
coefficient. Furthermore, we can infer the most credible trajectory
given the data, and the uncertainty on this trajectory.
The full description of the Gaussian random walk model is
D_xy ∼Normal(μ_D_xy, σ_D_xy),
D_z ∼Normal(μ_D_z,σ_D_z),
dx(t), dy(t) ∼Normal(0,√(2 D_xydt)),
dz(t) ∼Normal(0,√(2 D_z dt)),
[ x(t); y(t); z(t) ] = [ x_data(0); y_data(0); z_data(0) ] + ∑_t [ dx(t); dy(t); dz(t) ],
x_data(t) ∼Normal(x(t), σ_x(t)),
y_data(t) ∼Normal(y(t), σ_y(t)),
z_data(t) ∼Normal(z(t), σ_z(t)),
where x_data(t), y_data(t), z_data(t) are the
best-fit positions and σ_x(t), σ_y(t), σ_z(t) are the
uncertainties, as inferred previously. The displacement of the particle
is (dx(t), dy(t), dz(t)). Values for the
distribution parameters are provided in Supplemental Table S2. We use a
NUTS sampler to fit this model to the data.
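In the same PyMC-style notation (again an assumption about the package), the random-walk model can be sketched as follows. The y direction is handled identically to x and is omitted, prior widths are placeholders, and positivity of the diffusion coefficients is enforced here for numerical stability even though the text states plain normal priors.

```python
import numpy as np
import pymc as pm

dt = 0.003                              # frame interval in s (3 ms, from the appendix)
x_data = np.load("x_fit.npy")           # per-frame best-fit positions (placeholder paths)
z_data = np.load("z_fit.npy")
sx = np.load("x_sigma.npy")             # per-frame posterior standard deviations
sz = np.load("z_sigma.npy")
n_steps = len(x_data) - 1

with pm.Model() as diffusion_model:
    D_xy = pm.TruncatedNormal("D_xy", mu=5e-12, sigma=5e-12, lower=0)   # m^2/s
    D_z = pm.TruncatedNormal("D_z", mu=5e-12, sigma=5e-12, lower=0)

    dx = pm.Normal("dx", mu=0, sigma=pm.math.sqrt(2 * D_xy * dt), shape=n_steps)
    dz = pm.Normal("dz", mu=0, sigma=pm.math.sqrt(2 * D_z * dt), shape=n_steps)
    x = x_data[0] + pm.math.cumsum(dx)  # latent true trajectory
    z = z_data[0] + pm.math.cumsum(dz)

    # Observed positions are the latent ones blurred by the per-frame uncertainties.
    pm.Normal("x_obs", mu=x, sigma=sx[1:], observed=x_data[1:])
    pm.Normal("z_obs", mu=z, sigma=sz[1:], observed=z_data[1:])

    trace = pm.sample()
```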
We find D_xy = (5.6 ± 0.3) × 10^-12 m^2/s for the
horizontal directions and D_z = (4.8 ± 0.4) × 10^-12 m^2/s
for the vertical direction, with the full joint posterior shown in
Fig. <ref>. We attribute the discrepancy between the
horizontal and vertical diffusion coefficient to interactions between
the sphere and the coverslip. While the distributions of the horizontal
positions are largely Gaussian, as is expected for a random walk, the
distribution of the vertical position is skewed, peaking around z =
1500 nm (Fig. <ref>). The skew might arise from
electrostatic interactions with the coverslip, interactions that could
be precisely characterized by this method with larger data sets.
Because the horizontal diffusion coefficient should be less sensitive to
sedimentation or interactions with the coverslip, it provides a more
reliable estimate for the true free diffusion coefficient. Assuming
purely Stokesian drag, we calculate the particle diameter from the
horizontal diffusion coefficient to be 78 ± 5 nm, in good agreement
with the particle size provided by the manufacturer.
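The conversion from the horizontal diffusion coefficient to a hydrodynamic diameter follows the Stokes-Einstein relation; the temperature and viscosity below are assumptions, since neither is stated in this excerpt.

```python
import math

kB = 1.380649e-23      # J/K
T = 295.0              # K (assumed room temperature)
eta = 9.5e-4           # Pa*s, water near room temperature (assumed)
D_xy = 5.6e-12         # m^2/s, inferred above

diameter = kB * T / (3 * math.pi * eta * D_xy)   # Stokes-Einstein: D = kB*T / (3*pi*eta*d)
print(f"{diameter * 1e9:.0f} nm")                # ~80 nm, consistent with the quoted 78 +/- 5 nm
```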
§.§ Lambda phage viral DNA ejection
Another promising application of iSCAT is the investigation of the
dynamics of viruses <cit.>, such as the process by which a lambda
phage, a double-stranded DNA virus that infects E. coli, ejects
its encapsulated genetic material. Because the phage DNA is packaged at
high density inside its capsid, the intensity of the iSCAT image of a
phage before ejection is much higher than the intensity after ejection.
In previous work, Goldfain and coworkers used the change in intensity of
the central spot of the iSCAT image to measure the amount of DNA that
was ejected as a function of time <cit.>.
Here, we use our Bayesian method to quantitatively analyze the ejection
process. In contrast to the previous analysis, we use a fitting approach
rather than a processing approach, and we fit the model to all the visible
fringes rather than just the central spot. We analyze iSCAT images of a
single lambda phage immobilized on a coverslip (Supplemental Video 3).
We fit for the scattering amplitude of the particle, which should depend
linearly on the volume and mass of the DNA inside the
capsid <cit.>.
In this experimental setup, the phage is below the focal plane, rather
than above the focal plane as we assumed in our model development.
However, for an unaberrated system, the effect of relocating the
specimen from above to below the focal plane is simply an additional
Gouy phase shift <cit.>. The fitted z_p' (which we
constrain to be positive) becomes the distance below the focal plane,
and the Gouy phase shift is absorbed into the fitted phase ϕ_0, the
value of which is not of interest to us. We therefore use the same
simplified model to fit the data, but we interpret z_p' as the
distance below the focal plane. We also fix the misalignment angles to
their previously calibrated values, and we do not correct for the beam
Gaussianity, since the visible fringe pattern is small compared to the
width of the beam.
We find good agreement between the data and the best-fit iPSF from our
model (Fig. <ref>). This result shows that the model
can be used to analyze data of particles below the focal plane, so long
as the absolute value of the phase difference is not of interest, and
the particle is not so close to the focal plane that the effects of the
lens must be modeled.
By analyzing the full sequence of frames throughout the ejection
process, we obtain a time series for the scattering amplitude
Ê_0 (Fig. <ref>). In the Rayleigh scattering
regime, the ratio between the scattering amplitude at a given time in
the ejection process and the initial scattering amplitude relates
directly to the fraction of remaining DNA in the viral capsid. We
observe that the DNA ejection process, as captured by the scattering
amplitude, occurs over approximately 5 s. The fluctuations in the
amplitude appear to be within the uncertainty of the measurement.
In contrast to the previous analysis of the DNA ejection process of
lambda phage <cit.>, which relied on integrating a selected
number of central pixels of the fringe pattern, our method makes use of
more information from the fringe pattern and accurately accounts for
correlations between the particle position and size. In particular, the
method allows us to decouple variations in intensity due to the motion
of the particle from changes of the scattering amplitude due to ejection
of DNA. Altogether, we achieve an increase of the signal-to-noise ratio
of around 50% over the former method. These results illustrate that our
method can be useful for accurate mass photometry <cit.>,
although calibration against a particle of known size is required to
account for all experimental factors, such as the finite detector
efficiency.
§ CONCLUSION AND OUTLOOK
We have demonstrated that with a Bayesian approach to the analysis of
iSCAT data, we can infer not only the best-fit values for the position
and scattering properties of nanoscopic objects, but also quantify the
statistical uncertainties and correlations between these parameters. We
have shown that a simplified model that does not account for lens
effects can nonetheless accurately capture many features of the iPSF
when the particle is either above or below the focal plane. By
implementing this model in a tensor-based language, we have demonstrated
that MCMC methods that leverage automatic differentiation techniques can
efficiently calculate the full posterior to yield parameter estimates
and uncertainties.
We anticipate that this method will be useful for the types of
applications we have demonstrated here: 3D tracking of nanoparticles and
characterization of the dynamics of viruses. In both cases, it is
critical to quantify uncertainties, since one is typically interested in
testing physical models of the dynamics. Arguably, MCMC-based Bayesian
inference is the ideal workhorse for such problems, because it
quantifies the uncertainties and correlations among all parameters in
the model. Furthermore, the point-scatterer approximation allows for
straightforward translation of the scattering model into highly
efficient tensor-based libraries, which enables fast MCMC approaches.
Our method can be extended to other situations of interest, such as
modeling more than one particle in the field of view or modeling the
effects of the objective lens <cit.>. Because HMC/NUTS-based samplers operate
efficiently even with large numbers of parameters, the framework we have
developed is a useful base upon which more complex models can be built.
Appendix
§ EXPERIMENTAL METHODS
section
The iSCAT microscope used here has been described in detail in
Refs. and . Samples are
mounted to a NanoMax three-axis stage (Thorlabs, MAX343). Polystyrene
sphere samples are illuminated by a 200 mW, 635 nm single-mode laser
diode (Lasertack PD-01230), while lambda phage samples are illuminated
with a 300 mW, 405 nm laser (Toptica iBeam Smart), both modulated with a
1 MHz square wave to reduce the temporal coherence of the illumination
and suppress background intensity variations. The beams
are spatially filtered with a single-mode optical fiber and focused onto
the back aperture of a 100× oil-immersion objective (Nikon Plan
Apo objective, NA 1.45 for spheres; Nikon Plan Apo VC, NA 1.4 for lambda
phage) to collimate the beam at the sample. We record iSCAT images with
a camera (PhotonFocus MV1-D1024E-160-CL for spheres; Andor Zyla 5.5 for
lambda phage).
We calibrate the pixel size of the image by laterally translating a
polystyrene particle stuck to the coverslip using NanoMax piezoelectric
actuators, and recording the voltage and an image of the particle. We
estimate the horizontal particle position using the
HoloPy <cit.> implementation of the Hough
transform <cit.>, and calculate a pixel size
of 73 nm using the position-voltage conversion provided by the
manufacturer. To account for systematic uncertainties in the size
calibration, we include an additional overall rescaling factor of the
image in our model. This parameter also accounts for any uncertainty in
the wavelength or medium refractive index of the specific experimental
set-up. We determine it by fitting, but we keep it constant for all fits
within a single experiment. We find typical scaling factors between 0.8
and 1.2, which is reasonable given the uncertainty in the calculated
pixel size.
We use an open-top sample chamber to image the polystyrene particles. We
clean No. 1 coverslips (VWR Micro Cover Glass) by sonicating in 1% w/v
Alconox detergent in water for 30 min, sonicating in deionized (DI)
water (output from Millipore Elix 3 and Millipore Milli-Q Synthesis) for
30 min, rinsing in DI water, and drying with nitrogen gas. We place
samples inside a small ring of vacuum grease on the clean coverslip,
forming a contained droplet. The round top of the droplet prevents
reflection back into the objective from the top of the sample.
To immobilize a particle for a focal sweep, we suspend 120 nm
polystyrene particles (Invitrogen S37204) diluted in 0.5 M NaCl
solution. The high salt concentration screens electrostatic repulsion
between the coverslip and particles, allowing them to stick to the
glass. We record focal-plane sweep data by bringing a single particle
into focus, driving the stage in the z direction at 0.001 mm/s using
the NanoMax stepper motors, and recording images (2 ms exposure time,
10 ms frame interval). For the free diffusion experiment, we dilute
79 nm polystyrene particles (Spherotech PP-008-10) in DI water to 5
× 10^-5% w/w. Since the buffer contains no salt, particles do
not stick to the coverslip. We image this solution with the focus close
to the coverslip-water interface to prevent diffusing particles from
passing below the focal plane (2 ms exposure time, 3 ms frame interval).
The lambda phage experiments have been previously described
<cit.>. In brief, we purify lambda phage and LamB receptor
from E. coli <cit.> and store them in TNM buffer
(50 mM Tris-HCl pH 7.5, 100 mM NaCl, 8 mM MgCl_2). The sample chamber
consists of a No. 1 coverslip closest to the objective and two pieces of
glass slide sealed together to form a tilted roof that prevents
back-reflections. We modify the coverslips with APTES (98% purity; Alfa
Aesar), which causes the phages to stick, and NHS-modified polyethylene
glycol (5,000 MW, >95% purity, Nanocs Inc.) to limit the number of
bound phages. We pipette 20 µL of lambda phage at 10^10 plaque
forming units per mL to the side of the chamber, which is open. We rinse
out unbound phage by adding 20 µL of TNM to one side of the
chamber and aspirating 20 µL from the other side. We then add LamB
receptor in TNM with 1% n-octyl-oligo-oxyethylene (oPOE) detergent
(Enzo Life Sciences Inc.) using the same method, and we record images
(9 ms exposure time, 10 ms frame interval).
Funding This research was partially supported by the
National Science Foundation through the Harvard University Materials
Research Science and Engineering Center (grant no. DMR-2011754), and
partially by the Harvard Quantitative Biology Initiative through the
NSF-Simons Center for Mathematical and Statistical Analysis of Biology
(grant no. 1764269), by the National Science Foundation Graduate
Research Fellowship (grant no. DGE-1745303 and DGE-2140743), and by
the Department of Defense through the National Defense Science and
Engineering Graduate Fellowship.
Acknowledgment We thank Paul van der Schoot for useful discussions.
Disclosures The authors declare no conflicts of interest.
|
http://arxiv.org/abs/2307.01029v1
|
20230703140124
|
Advancing O-RAN to Facilitate Intelligence in V2X
|
[
"Eugenio Moro",
"Francesco Linsalata",
"Maurizio Magarini",
"Umberto Spagnolini",
"Antonio Capone"
] |
cs.NI
|
[
"cs.NI"
] |
Advancing O-RAN to Facilitate Intelligence in V2X
Eugenio Moro, Francesco Linsalata, Maurizio Magarini, Umberto Spagnolini, Antonio Capone
Vehicular communications at high frequencies are envisioned to be a breakthrough application of 6g cellular systems. Traditional ran lack the flexibility to enable the sophisticated control mechanisms demanded by the strict performance requirements of the dynamic vehicular environment.
In contrast, the features of oran can be exploited to support advanced use cases. Indeed, the emerging paradigm of oran represents an ideal framework for the orchestration of vehicular communication. Although the high potential stemming from their integration can be easily seen and recognized, the effective combination of the two ecosystems is an open issue.
This article pioneers the integration of the two, presenting strategies for seamlessly incorporating v2x control within the oran ecosystem.
We propose and discuss an enabling architecture that tightly integrates v2x and oran. In the proposed solution, an oran-based control plane operates at low frequencies to achieve reliable and efficient connectivity among autonomous vehicles at higher frequencies. The technological feasibility of this integrated architecture is investigated, and a detailed case study demonstrates the design of an xApp as a practical example of an oran solution for a specific v2x scenario.
O-RAN, V2X, 6G, dynamic control
§ INTRODUCTION
The current ran paradigm does not easily support network intelligence due to network components - mainly bs - being operated as monolithic and inflexible black-boxes <cit.>. To address this limitation and bridge the gap between real-world ran deployments and cutting-edge network intelligence, a consortium of vendors, operators, and research institutions have proposed oran as an architectural overhaul of ran.
oran is a disaggregated and open architecture that separates the hardware and software components of the ran, enabling interoperability, modularity, and flexibility <cit.>. ric are one of the major innovations of O-RAN: softwarized control loops that enable data collection and dynamic control implemented as micro-services over large-scale and heterogeneous ran deployments.
Specialized oran-based control solutions have been successfully applied to optimize different aspects of 5g cellular systems, confirming the disruptive effects of this architecture. However, oran still supports only the most traditional of 5g deployments, where the network components are only bs and ue. There is a case to be made for advancing oran to support emerging 6G use cases, where network intelligence plays an even stronger role <cit.>. In this work, we argue that cav exploiting high data-rate links represents one of these fundamental use cases, and we propose our vision on the matter.
Vehicular communication is likely to be a key driving force in the future 6g wireless networks, as it will enable advanced vehicular mobility paradigms such as autonomous driving, coordinated sensing, and enhanced navigation. At the core of vehicular communications lies v2x, a communication technology that facilitates the interconnection among vehicles and infrastructure.
v2x realizes direct and multi-hops links among cav, namely v2v or Sidelink communications. Direct vehicular links reduce the network infrastructure involvement, facilitating communication even in out-of-coverage areas and considerably decreasing latency <cit.>.
Moreover, since most of the cav use cases require high data rates, v2v will make use of higher carrier frequencies, such as mmwave - currently standardized in 5g as fr2 - or sub-THz, with the introduction of beam-based highly directive communication to counteract the considerable pathloss that characterizes propagation in these bands <cit.>.
The unique challenges posed by cav scenario, the dynamic nature of the vehicular environment, the harsh propagation conditions at high frequencies, as well as the hybrid nature of v2x necessitate the development of sophisticated control mechanisms to ensure the success of this disruptive technology. Albeit currently being limited to traditional ran deployments support, oran represents the ideal candidate to enable management and orchestration in the challenging scenario mentioned above. This article focuses on integrating a next-generation oran with v2x communications. It addresses the challenges and opportunities associated with this integration towards the ultimate goal of enhancing the performance, reliability, and adaptability of ran-based vehicular communications.
To this end, we identify fundamental key challenges of v2x, elaborating on how these can be successfully addressed through oran-based solutions to present some potential research directions yet to be explored in the literature.
In addition, this article proposes a next-generation oran architecture that, for the first time, embodies a tight integration of v2x within the oran concepts. Through a proper extension of the oran interfaces, we show how it is possible to support the additional network components of v2x and let them be managed by the oran control loops.
Most notably, we allow the communication stack of connected vehicles to be part of the entire oran architecture. The result is a comprehensive vehicular communication solution where oran ric acts as the orchestrator of a hybrid network where vehicles are connected both to the ran and among themselves. Reliable and pervasive low frequency ran (i.e., incumbent 5G fr1 deployments) are used to support a control plane where oran messages can be exchanged between vehicles and the ric. This, in turn, will manage high-frequency v2v links to effectively create a high-performance data plane, namely a vanet, to cater to the communication needs of autonomous driving, infotainment, and other vehicular communication applications.
Finally, a case study is presented, where we use the proposed architecture to demonstrate the impact of a next-generation oran in addressing a specific v2x challenge. In particular, we design an oran micro-service (i.e., an xApp) and test it in a simulated environment to showcase the capabilities and benefits of leveraging oran in solving real-world v2x problems. Numerical results in terms of network connectivity and control overhead are provided to demonstrate the superior performance of the xApp-controlled v2x network compared to the unmanaged solution. At the same time, we positively confirm the feasibility of the proposed next-generation architecture.
§ AN O-RAN APPROACH TO THE V2X CHALLENGES
The oran architecture features the possibility of applying centralized control to the ran through the so-called ric, as exemplified in Fig. <ref>. These functional components can implement arbitrary data collection and control logic by communicating with the network infrastructure (i.e., bs) thanks to open and standardized interfaces. In particular, oran introduced a nrric, which operates on a 1ms to 1s time scale and, thus, it is capable of operating under stringent v2x latency requirements. Arbitrary data collection and control mechanisms are then implemented through the so-called xApps, which are network microservices that run on the primitives exposed by the nrric. Additionally, oran has also standardized the nrtric, which is a centralized control loop operating on a slower time scale, but with broader network visibility. As such, it enables large-scale orchestration and policing mechanisms implemented as network applications called rApps. When applied to v2x, these two control loops can potentially unlock significant optimization and orchestration gains with respect to the current architecture.
We now discuss a key set of fundamental v2x challenges, where oran-based solutions are expected to have a disruptive impact.
§.§ Beam selection and management at mmwave
In the context of v2x, mmwave provides the communication capabilities required to support most of the core concepts of its <cit.>.
To establish effective directional communication at mmwave, beams have to be aligned both at the transmitter and the receiver, as shown in Fig. <ref>. These critical operations are costly in terms of beam training overhead and become even more challenging due to the relative mobility of the vehicles <cit.>, which requires tight beam tracking. Traditional beam management mechanisms are therefore considered inadequate, and v2x-tailored solutions are required instead. Data-driven approaches are proven to be effective in providing fast beam alignment and tracking for vehicular communication. Locally sourced data coming from on-board sensors can assist the vehicle in autonomously identifying ideal beam direction candidates. However, centralized solutions based on a fusion of reported vehicle positions and planned paths, blockage prediction, urban layout information, and past successful beam alignments show the potential of orchestrating large-scale beam alignment to improve performance and reduce interference <cit.>. While promising, such sophisticated solutions require access to a large amount of fresh network data coming from heterogeneous sources, which is hardly practical for the traditional ran architecture.
With the capability of abstracting the physical equipment idiosyncrasies and enabling large-scale data collection, oran represents an ideal enabler for these solutions. By tapping into the wealth of up-to-date data that a nrric can expose, an xApp could effectively host a well-informed beamforming management function based on arbitrary algorithms, i.e., ml-based, that operate down to the ms timescale. Concurrently, an rApp running in the nrtric can update beam management policies to fine-tune the overall objective of the beam management solution according to specific policies. In particular, the rApp can select a particular beam-width and codebook size to further reduce alignment overhead (fewer larger beams) or reduce interference and increase single-link performance (more but narrower candidate beams). Overall, the potential of an oran-based beam management has been proven in the most general settings <cit.>. Nonetheless, there is still a lack of v2x-dedicated studies on this matter where the capabilities of oran are applied to beam management and massive mimo for vehicular communications.
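To make this kind of xApp logic more concrete, the following minimal Python sketch shows one possible, purely hypothetical beam selection routine: given a reported vehicle position and a beam codebook, it picks the beam whose boresight best matches the vehicle direction while penalising beams with a high predicted blockage probability. The positions, codebook, blockage estimates, and the penalty weight are illustrative assumptions and not values prescribed by the oran specifications.

import numpy as np

def select_beam(bs_pos, vehicle_pos, codebook_angles_deg, blockage_prob):
    # Direction of the vehicle as seen from the bs, in degrees.
    direction = np.degrees(np.arctan2(vehicle_pos[1] - bs_pos[1],
                                      vehicle_pos[0] - bs_pos[0]))
    # Angular mismatch between each candidate beam and the vehicle direction.
    mismatch = np.abs((codebook_angles_deg - direction + 180) % 360 - 180)
    # Simple score: small mismatch and low blockage probability are preferred
    # (the 90-degree penalty weight is an arbitrary illustrative choice).
    score = -mismatch - 90.0 * np.asarray(blockage_prob)
    return int(np.argmax(score))

# Hypothetical 8-beam codebook; the vehicle is reported north-east of the bs.
codebook = np.arange(0, 360, 45)
blockage = [0.1, 0.05, 0.6, 0.2, 0.1, 0.3, 0.0, 0.4]
print(select_beam((0, 0), (30, 30), codebook, blockage))   # -> 1

An rApp could act on the same routine simply by swapping in a coarser or finer codebook, which is one way the beam-width policy discussed above could be realised.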
§.§ Radio resource management.
its are characterized by a large set of diverse services that present extremely challenging communication requirements.
Efficiently managing the scarce radio resources in the ran represents a critical challenge. 5g foresees the use of network slices, which can be briefly described as bundles of virtualized resources dedicated to providing specific connectivity services to a subset of network users. Owing to the possibility of activating flexible and service-tailored slices, network slicing is considered a natural enabler of the diverse v2x use cases <cit.>. However, physical resources still need to be efficiently allocated so that slices can support the communication requirements, and slice isolation needs to be guaranteed. Static resource partitioning is hardly viable due to the fast-changing state of the wireless network. Dynamic slice resource allocation based on continuous monitoring of the network parameters is to be preferred. This is all the more true for v2x due to the aforementioned challenging environment <cit.>.
Thanks to its extensive data collection and control capabilities, oran is considered the fundamental enabler of slicing resource management <cit.>. Nonetheless, the problem of practically enabling the slicing for v2x in the real world has yet to be satisfactorily addressed. In this case, an oran-based approach can fuse network measurements and external information (i.e., vehicle localization and planned path) to detect any potential criticality, reallocate slice resources accordingly, and allow for more efficient spectrum use. For instance, an xApp could monitor vehicular traffic and proactively allocate slice resources in those cells that will soon be subject to increased vehicular activity. If such resources are unavailable, xApps can unload the receiving cells by triggering handovers, reducing foreign slices' allotments, or disconnecting some users in the extreme. Additionally, by dynamically controlling sps and cg, the xApp can guarantee resource-efficient low latency and reliable communications both at the slice and single vehicle levels, ultimately providing isolation for safety-critical services. Given the complexity of the problem, ml-based approaches are likely to be required. In this case, an rApp could monitor and fine-tune the mechanism put in place by the xApps and build or retrain appropriate ml models, further adapting the v2x slicing management to long-term environmental variations.
The problem of resource allocation is also relevant for direct v2v connections.
According to the 3gpp standard, a central entity (i.e., a bs or a rsu) is expected to allocate the radio resources <cit.>. This mechanism is hindered by the limited perception of the central entity with respect to each v2v link condition and traffic requirement <cit.>. In this context, an xApp could gather data about the vehicle's position and mobility, as well as channel status and interference profile. This information can be processed to adapt the allocation strategies to the fast-varying vehicular environment.
§.§ Enhanced Vehicle-to-vehicle connectivity.
Fundamental its services such as cooperative awareness, augmented reality, and coordinated autonomous driving require extensive data exchange among cav that are in close proximity to each other <cit.>. However, relying on base stations to forward the entirety of this traffic is impractical due to inefficiency and the increased burden on the traditional ran infrastructure. High-frequency Sidelink communications enabled by 5G nr are thus essential for reliable, low-latency, and high-throughput v2v links.
The challenges presented in the previous paragraphs apply to v2v communications as well, making a case for addressing them through an oran approach. Additionally, the vanet created through the activation of direct links needs to be properly managed in order to support the its services <cit.>.
The high mobility of the vehicles and the harsh mmwave propagation create a challenging twist on the traditional problems of ad-hoc networks, which include link selection, network graph and routing optimization, and congestion control, as shown in Fig. <ref>. By tapping into the control and monitoring capabilities of the oran architecture, an xApp could select which v2v links to activate according to the expected channel quality and probability of blockage.
The overall link selection strategy could optimize the underlying vanet graph to meet different objectives. For instance, different superimposed graphs - low latency graphs prioritizing short paths or high throughput graphs prioritizing high-quality links - can be precomputed and dynamically activated according to the instantaneous communication needs or the policies dictated by an rApp.
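As a rough illustration of the superimposed-graph idea (a sketch under our own assumptions, not an implementation from the literature), the snippet below builds two weighted views of the same set of hypothetical v2v link estimates: a low-latency view weighted by per-hop delay and a high-throughput view weighted by inverse link rate. An rApp policy would then decide which view the xApp uses for path computation.

import networkx as nx

def build_policy_graphs(links):
    # links: iterable of (node_a, node_b, latency_ms, rate_mbps) estimates.
    low_latency, high_throughput = nx.Graph(), nx.Graph()
    for a, b, latency_ms, rate_mbps in links:
        low_latency.add_edge(a, b, weight=latency_ms)
        high_throughput.add_edge(a, b, weight=1.0 / rate_mbps)
    return low_latency, high_throughput

# Hypothetical link estimates: the direct cav1-cav3 link is fast but weak.
links = [("cav1", "cav2", 2.0, 800.0), ("cav2", "cav3", 2.0, 900.0),
         ("cav1", "cav3", 2.5, 100.0)]
ll, ht = build_policy_graphs(links)
print(nx.shortest_path(ll, "cav1", "cav3", weight="weight"))  # ['cav1', 'cav3']
print(nx.shortest_path(ht, "cav1", "cav3", weight="weight"))  # ['cav1', 'cav2', 'cav3']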
§.§ Programmable and up-to-date V2X digital twin.
The network dt will be a key enabling technology in addressing the challenges outlined above. The dt aims at providing a high-fidelity digital representation of physical phenomena. This is obtained not only by integrating simulations and available data in the network nodes, but also by accounting for the entire phenomenon life cycle, which provides up-to-date insights about the physical entity, as illustrated in Fig. <ref> <cit.>.
The 6G v2x communications will exploit the cooperation among CAVs to augment environment perception and to enable the creation of a dt of the surrounding environments <cit.>.
To obtain an accurate real-time digital reproduction of the physical environment, the envisioned digital twin-enabled v2x system has to use high-definition 3D maps and combine multi-modal sensory data from several vehicles' onboard sensor data, as well as a detailed description of the communication network state.
The oran architecture is well-positioned to source the network information required to build high-fidelity v2x dt, reducing the amount of data that the network nodes should manage and exchange.
At the same time, oran applications can exploit the dt itself to run inference on the overall v2x scenario without causing communication overhead with the infrastructure. In particular, xApps can exploit the dt to obtain reasonably precise information on current and future vehicle positions without directly interrogating them. Proactive and optimized traffic forecasting capabilities from the v2x dt can also be exploited.
rApps can retrain ml models on the virtual v2x environment recreated by the dt to ensure that the xApp data-driven approaches always employ up-to-date models.
§ A NEXT-GENERATION ORAN ARCHITECTURE FOR V2X
In the previous section, we have detailed how some relevant v2x challenges can be successfully addressed by exploiting the network programmability enabled by oran. However, the specifications of the oran are currently designed around traditional access networks. Due to the peculiarities of the V2X environment, some extensions to the oran architecture are required such that all of these solutions can be practically realized. We now focus on this matter, proposing our vision of an enabling architecture where oran and v2x are integrated to unlock the aforementioned opportunities. As shown in Fig. <ref> for the case of using 5g as the rat, a typical oran deployment includes a nrtric and a nrric embodied as software components in the ecc <cit.>. A dt could also be deployed inside the same ecc, allowing for undisturbed data exchanges with the ric. oran apps running on top of both ric communicate with the network infrastructure through open interfaces: the E2 interface for the nrtric and the O1 and O2 interfaces for the nrric.
In light of this, creating an oran-empowered v2x requires all the v2x devices to be reachable by the oran ric through these interfaces. bs are normally equipped with interface terminations to enable data collection and control, as shown in Fig. <ref>; thus, no modifications are required from the communication standpoint. However, the interface definitions will likely require extensions to support the specifics of data collection and control required by vehicular use cases. rsu are currently not supported by oran specifications. Nonetheless, integrating the O1, O2, and E2 terminations in rsu would be a relatively straightforward operation, as the communication between the devices and the ric in the ecc could take place by using the already existing rsu control plane. As previously mentioned, proper extensions to the oran interface definitions will enable rsu to be subjected to data collection and control. On the other hand, direct communication between the ric and the cav is more challenging. However, allowing the nr Sidelink stack of cav to be centrally controlled opens the opportunity of addressing a key issue in v2x networks: orchestrating the v2v communication.
§.§ oran-based control plane for v2v
In our integrated architecture proposition, we envision a vanet which still retains its decentralized nature, but is supported by a next-generation oran to enhance its performance to the point of supporting the stringent requirements of the safety-critical its services. As Fig. <ref> shows in the exemplary scenario on the left, the data plane of the v2v ad-hoc network is represented by direct nr sidelink mmwave connections established among cav, offering high throughput without occupying bs resources. For the control plane, sub6 links are employed, allowing for reliable and efficient communication between vehicles and the centralized controller through the existing ran infrastructure. The use of sub6 frequencies in the control plane offers wider coverage and better penetration capabilities, leveraging the ubiquitous coverage of modern cellular network deployments. The role of this out-of-band control plane is to relay oran messages between the ric and the interface terminations of the cav[In Fig. <ref>, only the E2 termination is included in cav for the sake of simplicity.]. According to this architecture, a cav would require a 5g fr1 ue to connect to the 5g bs and an fr2 nr Sidelink ue to connect with other vehicles. After an fr1 connection is established with the bs, dedicated radio bearers for the oran-based management plane are established. In particular, at least one is required to create a gtp tunnel with a upf co-located with the ric in the ecc. As Fig. <ref> shows in detail, the gtp tunnel can then be used to transparently connect the nrric and the E2 termination in the cav. Thanks to the capability of gtp tunnels to maintain IP endpoint connectivity through handovers, the high mobility of cav is not expected to disrupt the proposed control plane. In other terms, the burden of managing the mobility of E2 terminations is left to the 5G connection, while oran microservices obtain a reliable connection with the cav as they navigate through the coverage area.
§.§ Technological feasibility
The proposed architecture is feasible from a technological realization standpoint, with minimal modifications required. Integrating a 5g ue and a Sidelink ue into each vehicle allows for the establishment of both the fr1 connection with the 5g bs and the fr2 nr Sidelink connections with other vehicles. This integration can be achieved without significant challenges, as vehicles generally have flexible energy consumption constraints, making it feasible to accommodate the necessary communication modules.
Moreover, the proposed architecture does not require substantial modifications to the existing 5g stack, as it relies on standard 5g communication modes.
However, although realizable, the architecture's effectiveness and performance must be thoroughly analyzed. Factors such as latency and control plane overhead (i.e., bs resource utilization) must be carefully considered to ensure the architecture's viability. Such analysis should be conducted on a case-by-case basis, representing another open question in the context of oran for v2x. In the following section, we conduct a preliminary analysis of the effectiveness and viability of addressing a challenging v2v goal through the proposed oran-enabled architecture.
In V2X systems, to enhance performance and reliability, an E2 termination could also be included in rsu, both to enable their dynamic control and to tap into the wealth of CAV-related information that they make available. E2 messages can thus be multiplexed together with the other communications in the rsu control plane. Additionally, there is a case to be made for including an E2 termination in the CAVs themselves, as motivated in the previous paragraphs. However, E2AP does not currently support mobility, and proper modifications to the protocol are needed before the CAV can be directly accessed by the xApps deployed on the nrric. In both cases, new sm definitions will also be required to support data collection and control applied to V2X.
The nrtric employs the O1 interface to apply smo functions over the entire network infrastructure and the O2 interface to control the life-cycle of network components. These interfaces will require modifications that are naturally similar to the E2 case. In particular, the O1 interface will require the definition of dedicated V2X Management Services or at least the modification of existing ones. Furthermore, inserting O2 terminations in the rsu could enable adaptive network deployment strategies that can selectively activate/deactivate all the network components of V2X systems to scale the system performance when required, and decrease energy consumption and interference when it is possible.
§ O-RAN FOR V2V: A CASE STUDY
We conduct a case study based on a typical vehicular communication scenario where multiple cav traverse a busy and blind urban intersection.
The scenario is compatible with the architecture proposed in the previous section.
§.§ Problem definition
cav traversing the urban intersection in the scenario require uninterrupted v2v connectivity to exchange safety-critical information throughout the entire navigation area. However, due to the nature of mmwave propagation, v2v links are often interrupted by blockages caused by buildings, urban obstacles, and the vehicles themselves. In case of los obstruction, rsu and other cav can act as relays to maintain connectivity between two vehicles. However, a central coordinator is required to optimally choose the path and set up the new connection.
§.§ oran solution
To address the problem defined above, we propose an xApp that optimally selects relays (namely rsu or other cav) among those available to establish a multi-hop path between two vehicles whose direct link is in a nlos condition.
Fig. <ref> shows how such an xApp can be implemented by detailing the exchange of messages between the involved components.
The exchange starts with a v2v link failure report originating from the cav. This report travels through the low-frequency control plane and reaches the xApp, which is now tasked with finding an alternative relayed path. To do so, the xApp first needs to reconstruct the vehicular network graph based on the quality of all the v2v links that can be established at that moment. Such information can be obtained, for instance, by knowing the relative position of the vehicles and their communication characteristics to extract the channel quality according to an appropriate propagation model <cit.>. Alternatively, this information can be acquired through measures carried out by cav and periodically reported through the control plane. We now assume the presence of a dt that the xApp can interrogate to obtain an up-to-date and reliable representation of the vehicular scenario to also demonstrate the role of this technology.
Once the data-plane graph is reconstructed, the xApp finds an alternative route between the two vehicles according to an arbitrary logic represented by the New path computation in Fig. <ref>. Each vehicle involved in the path is then informed to establish a new v2v link through control messages from the xApp and delivered through the low-frequency control plane.
Finally, this approach can be generalized to allow any two vehicles to communicate by requesting a route to the xApp when a direct link is unavailable.
§.§ Effectiveness
With the goal of investigating the attainable performance of the proposed solution, we replicated the actions of the xApp in a simulated environment.
We recreated the study scenario through a ray tracer simulator as in <cit.>.
We simulated the action of a sample xApp by computing the alternative route between any pair of vehicles using a shortest path algorithm.
We repeated the experiments by deactivating links to guarantee an increasingly stringent minimum snr. This was done to explore the capability of an xApp to select a path based on specific communication performance requirements.
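For illustration, the following minimal Python sketch captures the logic replicated in our simulations: the vehicular graph is rebuilt from per-link snr estimates, links below the required minimum snr are discarded, and the shortest relay path between the two disconnected vehicles is returned if one exists. Node names and snr values are placeholders and do not correspond to the ray-traced scenario.

import networkx as nx

def find_relay_path(links, src, dst, min_snr_db):
    # links: iterable of (node_a, node_b, snr_db) estimates for feasible v2v links.
    g = nx.Graph()
    for a, b, snr in links:
        if snr >= min_snr_db:
            g.add_edge(a, b)
    try:
        return nx.shortest_path(g, src, dst)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None

# Hypothetical snapshot in which the direct cav1-cav4 link is blocked.
links = [("cav1", "cav2", 18.0), ("cav2", "rsu1", 22.0), ("rsu1", "cav4", 15.0),
         ("cav1", "cav3", 9.0), ("cav3", "cav4", 12.0)]
print(find_relay_path(links, "cav1", "cav4", min_snr_db=10.0))
# ['cav1', 'cav2', 'rsu1', 'cav4'] once the 9 dB link is filtered out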
Results are shown in Fig. <ref>. The baseline approach, considering only the direct links, shows that no more than 25% of the vehicles can establish a connection throughout the entire time window. On the other hand, it is shown how the proposed xApp has the potential of guaranteeing full vehicular connectivity even for high levels of minimum guaranteed snr. Please note that our intention here was not to propose a state-of-the-art solution but to evaluate the impact of a simplified, centralized approach with respect to an unmanaged v2v scenario. Fig. <ref> also plots the average number of hops required to ensure vehicular connectivity. This measure shows a trade-off between minimum snr - affecting the throughput - and path length - affecting the latency - suggesting that a more refined xApp should be capable of exploiting it.
Overall, these results confirm how even such a simple xApp can be highly beneficial in addressing one of the fundamental challenges of v2x.
§.§ Control plane overhead
In the previous analysis, we replicated the action of a sample xApp to confirm its effectiveness. We now focus on the feasibility of the proposed approach.
According to the enabling architecture proposed in Sec. <ref>, the ric and the cav communicate through a low-frequency control plane that utilizes the resources of the existing 5G infrastructure. Consequently, a feasible xApp must not generate excessive traffic while operating.
As shown in Fig. <ref>, the proposed v2v xApp requires two control messages in uplink and as many downlink control messages as the number of hops. Based on this, it is possible to measure the control traffic that the xApp would generate in the simulated scenario by counting the path computation events.
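A back-of-the-envelope sketch of this computation is given below; the event rate and the average hop count in the example are placeholders rather than the values measured in our scenario.

def control_overhead_bps(path_events_per_s, avg_hops, packet_size_bits=1000):
    # Two uplink messages per path computation event and one downlink message
    # per hop of the newly established path, with pessimistic 1 kb oran packets.
    uplink = path_events_per_s * 2 * packet_size_bits
    downlink = path_events_per_s * avg_hops * packet_size_bits
    return uplink, downlink

up, down = control_overhead_bps(path_events_per_s=40, avg_hops=3)
print(f"uplink {up / 1e3:.0f} kbps, downlink {down / 1e3:.0f} kbps")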
Fig. <ref> reports the results averaged over the 5 minutes time window and with a pessimistic oran packet size of 1Kb, showing that the downlink traffic constitutes the largest part of the control plane overhead, as expected. Nonetheless, the worst-case traffic of 160Kbps is negligible for the 5G rat. This confirms the feasibility of the proposed xApp in terms of communication overhead.
§.§ Control latency
A reactive xApp, such as the one proposed, must be able to apply the control solution fast enough to be effective. The timescale is highly dependent on the application. For our case study, the xApp must be able to compute and establish an alternative route well before the los obstruction is cleared. It is possible to measure the control delay of the proposed xApp in the simulated scenario. We consider the communication delay between the bs and the xApp negligible, as this exchange happens within the edge cluster.
Additionally, due to the extreme computational efficiency of the shortest path algorithm, we also consider the alternative route computation negligible. Consequently, the control latency boils down to the latency of the 5G system supporting the control plane, which we fix to a conservative 30ms. We then compare this value with the duration of the direct link outage events, whose statistics are reported in Fig. <ref>. Here we have filtered out outage events lasting less than 1s - supposed to be recovered by the beam tracking mechanism - and more than 30s - as they mostly represent outliers. Overall, the statistics show that the outage duration is centered around values ranging from 3 to 10s.
The comparison with the previously computed control delay shows the xApp's high feasibility and quick response to disruptions in the communication system. In other words, the xApp promptly adapts to changing conditions and maintains communication between vehicles when direct links are blocked. This highlights the xApp effectiveness in mitigating link failures and minimizing communication outages.
§ CONCLUDING REMARKS
As the world moves towards a more connected and automated future, the need for reliable and efficient communication between vehicles and network infrastructure has become increasingly important.
In this article, we focused on the use of the oran architecture for 5g and beyond v2x communication. We highlighted how oran has the potential to provide a more flexible, scalable, and cost-effective solution compared to current solutions for v2x systems. Also, we discussed integration points, proposing our envisioned architectural solution.
A first set of simulations demonstrated numerically the benefits of a managed, controlled, and programmable v2x system compared with an unmanaged one.
§ ACKNOWLEDGMENT
This article was supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on “Telecommunications of the Future” (PE00000001 - program “RESTART”, Structural Project 6GWINET).
§ BIOGRAPHIES
Eugenio Moro is a researcher at Politecnico di Milano. His research area is wireless networks, with a focus on optimization, programmability and smart propagation.
Francesco Linsalata is a researcher at Politecnico di Milano. His main research interests focus on V2X communications and the next generation of wireless networks.
Maurizio Magarini is an Associate Professor at Politecnico di Milano. His research interests are in the broad area of communication and information theory, with focus on molecular communications, massive MIMO, vehicular communications, study of advanced waveforms for 6G, and wireless networks using unmanned aerial vehicles and high-altitude platforms.
Antonio Capone (Fellow, IEEE) is currently the Dean of the School of Industrial and Information Engineering, Politecnico di Milano (Technical University of Milan). His main research activities include radio resource management in wireless networks, traffic management in software defined networks, network planning, and optimization.
Umberto Spagnolini is Professor of Statistical Signal Processing, Director of Joint Lab Huawei-Politecnico di Milano and Huawei Industry Chair, scientific coordinator of 6G Wireless Networks and Technologies, a large Eu-National project.
Recent interests are on MIMO channel estimation, cooperative and distributed inference, vehicular systems (V2X and radar), integrated communication and sensing.
|
http://arxiv.org/abs/2307.05510v1
|
20230704152639
|
Carbon Emissions of Quantum Circuit Simulation: More than You Would Think
|
[
"Jinyang Li",
"Qiang Guan",
"Dingwen Tao",
"Weiwen Jiang"
] |
physics.soc-ph
|
[
"physics.soc-ph",
"quant-ph"
] |
Carbon Emissions of Quantum Circuit Simulation: More than You Would Think
Jinyang Li†,
Qiang Guan,
Dingwen Tao,
Weiwen Jiang†
†George Mason University, Department of Electrical and Computer Engineering, VA, USA.
Kent State University, OH, USA.
Indiana University, IN, USA.
{jli56, wjiang8}@gmu.edu
August 1, 2023
The rapid advancement of quantum hardware brings a host of research opportunities and the potential for quantum advantages across numerous fields. In this landscape, quantum circuit simulations serve as an indispensable tool by emulating quantum behavior on classical computers. They offer easy access, noise-free environments, and real-time observation of quantum states. However, the sustainability aspect of quantum circuit simulation is yet to be explored. In this paper, we introduce for the first time the concept of environmental impact from quantum circuit simulation. We present a preliminary model to compute the CO_2e emissions derived from quantum circuit simulations. Our results indicate that large quantum circuit simulations (43 qubits) could lead to CO_2e emissions 48 times greater than training a transformer machine learning model.
§ INTRODUCTION
Quantum computing, recognized as the next technological revolution, holds immense potential to transform a wide range of areas, including cryptography, materials science, and AI. Currently, however, quantum hardware is both limited and expensive to access, making its widespread adoption challenging. Quantum circuit simulation has emerged as a supporting tool: it employs classical computing resources to emulate the behavior of quantum circuits, thereby bypassing the need for physical quantum hardware. Quantum circuits comprise quantum bits (qubits) and quantum gates, which manipulate these qubits.
For example, State Vector Simulators simulate a quantum circuit by computing the wavefunction of the qubits' statevector as gates and instructions are applied.
There are also cloud-based simulators from most of the major quantum cloud providers such as IBM.
The importance of quantum circuit simulation can be attributed to several reasons: Limited Quantum Hardware Availability: Quantum computing resources are scarce, expensive, and often accessible only through cloud-based platforms with waiting times, and this situation could worsen as the demand for cloud resources increases <cit.>. Noise Influence: In the "Noisy Intermediate-Scale Quantum" (NISQ) era <cit.>, where quantum computers have limited qubits and high error rates, quantum circuit simulators play a vital role. They enable researchers to develop, test, and optimize quantum algorithms in an ideal, noise-free environment before deploying them on physical quantum hardware. Algorithm Testing and Development: Quantum circuit simulators, unlike actual quantum systems, allow real-time observation of a computation's state. Measuring states in quantum systems is challenging due to the disruption of computation and the partial view of the quantum state. Simulators, however, mimic quantum states on classical computers, enabling developers to inspect the full quantum state at any moment, facilitating the development of algorithms such as Variational Quantum Circuits (VQC) <cit.>, and deepening quantum programming understanding. Quantum Error Mitigation: Quantum systems are inherently noisy, and quantum error correction is still in its infancy. Simulations can help model the noise characteristics of quantum devices and develop error mitigation techniques <cit.>.
Conventionally, the evaluation metrics for quantum circuit simulations include simulation fidelity, computational speed, and resource usage. However, as with any computational process, quantum circuit simulation comes with an energy cost, which translates into substantial CO_2e emissions. Considering the global need to reduce greenhouse gas emissions, understanding the carbon footprint of quantum circuit simulations is as crucial as enhancing their performance. A balance needs to be struck between the pursuit of technological advancement and environmental sustainability. This sphere of interest encompasses diverse stakeholders: researchers and developers focused on quantum algorithm design, hardware companies like NVIDIA who are advancing quantum simulation projects such as cuQuantum<cit.>, and organizations advocating for sustainable practices due to the environmental implications.
To further illustrate the environmental impact of quantum circuit simulations, we compare their energy consumption and emissions with those of common life activities and classical machine training, as shown in Table <ref>. The result reveals that the CO_2e emissions resulting from quantum circuit simulations can exceed those of other activities and processes. For instance, the simulation can generate up to 1.81 times the CO_2e emissions of a one-way flight from New York to London. Notably, when compared with classical computing, the quantum circuit simulation demonstrates an even more pronounced environmental impact; it produces approximately 48 times the CO_2e emissions of training a standard transformer base model.
The main contributions of this paper are: (1) Bring the notion of environmental impact from quantum circuit simulation. (2) Build the initial model for calculating the CO_2e emissions of simulation.
§ ENERGY CONSUMPTION AND CARBON FOOTPRINT OF QUANTUM CIRCUIT SIMULATION
The carbon emissions from quantum circuit simulations derive from a multitude of sources. This includes Embodied Emissions, encompassing the carbon footprint from the manufacturing and disposal of hardware, Idle Power Consumption, representing the emissions when the system is powered but not actively processing, and Dynamic Power Consumption, which relates to active processing and data transfer. Dynamic Power Consumption is affected by the properties of a given quantum circuit, such as the number of qubits, the circuit depth, etc.
Besides, it also hinges on other factors, such as the computational resources utilized, their efficiency, and the simulation duration. These resources include processors (CPU/GPU), memory modules, cooling systems, and a multitude of peripheral devices.
Note that this investigation primarily focuses on Dynamic Power Consumption for quantum circuit simulations, but the proposed model can be easily extended to support other factors such as the load on the processors, the utilization of memory, and the efficiency of the cooling systems.
To formulate the simulation-emission model, we first define the system to run the simulation as follows.
For a granular estimate of the energy consumed, we factor in the number of processors (`n'), the average power per processor (`Pp' in kW), and the simulation duration (`T' in hours).
The environmental impact, measured as Carbon Dioxide Equivalent (CO_2e) emissions, is then determined by the Carbon Intensity (`CI'), a measure of CO_2e emissions per unit of electricity consumed (in kg/kWh), and the Power Usage Effectiveness (`PUE'). The PUE, a measure of data center efficiency, describes the proportion of total power consumption utilized directly by the computing equipment.
Incorporating the above definitions, the CO_2e emissions are calculated as: CO_2e = n × Pp × T × CI × PUE. This comprehensive approach allows us to estimate the precise environmental implications of quantum circuit simulations.
For example, consider a quantum circuit simulation on a personal computer with an average power draw of P = 0.04276 kW over an execution time of T = 0.01861 hours. The energy consumption for this simulation would be E = 0.04276 kW× 0.01861 hours = 0.00080 kWh.
Furthermore, using a Carbon Intensity value of CI = 0.429 kg/kWh, as per the average datacenter carbon emissions <cit.>, and a Power Usage Effectiveness of PUE = 1.58, reflecting the average industry datacenter PUE <cit.>, the CO_2e emissions can be computed as CO_2e = 0.00080 kWh× 0.429 kg/kWh× 1.58 = 0.00054 kg.
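For reference, the following short Python sketch implements the emission model and reproduces the worked example above; the default CI and PUE values are the averages cited in the text.

def co2e_kg(n_processors, power_per_processor_kw, duration_h,
            carbon_intensity_kg_per_kwh=0.429, pue=1.58):
    # CO2e = n x Pp x T x CI x PUE, following the model defined above.
    return (n_processors * power_per_processor_kw * duration_h
            * carbon_intensity_kg_per_kwh * pue)

# Single machine drawing 0.04276 kW for 0.01861 hours.
print(round(co2e_kg(1, 0.04276, 0.01861), 5))   # ~0.00054 kg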
§ RESULTS
§.§ Small Quantum Circuit Simulations
We first conducted our own simulations on small quantum circuits to examine the impact of certain parameters on energy consumption and emissions. However, because a single run of a quantum circuit simulation completes too quickly to yield meaningful energy and emission measurements, we set up the experiments on a quantum machine learning task, a promising field that applies the power of quantum computing to machine learning. In our experiment, we used the MNIST dataset for training purposes, comprising 20,000 training samples. The training was set for 20 epochs with a batch size of 256. We tracked the execution time during these runs, alongside the average power consumption for each processor core, to calculate the final CO_2e emission. The results of these measurements are depicted in Figure <ref>.
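A simplified sketch of the measurement procedure is given below. It assumes that the average power draw of the machine is obtained externally (e.g., from OS counters or a power meter) and converts the recorded training time into energy and CO_2e using the emission model introduced above; the placeholder training routine and power figure are illustrative only.

import time

def measure_emissions(train_fn, avg_power_kw, ci_kg_per_kwh=0.429, pue=1.58):
    # Time the training routine and convert the measured duration into
    # energy (kWh) and CO2e (kg) using an externally measured power draw.
    start = time.time()
    train_fn()
    hours = (time.time() - start) / 3600.0
    energy_kwh = avg_power_kw * hours
    return energy_kwh, energy_kwh * ci_kg_per_kwh * pue

def dummy_training():
    time.sleep(1)   # stand-in for the 20-epoch hybrid QML training loop

energy, co2e = measure_emissions(dummy_training, avg_power_kw=0.04276)
print(f"{energy:.6f} kWh, {co2e:.6f} kg CO2e")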
In addition, we conducted parallel experiments with classical neural network models, ensuring that the number of parameters corresponded with those in the quantum circuits. The outcomes from these classical models are represented by the red bars in Figure <ref>.
§.§ Large Quantum Circuit Simulations
In this section, we analyzed existing data on large quantum circuit simulations. Due to device limitations, we cannot currently scale up the quantum circuit simulations ourselves; however, we can still collect experimental results from large providers such as NVIDIA or AWS. Table <ref> reports the emission results based on the classical simulations of quantum circuits from AWS <cit.>. Note that the results in Table <ref> refer to a single run of a simulation rather than the training of a VQC. It is evident that the CO_2e emissions escalate as the number of qubits increases, meriting attention and concern during the development of quantum circuit simulation.
§ CONCLUSION
By analyzing the energy consumption and carbon emissions, we can better comprehend the environmental footprint of quantum circuit simulations, thus enabling us to devise strategies to reduce their impact.
|
http://arxiv.org/abs/2307.02855v1
|
20230706084616
|
It's more than just money: The real-world harms from ransomware attacks
|
[
"Nandita Pattnaik",
"Jason R. C. Nurse",
"Sarah Turner",
"Gareth Mott",
"Jamie MacColl",
"Pia Huesch",
"James Sullivan"
] |
cs.CR
|
[
"cs.CR",
"cs.CY"
] |
Institute of Cyber Security for Society (iCSS) & School of Computing,
University of Kent, UK Royal United Services Institute (RUSI), UK
[email protected]
It's more than just money: The real-world harms from ransomware attacks
Nandita Pattnaik1 Jason R.C. Nurse1,2* Sarah Turner1 Gareth Mott 1
Jamie MacColl 2 Pia Huesch 2 James Sullivan 2
August 1, 2023
As cyber-attacks continue to increase in frequency and sophistication, organisations must be better prepared to face the reality of an incident. Any organisational plan that intends to be successful at managing security risks must clearly understand the harm (i.e., negative impact) and the various parties affected in the aftermath of an attack. To this end, this article conducts a novel exploration into the multitude of real-world harms that can arise from cyber-attacks, with a particular focus on ransomware incidents given their current prominence. This exploration also leads to the proposal of a new, robust methodology for modelling harms from such incidents. We draw on publicly-available case data on high-profile ransomware incidents to examine the types of harm that emerge at various stages after a ransomware attack and how harms (e.g., an offline enterprise server) may trigger other negative, potentially more substantial impacts for stakeholders (e.g., the inability for a customer to access their social welfare benefits or bank account). Prominent findings from our analysis include the identification of a notable set of social/human harms beyond the business itself (and beyond the financial payment of a ransom) and a complex web of harms that emerge after attacks regardless of the industry sector. We also observed that deciphering the full extent and sequence of harms can be a challenging undertaking because of the lack of complete data available. This paper consequently argues for more transparency on ransomware harms, as it would lead to a better understanding of the realities of these incidents to the benefit of organisations and society more generally.
§ INTRODUCTION AND BACKGROUND
The volume of ransomware attacks — i.e., malware-based cyber-attacks characterised by blocking access to a device or/and encrypting valuable data <cit.> — is constantly increasing, with some reports finding that infections in businesses worldwide are as high as 71% <cit.>. The UK's National Cyber Security Centre highlights this significance by defining ransomware as the most acute threat faced by organisations today <cit.>. While there have been several articles and reports reflecting on ransomware, its nature, attack patterns, and mitigation strategies <cit.>, there is much less research on the actual negative impacts that can result from these incidents. We characterise such negative impacts using the term harms; this is similar to approaches taken by existing research <cit.>. Understanding harms from cyber-attacks is vital for a plethora of reasons, especially given their relevance in preparing for the consequences of attacks in the future. As argued in current literature, irrespective of whether an organisation adopts a threat-driven or impact-driven risk assessment, an incomplete understanding of the potential harms and the relationships between those harms can lead to the selection and deployment of inappropriate risk treatments and controls <cit.>.
This paper contributes to the field by critically examining the multitude of harms that can arise from cyber-attacks, with a focus upon the present threat of ransomware. We also propose a new methodology by which such incidents and their harms can be comprehensively modelled. Our research makes the point that researchers, businesses and policymakers must go beyond the current focus on financial harms (e.g., payment of ransoms, cost of recovery or cyber insurance claim amounts) to examine all types of real-world harm that can result (e.g., human, physical, social) and how these harms may influence or trigger each other. Ransomware poses a unique case study considering its prominence and ability to cripple unprepared organisations (e.g., UK's NHS and WannaCry <cit.>).
While existing research on ransomware harms and impacts is limited, there are some key articles worthy of review.
By empirically studying a dataset of 453 ransomware data investigation reports, Meurs et al. reported on specific factors contributing to the ransom requested, the likelihood of ransom payment and their influence on the financial losses <cit.>. They conducted a detailed statistical analysis to present several factors (such as the ransom paid, the revenue of the victims and the use of RaaS (Ransomware-as-a-Service) by an attacker) which were seen to be statistically significant determinants of the financial losses reported.
Wilner et al., on the other hand, commented on the wider international, political, intelligence and diplomatic ramifications of ransomware by analysing several ransomware cases <cit.>. This is a pertinent example of research into the non-financial and international impacts of such attacks. While these studies generally align with our work, Wilner et al. do not discuss the individual harms that might originate from various ransomware attacks, and Meurs et al.'s analysis focused on factors that contribute to financial harm rather than a reflection on differing types of harm.
On the broader concept of harms from cyber-attacks (i.e., not only ransomware), Agrafiotis et al. introduced a taxonomy of harm consisting of five major harm types, namely Physical/Digital, Economic, Psychological, Reputational and Social and societal harms <cit.>. This taxonomy was created using a mixed approach of deductive and inductive analysis and based on publicly-available organisational harm data, harm-related literature, and public databases. This enumeration and modelling of harm is one of the closest to our work and while it does not focus on ransomware nor a detailed modelling of harms from attack cases, it can inform our study.
Recent related research has also examined the nature of losses from cyber-related events across different risk categories and business sectors <cit.>. That study used a comprehensive database of cyber-loss data over 12 years from 2008–2020, affecting 49,496 organisations across 20 business sectors. It highlighted the heavy-tailed nature of cyber risks by analysing both the frequency and severity of losses from cyber events. This financial emphasis is clearly relevant to the research and business community but, as mentioned in the previous paragraph, it again risks not capturing the full range of negative impacts or intangible costs from cyber-attacks.
Studies, particularly Axon et al., have sought to complement existing research by using cyber insurance claims data to build harm-propagation trees that can enhance the understanding of the harms and links between harms after cyber-attacks <cit.>. The graph output from their study is a valuable tool for defining the frequency of each harm's occurrence and also the strength of the relations
between harms. Our research is similar though we benefit from a wider pool of data than what is available from insurance claims. Insurance forms also arguably prioritise harms with a financial component and therefore we expect our study to be more comprehensive in its definition and modelling of harms.
To address the gap in existing literature related to the definition and understanding of harms from ransomware attacks, we conducted a data-driven, sociotechnical research study. Specifically, we used publicly available data to analyse eight different ransomware incidents and enumerated the harms and harm relations (i.e., which harms lead to other harms) that emerged. These incidents were investigated through the construction of a series of ransomware harm models enumerating the relevant data. In addition to providing an improved appreciation of the long tail of harms after a ransomware incident, we posit that the modelling methodology proposed and these models themselves are significant for two reasons. First, they provide businesses with data that is necessary for effectively implementing risk controls within their organisations. That is, they encourage consideration of harms beyond initial server compromise or loss of data to wider harms that negatively affect the business and its stakeholders. Secondly, the methodology and resulting models explicitly highlight the wide nature of harms to researchers studying cyber-attacks and policymakers responsible for protecting an increasingly digital society.
§ METHODOLOGY
§.§ Definition and scope
The first step in our research process was to define its parameters and scope. Harm, as described earlier, is any negative impact that can occur from a cyber incident; a description adapted from existing work <cit.>. By their nature, harms can be vast and can transpire immediately (e.g., a compromised and inaccessible server) or in the longer term (e.g., a regulatory fine years after suffering a data breach). To facilitate a structured extraction and analysis of harms emerging from ransomware attacks, we decided to adopt an existing harm taxonomy <cit.>. This taxonomy provided an initial list of validated types of harm that could act as a foundation for our work. In terms of the cases scoped for data gathering, our choice of scenarios was informed by two factors: well-publicised or high-profile ransomware cases that took place at least three months prior (i.e., before December 2022), and sectors regularly impacted by ransomware attacks. The first factor was important because such cases would have more extensive reports and media coverage for us to draw on, and we would also be able to track harms over a longer period of time (i.e., not only immediately after the incident). The second factor was necessary to understand the extent and type of harms initiated and propagated by the frequent attacks in certain specific sectors.
In total, we selected and assessed eight incidents: (1) NHS, UK – WannaCry, 2017, (2) Health Service Executive (HSE), Ireland – Conti, 2021, (3) Hackney Council, UK – Pysa, 2020, (4) Atlanta City government, US – SamSam, 2018, (5) Colonial Pipeline, US – Darkside, 2021, (6) Travelex, UK – REvil, 2020, (7) UK Schools – Vice Society, 2022, and (8) Los Angeles Unified School District (LAUSD), US – Vice Society, 2022. These incidents represent significant ransomware attacks across highly impacted sectors such as healthcare, energy, government and finance. Due to space constraints, we will report on only two cases, the HSE Conti attack and the Hackney Council Pysa attack. These cases were specially chosen because they present relevant exemplars of the multitude of harms that can emerge from ransomware incidents and aptly capture some of the central themes arising from the other cases.
§.§ Collection and analysis of cases
For each case, a web search using the name of the organisation and the ransomware attack/group was used to source a diverse range of relevant articles, including audit investigation reports, where available, alongside newspaper articles, media reports and academic literature. Once collected, we extracted insights pertinent to the harms that occurred and the relations between harms (i.e., how a harm may lead to, or trigger, another harm). The range of sources amassed for each incident was crucial in creating a more complete picture of each attack. To guide our harm annotation and extraction process, four rules were followed.
* Rule 1 (R1): In cases where the harms—as guided by the harm taxonomy <cit.>—from the attack were directly stated in the article's text, these should be recorded and extracted as harms emerging within the case being studied.
* Rule 2 (R2): When an article's text does not precisely use a harm in its writing, but its connotation indicated a strong affinity towards a particular harm, annotate the paragraph/sentence as close as possible to the above-indicated meaning.
To demonstrate this point we use the following example excerpt taken from one of the case reports of Hackney Council <cit.>. For the excerpt, “some residents in the borough are still waiting for payments for various benefits” the harm in this situation was annotated as “Financial loss to residents” (as this pertains to residents not being able to receive payments that would allow them to access the welfare benefits they are entitled to).
* Rule 3 (R3): To more clearly articulate and present the harms discovered, similar harms should be placed into representative groups.
The following is an example taken from the HSE case <cit.>:
“In the community, primary care staff were unable to access patient appointment lists or contact details, patient history, treatment plans, x-ray facilities or monitoring of instrument sterilisation tracking.” This paragraph was recorded and noted as a single harm, namely “Loss/unavailable clinical data”, so as to capture the clinical nature of the data impacted, without creating new harms for each of the individual data types (e.g., patient history, treatment plans).
* Rule 4 (R4): Once a harm was identified, we also reviewed the article's text for any other harms that occurred as a result of (i.e., were triggered by) that initial harm, and these were recorded and extracted as harm relations. For example, consider the following text, “Inability to use HSE email accounts led to a delay in the General Register Office process leading to delays in child benefit payments for new births” that was extracted from the independent post-incident review of the Conti ransomware attack on HSE <cit.>. This led to the definition of the harm relations “Unavailable non-clinical system → Disrupted/delayed non-clinical services” and “Disrupted/delayed non-clinical services → Delayed child benefit payment”.
Each article, in turn, was then analysed and coded by two researchers separately according to these rules. Both harm and harm relations were stored in Mendeley[https://www.mendeley.com/] with all relevant document texts annotated as denoted above. Rules were essential for reducing subjectivity in the harm recording and extraction process, and to further validate the harms extracted, annotated texts from the articles were discussed across the author team. A representative sample of harms and relations from texts were also validated by a group of four researchers to settle any differences and produce an agreed set of harm and harm relations.
§.§ Harm model design
The primary aim of this research is to examine, and provide a methodology for highlighting, the multitude of harms that can arise from ransomware attacks, thereby providing an evidence base for an increased acknowledgement and understanding of these harms. To support this aim and to portray harms and their relationships visually, we constructed a series of harm models (one for each case) using the harm list and harm relations developed earlier (and based on the rules above). Models are a well-known technique to characterise complex real-world phenomena and have also been applied to explain harms in prior literature <cit.>.
We depict the harm model as a non-weighted directed graph G = (V, E), where each node u ∈ V represents an observed harm from the ransomware incident and each directed edge (p, q) ∈ E indicates that a relationship exists between the two harm nodes p and q (i.e., that harm p has been observed as causing or otherwise leading to harm q). Modelling harms as a directed graph also has other advantages — e.g., in detecting and preventing the possible propagation of harms, a key task for risk managers — as will be discussed later in this paper. This design methodology is also central to our contribution.
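As a minimal illustration of this design (not the tooling actually used in our analysis), the Python sketch below encodes the harm relations from the Rule 4 example above as a directed graph and queries how far a given harm propagates downstream; the edge list is a small subset chosen purely for illustration.

import networkx as nx

harm_graph = nx.DiGraph()
harm_graph.add_edges_from([
    ("Unavailable non-clinical system", "Disrupted/delayed non-clinical services"),
    ("Disrupted/delayed non-clinical services", "Delayed child benefit payment"),
])

# Risk managers can query how far a harm may propagate downstream.
print(nx.descendants(harm_graph, "Unavailable non-clinical system"))
# {'Disrupted/delayed non-clinical services', 'Delayed child benefit payment'}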
§ RESULTS
We structure this section by first presenting two of the cases that were modelled to provide further insights into the cases studied, the harms and how they emerged, and the models designed (using the process above) to demonstrate harms and their relations. The section then progresses to report on key observations and findings from the complete set of eight case studies. These observations are central to our research contribution as they present unique findings related to the wider understanding of harm from ransomware attacks.
§.§ Hackney Council, UK, 2020 attack by Pysa Ransomware
The first harm model to be presented covers the attack by Pysa Ransomware on Hackney Council. In October 2020, Hackney Council, a local authority within Greater London in the UK, came under attack by the Pysa ransomware group. The attack compromised essential council resources making them inaccessible. It consequently brought most of the council's operations to a standstill <cit.>. The various harms discovered in this case can be seen in Fig. <ref>. As depicted in the figure, examples of harms that emerged in the aftermath of the ransomware attack included Compromised council resources, Unavailable council systems, Loss/unavailable data in council system and Disruption of internal council operations. This led to the shutting down of several of the council's key external-facing services, such as the social benefits system and social care services.
The inability to serve residents living in the local authority area was exacerbated with the passing of time (as long as two years) <cit.>, resulting in an expensive clean-up and recovery cost of nearly £12 million, a huge backlog of work and a subsequent public leak of data <cit.>.
We can visualise some of these downstream harms in the figure as Delay/loss of external council services, Recovery costs, Backlog of attending to council services and Data leak/exposure in the Hackney Council harm model. To explain this in the context of our harm model design notation, when Data leak or exposure (p) is connected to Concerned residents (q), it demonstrates that there is a relationship between these two nodes and that it has been observed in the data that the harm node Data leak or exposure might lead to Concerned residents.
The non-functioning of crucial services such as social care services and benefit payments affected thousands of local residents whose daily lives were dependent on them. Harms affecting both staff and individual residents within the local authority area can be seen towards the right of the model, portraying both the loss of facilities (i.e., Unable to buy essential items, Disruption of life plans) and various psychological harms (e.g., Concerned resident and Worried resident).
§.§ HSE, Ireland, 2021 attack by Conti Ransomware
The HSE, Ireland's biggest public sector employer, was hit by a ransomware attack in 2021 leading to the closure of HSE's 4,000 locations, supporting 54 acute trusts and 70,000 devices <cit.>. The harms resulting from the attack to the hospitals, patients, clinical and non-clinical staff, and all third-party users of the hospital system were long-lasting, widespread, and devastating. Specifically, the ransomware infection led to an immediate shutdown of all hospital IT-driven clinical facilities, resulting in harms including Unavailable clinical system (Patient information system/Laboratory system/Clinical care), Unavailable non-clinical systems, and Unavailable clinical data, to mention a few <cit.>. The respective harm model is shown in Fig. <ref>. The aforementioned harms triggered a host of subsequent harms for patients and staff such as Patient being distress/disappointed/Frustrated or Confused staff and Reduced staff performance. The compromised system also led to the leak of sensitive patient data (i.e., Data loss/exposed) <cit.>. In Fig. <ref> we have also grouped two sets of related harms, i.e., Costs and Disrupted clinical services, primarily for ease of visualisation. This does, however, also have the benefit of showing how a single harm can lead to various others; for instance, Disrupted/delayed clinical operation causing a host of Disrupted clinical services.
§.§ Observations from case analysis and modelling
The process of identifying and modelling harms and their relationships by drawing on publicly-available data provided us with substantial insight into the real-world consequences of ransomware attacks. There are several salient observations that can be made from this research.
Ransomware attacks can result in a significant and diverse set of harms substantially beyond financial impacts. This point emerged clearly from our case studies. Physical/digital harm was one of the most common harm types and presented in every case we analysed; this was undoubtedly because ransomware attacks primarily aim to encrypt/block digital resources as a prerequisite to demanding a ransom. More specifically, the assessed ransomware cases depicted the physical/digital harms of Unavailability of resources, which can subsequently lead to Disruptions of internal operations and likely then to Disruption of (external) services. Another example of a common digital harm was the Stolen/exposed data as seen in the ViceSociety attack on Los Angeles Unified School Districts (LAUSD) <cit.> where the hackers allegedly stole 500GB of data from LAUSD.
Economic harms are the other set of common harms that result from a ransomware attack. These manifested in many different forms in the cases observed. For instance, we noted Ransom costs in the SamSam attack on the Atlanta government <cit.>, Recovery costs in the REvil attack on Travelex <cit.>, and Clean-up costs in the Conti attack on HSE. Apart from the aforementioned harms, which almost always receive more attention in public, there are, of course, other sets of harms, i.e. psychological and societal harms, which are equally important and, more often than not, materialised as a consequence of the harms above. This reiterates the fact that ransomware harms are more than just the financial and monetary impact. We could see various examples of such harm presented in our data. For example, there were psychological harms in Frustrated/upset employees in the REvil attack on Travelex <cit.> and Stressed staff in the WannaCry attack on the NHS <cit.>. At a societal level, Increased fuel prices nationally due to the Darkside attack on Colonial Pipeline <cit.> and Disrupted healthcare nationally after the Conti attack on HSE, were also apparent.
One difference to prior research on wider cyber-attacks that was identified from our analysis of cases was less coverage of reputational harms (i.e., negative external impressions on the impacted organisation) in some ransomware attacks. This is surprising given that cyber-attacks usually result in comprehensive negative impacts on the reputation of the breached organisation <cit.>. Our research did find that there was significant media attention and scrutiny placed on the compromised organisations as a result of the attack (undoubtedly due to their public-facing nature, the large-scale impact, and the money spent on the response). However, there was never a clear link to wholesale damage to the organisation's image. While it is out of the scope of this research to determine the reasons for this, some possibilities include sympathy for the victim organisation given, for instance, the well-resourced nature of these threat actors <cit.>, or a feature of specific sectors (e.g., the lack of coverage was salient with government entities in particular). Alternatively, this may also represent a limitation in the publicly accessible data comprising our dataset.
In assessing the range of harms, another salient observation was the absence of appropriate methods to formally capture and record the full set of
harms that may transpire. For instance, in literature <cit.> covering the NHS WannaCry attack, it is explained that a ransomware-related death would currently be impossible to formally report — and thus officially recognise a harm — as there is no code to input into a hospital form for that particular incidence, i.e. death due to a cyber/ransomware incident. This is one example, but we observed similar incidents where there were no protocols to properly capture/record, and therefore acknowledge or report on, the extent of harms from a ransomware attack.
Grounding our analysis and modelling in prior work (e.g., the harm taxonomy <cit.>) proved particularly useful as it enabled us to define a structured set of harms that was also closely linked to the actual data. For example, the generic physical harm of “Damaged or Unavailable” was adapted to “Unavailable council system (Key & non-key system)”, “Loss/unavailable data in council systems", and “Unavailable communication systems” in the Hackney case study. Similarly, we were able to present a level of granularity in our models that expanded beyond general recovery costs to different types of specific costs (recovery, rebuild, clean up, opportunity, etc.) that featured in mitigation, recovery and rebuilding in the aftermath of the incident. These further signified the more complicated and detailed nature of the cost involved. In general therefore, our work acts to further validate that taxonomy and exemplify how it can be applied and extended.
The analysis conducted also discovered a complex web of interconnected harms caused by ransomware attacks; this is aptly depicted in the harm models. The rules were especially useful here as they allowed us to assess the cases in depth and consider the various chains of events that arose which then depicted different harms and sequences of harms. The Colonial Pipeline ransomware attack by the Darkside group provided one such example. For instance, the shutdown of major gasoline pipelines (i.e., Disruption to gasoline supply) led to a reduction in gasoline availability (i.e., Unavailable gasoline resources), which caused anxiety and panic buying by consumers (i.e., Anxious and panicked consumers) and also resulted in a spike in gas prices. The situation became life-threatening when a car carrying four cans of gasoline burst into flames; although no-one was killed, this also resulted in more physical damage (i.e., Destroyed property harm) <cit.>.
One point to further explore from the modelling process was that ransomware attacks initially impact the technology and systems, but there is often subsequent harm affecting individuals. This builds on our earlier observation on psychological harms and their common occurrence across attacks. If we reflect on the Hackney Council case, for example, six out of the rightmost eight leaf harm nodes represent either harm to the residents of the council or harm to the staff working there. Reviewing the models broadly, harm relations often start with some digital/physical or economic harm and ultimately lead to harm to individuals, as depicted in the HSE model Unavailable clinical system leads to Disrupted clinical services which in turn leads to Angry/anguished patient, i.e., harm to the individual. This demonstrates the long tail of harms and highlights the social/human harms organisations might overlook as they are more difficult to assess, measure and accommodate.
To complement the observations above and summarise the various stakeholders that can be harmed by ransomware attacks as identified from our analysis, we present Fig. <ref>. This spotlights the infected organisation but also the several other entities and individuals likely to experience some form of harm. We take this opportunity to provide some insight into another case, namely the NHS WannaCry attack, and also present the stakeholders that experienced harm in Fig. <ref>. Having a clear idea of who might be the affected parties, what harms affect them, and what triggers those harms could put the businesses and policymakers in a better position to respond to attacks and draft appropriate policies.
A final notable observation was that the fear of a ransomware attack could prompt the same types of harm in uninfected organisations that have a link to the infected one (e.g., through the same supply chain or enterprise context) as in the infected or compromised organisations themselves. This was witnessed in the NHS case, where 46 non-infected Trusts shut down their operations for fear of infection and therefore experienced the same harms as the infected Trusts. The situation was even worse for some of these groups because, due to the attack, they were unable to get online and execute the kill switch needed to stop it <cit.>.
§ DISCUSSION AND CONCLUSION
§.§ Discussions
This paper contributed to existing work by investigating, and providing a methodology to explore, the plethora of real-world harms emerging from ransomware attacks, thereby directly informing the sociotechnical evidence base for researchers, businesses and policymakers. We engaged in a study of these harms by reviewing well-documented cases of ransomware attacks, creating models to understand the presence and relations between harms, and critically reflecting on these to extract a set of pertinent observations of importance to the wider community.
To comment more broadly on our findings, the harms we identified were extensive in both their nature and their extent. We were able to identify harms that are rarely acknowledged in research or industry and link these directly to the threat of ransomware attacks. We also noticed that each harm triggered a chain of further harms, i.e., infection leads to unavailable data, which leads to disruption of services, which can result in direct harm to employees or other individuals. This detailed identification of harms, and of the relations between them, provides valuable new information for organisations and policymakers seeking to implement measures that limit the harm caused by ransomware attacks.
Organisations can draw on our work's insights to: (a) improve the accuracy of their risk assessments and subsequent risk treatments because they would be able to incorporate a more complete set of harms that can emerge from ransomware-related risks; and (b) set up appropriate business continuity processes and incident response plans in preparation for a ransomware attack. Moreover, models such as these might serve as a sector-specific blueprint of harm propagation in case of ransomware attacks on certain sectors (e.g., healthcare or education) and help affected parties, including governments (who need to understand the harms of ransomware on critical infrastructure, healthcare, etc.), to plan preemptively. Generally, this modelling process also provides methodological guidance for policymakers in identifying the type and trajectory of ransomware harms which can then be used to develop more formalised cyber harm models.
Although temporal factors are not represented in these models, our analysis indicated that digital/physical and some economic harms are often experienced in the short-term period after/during an attack. Psychological harms on the other hand might be immediate or delayed depending on the nature of the service affected. This discussion around harms and their sequence is an interesting one as a better understanding of sequence provides an opportunity for remediation and preventing further harms. One challenge for organisations will undoubtedly be how far downstream in a harm model to consider and what is appropriate to include when assessing a cyber risk (inclusive of its impact).
§.§ Limitations and future work
It is challenging to understand the full extent of harm resulting from ransomware attacks. We used published reports, articles and literature as the basis for our study given the richness of information they presented. This information, however, is likely to be incomplete as there is almost certainly information that was withheld by the organisation or was not covered in the publicly-accessible reports that we were able to source. A relevant example is the HSE case and its harm model. This model was one of the richest in our study but this was undoubtedly influenced by the fact that there was an official audit report that was publicly released; only the NHS attack also had a similar public report. This further highlights some of the issues of exploring the harms of ransomware, or any cyber-attack; that is, a complete understanding may not be feasible even years after an attack.
Another related factor is the sequence of harms and harm relations identified. Our work aimed to represent the relationships present in the data and did not prejudge or rearrange stated sequences. As such, if it were to transpire that a relation was not accurate or overly simplistic, this would impact our work. We did attempt to address this issue via triangulating harms and relations across sources, however this is still a potential weakness. In spite of these potential issues, this research presents one of the few contributions to better understand harms of ransomware attacks, and thereby provide an evidence base beyond financial consequences.
There are two primary avenues for future research. The first involves expanding the set of cases studied to explore a few sectors in more depth in order to determine whether there are any patterns of harm (especially sequences and harm relations). The health sector is of most interest given the constant stream of attacks, the extensive coverage it tends to attract, and that it may release audit reports into the attack (as seen with HSE and NHS). Understanding patterns of harm is useful as it provides an opportunity to break the spread of harm and thereby limit the stakeholders impacted. The second avenue builds on the first and would seek to encode harm models such that automated analysis of harms across cases — ours or any others provided by the community — could be achieved. This would allow a quicker identification of patterns and would also ease uptake for organisations considering integrating our work into their risk analysis methods or policymakers reflecting on sector-wide harms.
§ ACKNOWLEDGEMENTS
This research was funded by The Research Institute for Sociotechnical Cyber Security, a collaboration of the UK's Engineering and Physical Sciences Research Council (EPSRC) and the National Cyber Security Centre (NCSC). We also thank Keenan Jones for contributions to the earlier parts of this research.
|
http://arxiv.org/abs/2307.02239v1
|
20230705122844
|
An Overview of the NET Playground -- A Heterogeneous, Multi-Functional Network Test Bed
|
[
"Paul Schwenteck",
"Sandra Zimmermann",
"Caspar von Lengerke",
"Giang T. Nguyen",
"Christian Scheunert",
"Frank H. P. Fitzek"
] |
eess.SY
|
[
"eess.SY",
"cs.SY"
] |
An Overview of the NET Playground - A Heterogeneous, Multi-Functional Network Test Bed
Paul Schwenteck1,
Sandra Zimmermann1,
Caspar von Lengerke1,
Giang T. Nguyen25,
Christian Scheunert3,
Frank H. P. Fitzek15
1 Deutsche Telekom Chair of Communication Networks, TU Dresden, Germany
2 Haptic Communication Systems, TU Dresden, Germany
3 Chair of Communication Theory, TU Dresden, Germany
5 Centre for Tactile Internet with Human-in-the-Loop (CeTI)
E-mails: {firstname.lastname}@tu-dresden.de
August 1, 2023
This paper provides an overview of the hardware and software components used in our test bed project the NET Playground. All source information is stored in the GitLab repository in <cit.>. In the Hardware section, we present sketches and 3D views of mechanical parts and technical drawings of printed boards. The Software section discusses relay control using shell scripts and the utilization of Ansible for automation. We also introduce a C++ framework for connecting with the INA231 energy sensor. This paper serves as a reference for understanding and replicating our project's hardware and software components.
Hardware test bed, energy measurement, distributed architectures
§ INTRODUCTION
The successful execution of any project relies on the effective integration of hardware and software components. This paper presents a detailed overview of the hardware and software aspects employed in our GitLab repository <cit.>, the NET Playground test bed, providing valuable insights into its design and functionality. The NET Playground is a heterogeneous, multi-functional network testbed with 128 single-board computers connected in an extensive network. All data in the repository are freely available and will continue to be updated, so the content of this overview may still change in the future.
The hardware section showcases our meticulous approach to gathering and organizing essential information. We present a comprehensive collection of sketches, 3D views, and dimension drawings for the individual components utilized in our project. The mechanical parts, including the sturdy frame, metal components, and transparent plexiglass plates, are presented with detailed 3D views. Additionally, we document the technical drawings and pictures of the printed boards, such as the level shifter, relay board, and INA231 energy sensor board.
The software section delves into two critical aspects: relay control and orchestration with Ansible. We introduce a shell script specifically developed for relay control, highlighting its role as an interface between software and hardware components. The script defines the Odroid pins and their connection to a level shifter, which enables efficient switching of the relays and precise regulation of the power supply. We discuss the development of individual shell scripts for each relay, tailored to their specific roles within the project, and a combined version for simultaneous control. Additionally, we explore the utilization of Ansible, an automation tool, in conjunction with an inventory file and ansible-playbooks. We explain how host groups are defined based on IP addresses, and we provide examples of general-purpose ansible-playbooks for powering on/off Odroids and installing/configuring IPFS on designated host groups.
Furthermore, we introduce a C++ framework for connecting with the INA231 energy sensor. This framework facilitates TCP-based communication, allowing a central device to retrieve current measurements from all sensor-equipped devices in the network.
§ HARDWARE
In the Hardware Section, we have gathered a comprehensive collection of sketches for the individual components of our project.
To ensure organized data management, we have classified the information into two distinct groups:
i) Mechanical Parts: This group comprises various components, such as the sturdy frame, all the essential metal parts, and the transparent plexiglass plates. We have meticulously prepared detailed 3D views of these mechanical elements, providing a comprehensive understanding of their structure, dimensions, and interconnections. These visual representations offer valuable insights into the overall design and facilitate accurate hardware assembly.
ii) Printed Boards: In this category, we have focused on the intricate circuitry of our project. Specifically, we have documented the technical drawings and schematics of critical boards, including the level shifter, relay board, and the board housing the INA231 energy sensor. These detailed drawings allow for a precise understanding of the circuit connections, component placement, and proper positioning of the relevant electronic elements. Additionally, we have provided dimension drawings for the metal parts associated with the printed boards, enabling a comprehensive understanding of their exact measurements.
§ SOFTWARE
§.§ Relay Control
Here, we have included the shell script responsible for controlling the relays that govern the power supply of our devices. The script is a crucial interface between the software and hardware components, allowing us to manage the relay's operation effectively.
Within the script, we have defined the pins on the Odroid, the board we use to control the relays. These pins are connected to a level shifter, which converts the output voltage to 5 V. This 5 V output is then used to switch the relays, enabling us to accurately regulate the power supply to our devices.
To provide comprehensive control over the relay system, we have developed individual shell scripts for each of the four relays. This approach allows us to tailor the behavior of each relay based on its specific role within the project. Whether activating or deactivating a particular device or managing power flow to specific components, these individual scripts provide fine-grained control.
Additionally, we have also created a combined version of the shell script. This unified script allows us to control multiple relays simultaneously, streamlining the management process and simplifying the overall control of the power supply.
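To illustrate the switching logic concretely, the following minimal sketch mirrors what such a script does, written in Python purely for illustration (the repository itself uses shell scripts). The GPIO numbers, sysfs paths, and delay are hypothetical placeholders; only the idea of staggered switching through the level shifter is taken from the description above.

import time

# Hypothetical GPIO numbers of the level-shifter inputs driving the four relays.
RELAY_PINS = [18, 19, 22, 23]
GPIO_ROOT = "/sys/class/gpio"  # standard Linux sysfs GPIO interface


def export_pin(pin: int) -> None:
    """Make a GPIO pin available and configure it as an output."""
    try:
        with open(f"{GPIO_ROOT}/export", "w") as f:
            f.write(str(pin))
    except OSError:
        pass  # pin was already exported
    with open(f"{GPIO_ROOT}/gpio{pin}/direction", "w") as f:
        f.write("out")


def set_pin(pin: int, value: int) -> None:
    """Drive the pin high (1) or low (0); the level shifter raises this to 5 V."""
    with open(f"{GPIO_ROOT}/gpio{pin}/value", "w") as f:
        f.write(str(value))


def power_on_all(delay_s: float = 1.0) -> None:
    """Switch the relays one after another to avoid switching all nodes at once."""
    for pin in RELAY_PINS:
        export_pin(pin)
        set_pin(pin, 1)
        time.sleep(delay_s)  # stagger the switching, as described above


if __name__ == "__main__":
    power_on_all()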
§.§ Ansible
Ansible <cit.> works with a combination of an inventory file and ansible-playbooks.
The inventory defines host groups depending on their IP addresses.
A group consists of a [name] and an IP address range. In our case, we have a group called [odroids-testgroup] that consists of devices with IP addresses ranging from 192.168.1.1 to 192.168.1.16.
In addition to the address range, we can define the ssh password that Ansible should use when connecting to the device, which is, in our case, odroid.
When executing ansible-playbooks, the path to the inventory needs to be defined with the -i flag.
[basicstyle=, frame=single]
# The inventory file for the NET PLayground
[odroids-testgroup]
192.168.1.[1:16] ansible_ssh_pass=odroid
[odroids-testgroup-consumer]
192.168.1.1 ansible_ssh_pass=odroid
[odroids-control]
192.168.[1:8].42 ansible_ssh_pass=odroid
We divide our ansible-playbooks into two categories: one for general purposes and one for specific use cases. The two general-purpose ansible-playbooks are odroids_power and ipfs_init.
The ansible-playbook odroids_power copies the gpio.sh script to all control-odroids and runs the script on them. The host group odroids-control is defined in the inventory file. The sh-script defines gpio pins for the shifter board connected to the relays. The pins are activated one after another, which switches the relays and powers the individual odroids. The control-odroid must not switch all nodes simultaneously, as that would lead to voltage spikes in the power supply and damage it. In the following, we show the code for the playbook, which includes the host group odroids-control defined in the inventory, the option become for executing the tasks on a sudo level, and the two tasks. It is also possible to use variables like power that can be set when executing the ansible-playbook.
[basicstyle=, frame=single]
—
# This playbook turns on all odroids;
# control-odroids need to be connected
- hosts: odroids-control
become: yes
tasks:
- name: copy swr file to remote host
copy:
src: ./gpio.sh
dest: /home/odroid/
- name: switch odroids power
shell:
| bash /home/odroid/gpio.sh power
The ansible-playbook ipfs_init installs and configures IPFS <cit.> on the defined host group. For the script to work, we first need to build IPFS on the destination device and copy the built version of IPFS into the directory of the ansible-playbook. The playbook then copies the built version to all devices and initialises it. Additionally, it creates an IPFS service file so that IPFS starts when the device is booted. Logs from the IPFS service are stored in /var/ on the device. The service file is included in the directory.
The individual playbooks are divided according to the experiments. We currently have only one experiment, which measures energy consumption in an IPFS peer-to-peer network. For it, we have defined scripts that set the link properties, such as delay and bandwidth.
Fixed links are essential for a deterministic measurement process.
Another playbook deletes all added content in IPFS and resets all traffic control settings. This way, the odroids can be used for another experiment without previous presets.
§.§ INA231
For the connection with the INA231 sensor, we have developed a robust and flexible framework in C++. This framework facilitates seamless integration of the INA231 sensor with other devices in our project. A device with the framework installed can easily connect to any other device equipped with an INA231 sensor, enabling efficient data communication and measurement retrieval.
The connection between devices is established using TCP, a reliable and widely-used communication protocol. Through this TCP connection, the measured current values from the INA231 sensor are transmitted, allowing for real-time monitoring and data collection. This approach enables a central device to retrieve measurement data from multiple devices in the network, providing a centralized and comprehensive view of the energy consumption across the project.
To facilitate the installation process, we have included an installation script, install.sh, in the install files. This script streamlines the framework's setup on the target device, ensuring a smooth and hassle-free installation experience. Additionally, we provide a detailed help file, help.txt, which offers comprehensive instructions and guidance for configuring and utilizing the framework effectively.
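To illustrate how a central device might collect readings from this framework, the following Python sketch polls several nodes over TCP. The node addresses, the port number, and the assumption that each reading arrives as a newline-terminated numeric string are placeholders only; the actual wire format is defined by the C++ framework in the repository.

import socket

# Hypothetical addresses of nodes running the INA231 measurement framework.
NODES = ["192.168.1.1", "192.168.1.2"]
PORT = 5000  # placeholder port; the real port is configured by the framework


def read_current(host: str, port: int = PORT, timeout: float = 2.0) -> float:
    """Fetch one current reading, assumed to arrive as a newline-terminated number."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        buf = b""
        while not buf.endswith(b"\n"):
            chunk = sock.recv(64)
            if not chunk:
                break
            buf += chunk
    return float(buf.strip())


if __name__ == "__main__":
    for node in NODES:
        try:
            print(node, read_current(node), "A")
        except OSError as err:
            print(node, "unreachable:", err)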
§ ACKNOWLEDGMENT
Funded in part by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) as part of Germany's Excellence Strategy – EXC 2050/1 – Project ID 390696704 – Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden as well as by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) under Project ID 450566247 and the Federal Ministry of Education and Research of Germany in the programme of "Souverän. Digital. Vernetzt.” – Joint project 6G-life – projectID: 16KISK001K and 16KISK002
|
http://arxiv.org/abs/2307.00833v1
|
20230703081845
|
Anisotropic Fanning Aware Low-Rank Tensor Approximation Based Tractography
|
[
"Johannes Grün",
"Jonah Sieg",
"Thomas Schultz"
] |
eess.IV
|
[
"eess.IV"
] |
Anisotropic Fanning Aware Low-Rank Tensor Tractography
J. Gruen et al.
Institute for Computer Science, University of Bonn, Germany
Bonn-Aachen International Center for Information Technology, Germany
Anisotropic Fanning Aware Low-Rank Tensor Approximation Based Tractography
Johannes Gruen1,20000-0002-9154-3929 Jonah Sieg10009-0002-0604-7320 Thomas Schultz2,10000-0002-1200-7248
Low-rank higher-order tensor approximation has been used successfully to extract discrete directions for tractography from continuous fiber orientation density functions (fODFs). However, while it accounts for fiber crossings, it has so far ignored fanning, which has led to incomplete reconstructions. In this work, we integrate an anisotropic model of fanning based on the Bingham distribution into a recently proposed tractography method that performs low-rank approximation with an Unscented Kalman Filter. Our technical contributions include an initialization scheme for the new parameters, which is based on the Hessian of the low-rank approximation, pre-integration of the required convolution integrals to reduce the computational effort, and representation of the required 3D rotations with quaternions. Results on 12 subjects from the Human Connectome Project confirm that, in almost all considered tracts, our extended model significantly increases completeness of the reconstruction, while reducing excess, at acceptable additional computational cost. Its results are also more accurate than those from a simpler, isotropic fanning model that is based on Watson distributions.
§ INTRODUCTION
Diffusion MRI tractography <cit.> permits the in-vivo
reconstruction of white matter tracts in surgery planning or scientific studies.
Spherical deconvolution is widely used to account for intra-voxel heterogeneity
by estimating a continuous fiber orientation density function (fODF) in each
voxel <cit.>. Representing fODFs as higher-order tensors and
applying a low-rank approximation to these tensors has been shown to be a robust and
efficient approach to estimating discrete tracking directions
<cit.>.
However, while low-rank approximation accounts for fiber crossings, it ignores fiber fanning <cit.>. Consequently, even though recent work <cit.> has achieved promising results by performing low-rank approximation within the framework of Unscented Kalman Filter (UKF) based tractography <cit.>, some fanning bundles were extracted incompletely when using single-region seeding strategies <cit.>.
We address this limitation by explicitly modeling anisotropic fanning in the
low-rank UKF with Bingham distributions <cit.>. This involves three
main technical challenges: Firstly, initializing additional parameters in the
UKF state. Section <ref> solves this by observing
that the Hessian matrix at the optimum of the low-rank approximation indicates
the amount and direction of fanning. Secondly, the computational effort of
convolving rank-one tensors with Bingham distributions.
Section <ref> solves this by pre-computing the corresponding
integrals and storing results in lookup tables. Thirdly, maintaining a full 3D rotation per fiber compartment. Section <ref> solves this with a quaternion-based representation. Results in Section <ref> indicate that our extension reconstructs fanning bundles significantly more completely, while reducing excess, at acceptable additional computational cost.
§ BACKGROUND AND RELATED WORK
§.§ Low-rank tensor approximation model
Constrained spherical deconvolution (CSD) computes the
fiber orientation distribution function (fODF), a mapping from
the sphere to ℝ_+ which captures the
fraction of fibers in any direction <cit.>. One widely used strategy for estimating principal fiber orientations is to consider local fODF maxima. Our work builds on a variation of CSD, which
represents the fODF as a symmetric higher-order tensor 𝒯 and estimates r fiber
directions via a rank-r approximation
𝒯^( r ) = ∑_i=1^r α_i
𝐯_i^⊗ l,
where the scalar α_i ∈ℝ_+ denotes the volume fraction of the
ith fiber, 𝐯_i ∈𝕊^2 its direction, and the superscript ⊗ l indicates an l-fold symmetric outer product, which turns the vector into an order-l tensor. The main benefit of this approach is that it can separate crossing fibers even if they are not distinct local maxima, which permits the use of lower orders and in turn improves numerical conditioning and computational effort <cit.>. Specifically, the angular resolution of fourth-order tensor approximation for crossing fibers has been shown to exceed order-eight fODFs with peak extraction <cit.>. To additionally capture information about anisotropic fanning, our current work increases the tensor order to l=6, which parameterizes each fODF with 28 degrees of freedom.
§.§ Bingham distribution
The Bingham distribution <cit.> is the spherical and antipodally symmetric
(f( 𝐱) = f ( -𝐱)) analogue
to a two dimensional Gaussian distribution. It is given by the probability density
function
f ( 𝐱; 𝐌, 𝐙) = 1/N ( 𝐙) exp( 𝐱^T 𝐌𝐙𝐌^T 𝐱),
where 𝐙 is a diagonal matrix with decreasing entries z_1 ≥
z_2 ≥ z_3, 𝐌 = ( μ_1, μ_2, μ_3 ) is an orthogonal matrix and
N ( 𝐙) denotes the hypergeometric function of matrix argument. Without loss of generality, we set
z_3 = 0 and rename κ = z_1, β = z_2 to rewrite the
density function as:
f ( 𝐱; μ_1, μ_2, κ, β) = 1/N (
κ, β)exp( κ⟨μ_1, 𝐱⟩^2
+ β⟨μ_2, 𝐱⟩^2).
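For illustration, a minimal NumPy sketch of the unnormalized density above, together with the kind of rejection sampling used later during tracking, is given below. It assumes κ ≥ β ≥ 0, in which case exp(κ) bounds the unnormalized density on the sphere and the normalization constant N(κ, β) cancels from the acceptance ratio; the efficiency of this simple sampler is not representative of an optimized implementation.

import numpy as np


def bingham_unnormalized(x, mu1, mu2, kappa, beta):
    """exp(kappa <mu1,x>^2 + beta <mu2,x>^2); antipodally symmetric by construction."""
    return np.exp(kappa * np.dot(mu1, x) ** 2 + beta * np.dot(mu2, x) ** 2)


def sample_direction(mu1, mu2, kappa, beta, rng=None):
    """Draw a unit vector from the Bingham distribution by rejection sampling
    against a uniform proposal on the sphere (valid for kappa >= beta >= 0)."""
    rng = rng or np.random.default_rng()
    bound = np.exp(kappa)  # upper bound of the unnormalized density on the sphere
    while True:
        x = rng.normal(size=3)
        x /= np.linalg.norm(x)  # uniform direction on S^2
        if rng.uniform() * bound <= bingham_unnormalized(x, mu1, mu2, kappa, beta):
            return x


# Example: strong concentration around the z-axis with additional fanning towards y.
mu1, mu2 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
print(sample_direction(mu1, mu2, kappa=30.0, beta=5.0))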
The Bingham distribution was used previously to model anisotropic fanning: Riffert et al. <cit.> fitted a mixture of Bingham distributions to the fODF to compute metrics such as peak spread and integral over peak. Kaden et al. <cit.> used it for Bayesian tractography. Our contribution combines the Bingham distribution with the low-rank model, and estimates the resulting parameters with a computationally efficient Unscented Kalman Filter.
§.§ Unscented Kalman Filter
The Kalman Filter is an algorithm that estimates a set of unknown variables,
typically referred to as the state, from
a series of noisy observations over time. The Unscented Kalman Filter (UKF)
<cit.> is an extension that permits a non-linear relationship between the
unknown variables and the measurements. It has first been used for tractography
by Malcolm et al. <cit.>, who treat the
diffusion MR signal as consecutive measurements along a fiber, and the parameters of a mixture of diffusion tensors <cit.> or Watson distributions <cit.> as the unknown variables. Compared to independent estimation of model parameters at each location, this approach reduces the effects of measurement noise by combining local information with the history of previously encountered values. Consequently, it has been used for scientific studies <cit.> as well as neurosurgical planning <cit.>.
Recent work has used the UKF to estimate the parameters of the low-rank model <cit.>. This variant of the UKF treats the fODFs instead of the raw diffusion MR signal as its measurements, which increases tracking accuracy while reducing computational cost, due to the much lower number of fODF parameters compared to diffusion-weighted volumes. A remaining limitation of that approach is that it does not account for fanning.
§ MATERIAL AND METHODS
We extend the previously described low-rank UKF <cit.> by modeling directional fanning with a Bingham distribution (Section <ref>). Implementing this requires solving problems related to initialization (Section <ref>), efficient evaluation of certain integrals (Section <ref>), and representing rigid body orientations within the UKF (Section <ref>). Section <ref> describes the resulting tractography algorithm, while Section <ref> reports the data and measures that we use for evaluation.
§.§ Low-rank model with anisotropic fanning
The higher-order tensor variant of CSD adapts the deconvolution so that it maps the single fiber response to a rank-one tensor <cit.>. Therefore, fanning can be incorporated by convolving the rank-1 kernel k with the Bingham distribution
h^( r ) = ∑_i = 1^rα_i f ( ·;
μ^( i )_1,
μ^( i )_2, κ^( i ),
β^( i )) ⋆ k ,
where α_i denotes the volume fraction of the ith fiber in direction
μ^( i )_1, κ^(i) the concentration around it (i.e.,
the inverse to the amount of fanning). In case of anisotropic fanning,
β^(i)>0 indicates the additional amount of fanning in direction
μ^( i )_2. For κ^(i)→∞ and
β^(i) = 0, the Bingham distributions converge to delta peaks and
the model (<ref>) converges towards the original low-rank model
(<ref>) with fiber directions μ^( i )_1=𝐯_i.
§.§ Initialization via the low-rank model
Since it is difficult to fit the model in Eq. (<ref>) to
data, we initialize the UKF based on the original low-rank approximation in
Eq. (<ref>). Firstly, we use the same main fiber directions,
μ^( i )_1=𝐯_i. Secondly, we initialize the fanning related parameters by observing that the rate at which the approximation error grows when rotating a given fiber direction away from its optimum depends on the amount of fanning: The lower the amount of fanning (the sharper the fODF peak), the more sensitive is the approximation error to the exact direction.
For each fiber, this information is captured in the second derivatives of the cost function with respect to its orientation, i.e., a 2× 2 Hessian that can be computed in spherical coordinates; an equation for this is derived in <cit.>. There is a one-to-one mapping between the eigenvalues of that Hessian and corresponding values of κ and β. The eigenvector corresponding to the lower eigenvalue indicates the dominant fanning direction μ_2.
We pre-compute a lookup table for the values of κ and β, given the Hessian eigenvalues. To this end, we utilize the model (<ref>) to generate single fiber
fODFs for various combinations of
κ and β values, and record the resulting eigenvalues. Figure <ref> visualizes the mapping from eigenvalues to κ and β. Subfigure <ref> shows that κ increases with the larger
eigenvalue, indicating a higher concentration around the main fiber direction. Subfigure <ref> shows how β depends on both eigenvalues. If they are the same, fanning is isotropic (β=0), while for any given larger eigenvalue (EV1), β increases, indicating increasingly elliptic fanning, as the smaller eigenvalue (EV2) decreases towards zero.
We apply this lookup table to multi-fiber voxels by computing the residual fODF for each fiber (i.e., we subtract out the remaining fibers), and normalizing it such that α=1 to eliminate scaling effects. After fixing all fiber directions and fanning parameters, we fit the remaining volume fractions α_i in Eq. (<ref>) with a non-negative least squares solver.
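As a sketch of how such a lookup can be realized, the following Python fragment tabulates eigenvalue pairs over a (κ, β) grid and returns the nearest tabulated entry for a measured pair. The routine that maps a given (κ, β) to the Hessian eigenvalues is model specific and is represented here only by a placeholder callable; the grid resolution and the choice between nearest-neighbour and interpolated lookup are implementation details.

import numpy as np


def build_lookup(kappas, betas, eigenvalue_fn):
    """Tabulate the Hessian eigenvalue pair (EV1, EV2) for a grid of (kappa, beta).

    eigenvalue_fn(kappa, beta) stands for the procedure described above: generate a
    single-fiber fODF from the convolution model and record the eigenvalues of the
    Hessian of the rank-1 approximation error at the optimum (omitted here).
    """
    rows = [(k, b, *eigenvalue_fn(k, b)) for k in kappas for b in betas if b <= k]
    return np.array(rows)  # columns: kappa, beta, ev1, ev2


def kappa_beta_from_eigenvalues(table, ev1, ev2):
    """Return the tabulated (kappa, beta) whose eigenvalue pair is closest."""
    d = (table[:, 2] - ev1) ** 2 + (table[:, 3] - ev2) ** 2
    k, b = table[np.argmin(d), :2]
    return k, b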
§.§ Pre-computing the convolution
Equation (<ref>) involves a
convolution between a rank-1 kernel and
a Bingham distribution. To compute it efficiently, we first split the Bingham distribution into a standard version and a rotation part.
We rewrite
f (
𝐱; 𝐌, 𝐙) = ( D ( ϑ,
ψ, ω) g ) ( 𝐱; κ, β) = g (
𝐌^-1𝐱; κ, β),
where g ( 𝐱; κ, β) = 1/N ( κ, β) exp( κ𝐱_3^2 + β𝐱_2^2 ) is a standard Bingham distribution in the canonical basis, oriented towards the north pole, and
D is the zyz rotation matrix, which is defined as
𝐌 = D ( ϑ, ψ, ω) = R_z ( ϑ) R_y ( ψ) R_z ( ω)
with
R_z ( α) = (
cosα sinα 0
- sinα cosα 0
0 0 1
)
and
R_y ( α) = (
cosα 0 sinα
0 1 0
- sinα 0 cosα
) .
This decomposition
is a significant simplification, because
we can now pre-compute the convolution between the standard Bingham
distribution and the kernel, and apply the rotation afterwards.
As it is standard practice in CSD <cit.>, we perform the convolution on the sphere using spherical and rotational harmonics. A rotational harmonics representation of the rank-1 kernel has been computed previously <cit.>. Unfortunately, no closed form solution is available for the spherical harmonics coefficients of the Bingham distribution. Therefore, we pre-compute them numerically, for the relevant range of κ∈{ 2.1, 2.2 , … , 89 } and
β∈{ 0, 0.1, … , κ - 2 }.
§.§ Representing rotations with quaternions
Unlike previous UKF-based tractography methods, our model requires a full three-dimensional rotation per fiber to account not just for the fiber direction, but also for the direction of its anisotropic spread. Unit quaternions are a popular representation of rotations, since they overcome limitations of Euler angles, such as gimbal lock. However, integrating them into a UKF is non-trivial, since their normalization leads to
dependencies within the state <cit.>. We
overcome the problem by utilizing a homeomorphism between quaternions and ℝ^3. We discuss the relevant steps of that approach, but refer the reader to work by Bernal-Polo et al. <cit.> for a more detailed discussion of quaternions and this way of integrating them into the UKF. More detailed explanations of UKF-based tractography are also available in the literature <cit.>.
Given a quaternion q = [ q_w, q_x, q_y, q_z ] ∈ℍ,
we define a homeomorphism
ϕ : 𝕊^3 →{𝐞∈ℝ^3 : ‖𝐞‖≤ 4 },  q ↦ 4 ( q_x, q_y, q_z ) / ( 1 + q_w ),
to the so-called Modified Rodrigues Parameters <cit.> and, vice versa,
ϕ^-1 : {𝐞∈ℝ^3 : ‖𝐞‖≤ 4 }→𝕊^3,  𝐞↦ 1/( 16 + ‖𝐞‖^2 ) ( 16 - ‖𝐞‖^2, 8𝐞 ).
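In code, the two charts are straightforward; the following NumPy sketch uses the component order (q_w, q_x, q_y, q_z) from above and assumes a unit quaternion with q_w ≥ 0, i.e., a quaternion already rotated close to the identity.

import numpy as np


def to_mrp(q):
    """phi: unit quaternion (q_w, q_x, q_y, q_z) -> Modified Rodrigues Parameters."""
    q = np.asarray(q, dtype=float)
    return 4.0 * q[1:] / (1.0 + q[0])


def from_mrp(e):
    """phi^{-1}: point e in the ball of radius 4 -> unit quaternion."""
    e = np.asarray(e, dtype=float)
    n2 = np.dot(e, e)
    return np.concatenate(([16.0 - n2], 8.0 * e)) / (16.0 + n2)


q = np.array([0.9, 0.1, 0.3, 0.2])
q /= np.linalg.norm(q)
assert np.allclose(from_mrp(to_mrp(q)), q)  # the round trip recovers the quaternion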
In a close neighborhood of
the identity quaternion, these charts behave like the identity transformation
between the imaginary part of quaternions and ℝ^3. For a given mean
quaternion q̅ of a set of quaternions { q_i }_i, we define a mapping that
first rotates each q_i by the conjugated mean quaternion
q̅^⋆ = [ q̅_w, -q̅_x, -q̅_y, -q̅_z ] and pushes it to ℝ^3,
ϕ_q̅( q_i ) = ϕ( q̅^⋆ ⋆ q_i ) = 𝐞,
and which, conversely, pulls a point 𝐞 back and rotates it back via
ϕ^-1_q̅( 𝐞) = q̅⋆ϕ^-1( 𝐞) .
Assuming that the quaternions are highly concentrated around the mean
quaternion, the embedding resembles the distribution of quaternions closely.
With these preliminaries, we set up the UKF as illustrated in Figure <ref>. We only show it for a single
fiber direction. For simplicity, our implementation updates the parameters for each fiber separately. The state at point t is
defined by the parameters of a single Bingham distribution in the embedded space
X_t = {α, κ, β, 𝐞_1, 𝐞_2, 𝐞_3 }.
The embedding is fully determined by the quaternion q_t and the covariance is
denoted by P_t.
We create sigma points to capture the distribution of the covariance around the
current mean. We use the sigma points to calculate a chart update q_t+1, by taking a weighted mean with weights w_i and pulling the embedded part back into quaternion space. With the new chart we
perform a chart transition. Afterwards, we follow the standard UKF update
scheme: Firstly, calculate the weighted mean of the sigma points,
evaluate our model for all sigma points and take the corresponding weighted mean.
Secondly, calculate the covariance P_xx of the sigma points, the
covariance of the evaluation P_zz and the cross correlation P_xy. This
information is then used to calculate the Kalman gain K and correct the
current state X_t dependent on the difference between the expected measurement
Z̅ and the fODF z as well as the covariance.
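The correction just described is the standard UKF measurement update; using the covariance names from the text, it can be written compactly as in the NumPy sketch below. Treating the measurement noise R as a scalar multiple of the identity is an assumption of this sketch.

import numpy as np


def ukf_correct(x, z, z_bar, P_xx, P_zz, P_xy, R):
    """Standard UKF measurement update with the covariance names used above.

    x: predicted state, z: measured fODF coefficients, z_bar: expected measurement,
    P_xx: state covariance, P_zz: measurement covariance, P_xy: cross covariance,
    R: scalar measurement noise added to P_zz.
    """
    S = P_zz + R * np.eye(len(z_bar))
    K = P_xy @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - z_bar)   # correct the state with the innovation
    P_new = P_xx - K @ S @ K.T    # corrected state covariance
    return x_new, P_new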
§.§ Probabilistic streamline-based tractography
For a given seed point, we initialize the UKF as discussed in Section <ref>. We perform streamline integration with second-order Runge-Kutta: At the jth point of the streamline, we
update the UKF, select the Bingham
distribution whose main direction is closest by angle to the current tracking direction, and draw a direction from that Bingham distribution via rejection sampling. We use that direction for a tentative half-step, again update the UKF and perform rejection sampling. Finally, we reach point (j+1) by taking a full step from point j in that new direction. This process is iteratively conducted
until a stopping criterion is reached. We stop the integration if the white matter density drops below 0.4 or if we
cannot find any valid direction within 60 degrees.
§.§ Data and evaluation
It is the goal of our work to modify the UKF so that it more completely reconstructs fanning bundles from seeds in a single region. We evaluate this on 12 subjects from the Human Connectome
Project (HCP) <cit.> for which
reference tractographies have been published as part of
TractSeg <cit.>. They are based on a segmented and manually refined whole-brain tractography. We evaluate reconstructions of these tracts from seed points that we obtain by intersecting the reference
bundles with a plane, and picking the initial tracking direction that is closest to the reference fiber's tangent at the seed point. We estimate fODFs using data from all three b shells that are available in the HCP data <cit.>.
We use a step size of 0.5 mm and seed 3 times at
each seed point. Due to the probabilistic nature of our method, we perform density filtering to remove single outliers. Since diffusion MRI tractography is known to create false positive streamlines <cit.>, we also apply filtering based on inclusion and exclusion regions similar to the ones described by Wakana et al. <cit.>. We place those regions manually in a single subject, and transfer them to the remaining ones via linear registration. Any streamline that does not intersect with all inclusion
regions or intersects with an exclusion region is removed entirely.
To make the comparison against the previously described low-rank UKF <cit.> more direct, we set its tensor order to 6. We also evaluate the benefit of modeling anisotropic fanning by implementing a variant of our approach that uses an isotropic Watson distribution, and could be seen as an extension of the previously proposed Watson UKF <cit.>. For this
model and for the Bingham UKF, we conducted a grid search to tune parameters and
finalized Q = {α =0.05, κ =0.05, v_1 = 0.02, v_2 =
0.02,v_3= 0.02 } and R=0.02 for the Watson UKF and Q = {α = 0.01, κ=0.1, β=0.1, e_1=0.005, e_2=0.005, e_3=0.005
} and R=0.02 for the Bingham UKF. For all models we set the fiber
rank to 2.
We judge the completeness and excess of all tractographies based on distances between points on the reference tracts, and the generated ones. Specifically, we employ the 95% quantile χ^95% of the directed Hausdorff distance
h ( A, B ) = χ^95%{min_𝐛∈ B‖𝐚 - 𝐛‖ : 𝐚∈ A } ,
where A and B denote point sets <cit.>. Intuitively, if the 95% quantile
of h(A, B) = d, then 95% of the vertices of A are within distance d
from some point of B. This measure is not symmetric.
Thus, setting A to the reference tractography and B
to the reconstruction penalizes false negatives (it scores completeness), while switching the arguments penalizes false positives (it scores the excess).
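This quantile is straightforward to compute; the following sketch implements the directed measure above for point sets stored as NumPy arrays, assuming the Euclidean norm and using a k-d tree (SciPy) for the nearest-neighbour queries.

import numpy as np
from scipy.spatial import cKDTree


def directed_hausdorff_95(A, B):
    """95% quantile of the distances from each point of A to its nearest point in B."""
    dists, _ = cKDTree(B).query(A)
    return np.percentile(dists, 95)


# Completeness: h(reference, reconstruction); excess: h(reconstruction, reference).
ref = np.random.rand(1000, 3)
rec = np.random.rand(800, 3)
print(directed_hausdorff_95(ref, rec), directed_hausdorff_95(rec, ref))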
§ RESULTS
Figure <ref> presents a qualitative comparison of the reconstruction of
the Cingulum (CG) in an example subject. In comparison to the low-rank UKF, both the Watson
UKF and the Bingham UKF result in a more complete reconstruction of the
parahippocampal part a). Moreover, compared to the Watson UKF, the Bingham UKF
achieves a more complete reconstruction of fibers entering the anterior cingulate cortex b). Similar trends are observed in Figure <ref> for the
reconstruction of the corticospinal tract (CST). The
Bingham UKF successfully reconstructs a majority of the lateral fibers, while
both the low-rank UKF and the Watson UKF are missing some parts of the fanning.
We quantify these results by evaluating directed Hausdorff
distances. The upper part of Figure <ref> shows
distances from the reference to the reconstruction. In 6 out of 7 tracts, the Bingham UKF exhibits the lowest median, indicating the most complete reconstructions.
The lower part measures distances from the reconstruction to the reference, so that low values indicate low excess. In 6 out of 7 tracts, the Bingham UKF leads to a lower median than the low-rank UKF, indicating that specificity is improved in addition to the increased sensitivity.
To statistically assess the differences between the proposed methods, we
conducted a Friedman test <cit.> for each tract. An asterisk denotes
significant differences at
significance level of p < 0.007, due to Bonferroni correction. In 6 out of 7
tracts, we found significant differences in the completeness of reconstruction.
In 4 out of 7 tracts, significant differences were observed for the
excess.
Generation of 1000 CST streamlines took 92.5 seconds for the Bingham
UKF, 85.2 seconds for the Watson UKF, and 58.1 seconds for the low-rank UKF on a
single core of a 3.3 GHz CPU.
§ CONCLUSION
We developed a new algorithm for probabilistic tractography that incorporates anisotropic fanning into the recently described
low-rank UKF. We demonstrated that this
results in more complete reconstructions, while also reducing false positives, in almost all bundles. Our proposed technical solutions for initialization, convolution, and representation of rotations contribute to maintaining acceptable computational efficiency. Our code will be made available along with the publication.
§ ACKNOWLEDGMENT
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 422414649. Data were provided by the
Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded
by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems
Neuroscience at Washington University
|
http://arxiv.org/abs/2307.01923v1
|
20230704211108
|
An Algorithm for Persistent Homology Computation Using Homomorphic Encryption
|
[
"Dominic Gold",
"Koray Karabina",
"Francis C. Motta"
] |
cs.CR
|
[
"cs.CR",
"math.AT",
"68P25 (Primary), 55N31 (Secondary)",
"G.4.0; E.3.2"
] |
D. Gold et al.
Florida Atlantic University, Boca Raton, FL, USA
[email protected], [email protected]
National Research Council Canada, Ottawa, Ontario, CA
University of Waterloo, Waterloo, Ontario, CA
[email protected]
An Algorithm for Persistent Homology Computation Using Homomorphic Encryption
Dominic Gold1 Koray Karabina2, 3 Francis C. Motta1
August 1, 2023
Topological Data Analysis (TDA) offers a suite of computational tools that provide quantified shape features in high dimensional data that can be used by modern statistical and predictive machine learning (ML) models. In particular, persistent homology (PH) takes in data (e.g., point clouds, images, time series) and derives compact representations of latent topological structures, known as persistence diagrams (PDs). Because PDs enjoy inherent noise tolerance, are interpretable and provide a solid basis for data analysis, and can be made compatible with the expansive set of well-established ML model architectures, PH has been widely adopted for model development including on sensitive data, such as genomic, cancer, sensor network, and financial data. Thus, TDA should be incorporated into secure end-to-end data analysis pipelines. In this paper, we take the first step to address this challenge and develop a version of the fundamental algorithm to compute PH on encrypted data using homomorphic encryption (HE).
§ INTRODUCTION
Topological Data Analysis (TDA) has blossomed into a suite of computational tools, built on firm mathematical theory, that generate quantified, discriminating, shape-based features of data, which can provide interpretable representations of high dimensional data and be taken in by modern statistical and predictive ML models. To apply the flagship approach, known as persistent homology (PH), data—usually in the form of point clouds or scalar-functions defined on a mesh (e.g., images, time series)—are transformed into a binary matrix that encodes the evolution of a family of simplicial complexes. From this matrix a collection of persistence diagrams (PDs) can be derived through a simple reduction algorithm. PDs provide compact representations of the number and size of geometric/topological structures in the data as multisets of planar points, and can be equipped with natural metrics. Models can then be developed either directly on PDs using, for example, hierarchical clustering or k-medoids in the metric space of diagrams for classification tasks, or subsequent transformations can be applied to produce topological feature vectors <cit.> to be used with ML model architectures such as random forests, support vector machines, and neural networks. We refer to these steps as the TDA-ML pipeline, as illustrated in Figure <ref>.
Crucial to the use of PH in applications are the numerous stability results that establish—under a variety of assumptions about the data and the metrics placed on the data and the PDs—the (Lipschitz) continuity of the map sending data to PDs <cit.> and to feature vectors <cit.>. Due its inherent noise tolerance and suitability across domain and data types, PH has been widely adopted for model development including on sensitive data, such as genomic <cit.>, cancer <cit.>, sensor network <cit.>, and financial data <cit.>. The reader may refer to recent review articles for references to a variety of PH applications <cit.>.
As the scale of predictive models and their data demands grow, there is pressure to move to collaborative and cloud-based systems in which analysis is performed remotely and in a distributed fashion (e.g., federated learning <cit.>). This is especially true for propriety and sensitive models that require large training data. On the other hand, a user—be it an independent data producer or agent lacking the capabilities demanded by the models—may need to keep private both their data and the decisions informed by that data. Thus, there is a growing need in industry and government for efficient, secure end-to-end data analysis pipelines that protect vital information on which sensitive decisions are made; to protect privacy, ensure compliance with personal data management regulations, and prevent hostile interference or misuse.
Bridging topological data analysis and secure end-to-end algorithms will yield more efficient, privacy-preserving, and robust applications in which data analysis, data mining, statistical inference, and pattern recognition tasks are performed on private data collected from a large number of potentially competing parties. Example application domains include video surveillance for law enforcement, location and energy-use tracking for smart cities and autonomous vehicles <cit.>, financial data <cit.>, and biomedical data such as genomics <cit.> and cancer <cit.>, to name a few.
In order to address challenges with outsourcing sensitive data analysis, cryptographic researchers have been developing secure multiparty computing tools since the 1980s <cit.>. A good portion of the theoretical foundations of these primitives has been successfully adapted for practical applications in industry <cit.>. For example, recent innovations in homomorphic encryption (HE) have expanded the variety and complexity of the operations and algorithms that can compute on encrypted data (e.g., comparisons and conditionals <cit.>).
Secure multiparty computing tools are nowadays interacting with privacy-preserving machine learning (ML) applications <cit.>.
Indeed, there has been a recent surge in the development of secure ML algorithms using HE <cit.>. Thus, HE promises to expand to support complex algorithms and models that protect the privacy of both input data and model outputs. Similarly, sensitive data may be outsourced to a third-party database management system (DBMS), where the data owner may not fully trust the DBMS but still requests it to perform some relational operations on the data, such as sort, join, union, intersect, and difference. Specialized (symmetric-key) encryption schemes allow data owners to encrypt their data while preserving the ability of the DBMS to perform such operations over the encrypted data <cit.>.
In practice, a hybrid use of public-key and symmetric encryption schemes is complementary in creating secure and trustworthy data analytics services and applications, which take encrypted data and perform both training and inference on it. Many such models have been implemented this way, including logistic or ridge regression <cit.>, support vector machines <cit.>, random forests <cit.>, and even neural networks <cit.>. The dual benefits of an HE framework for ML model training and inference are that while the client protects their data, the server protects their models that take in this encrypted data. In the TDA-ML pipeline (Fig. <ref>), both feature generation and model training/evaluation on those features represent critical components of model development and deployment. Each step back in the pipeline that can be realized in an HE framework therefore relaxes the preprocessing demands on the client and strengthens the protection of the server's model. Securing the boundary-matrix-to-persistence-diagram step (green box in Fig. <ref>) is thus a critical step in allowing a server to fully protect any model that uses topological data features.
Our contributions: We develop a new algorithm (Algorithm <ref>) as a first-of-its-kind version of the boundary matrix reduction algorithm (Algorithm <ref>), which is at the heart of PH and TDA, and which is suitable for secure computation using HE. We achieve this by modifying the logical structure of Algorithm <ref> and by developing new arithmetic circuits to replace its computational and conditional statements. As a result, the new algorithm traces essentially the same steps as the original, but in an HE-friendly manner so that computations can be performed securely in the ciphertext space. We prove the correctness of our proposed algorithm and provide a complexity analysis. Our analysis is constructive and provides lower bounds on the implementation parameters that guarantee correctness.
We implement our algorithms using the CKKS scheme from the OpenFHE library <cit.> but our techniques can be adapted for other HE schemes by implementing a compatible comparison function using BGV/BFV or TFHE schemes at a comparable cost; see <cit.>. Finally, we highlight some limitations of our proposed algorithm and suggest some improvements together with some empirical evidence.
Outline: The rest of this paper is organized as follows. Section <ref> establishes the mathematical and computational preliminaries of PH and HE.
Section <ref> also outlines the main challenges associated with transforming the original reduction algorithm into its HE-compatible counterpart.
In Section <ref>, we establish an HE-compatible version of the boundary matrix reduction algorithm, presented in Algorithm <ref>, and establish conditions guaranteeing correctness.
Section <ref> provides a complexity analysis for Algorithm <ref> and notes on the implementation, including limitations of the proposed algorithm and potential improvements. Our plaintext implementation of Algorithm <ref> in Section <ref> simulates an implementation using HE, verifies the correctness of our theoretical results, and provides some positive evidence for improvements.
Our experiments showcase the propagation of errors due to relaxing algorithm parameters; see Figure <ref>.
We make concluding remarks in Section <ref> concerning potential future research thrusts in secure TDA. In some cases, we have deferred technical proofs to the Appendix.
§ PRELIMINARIES
Our approach to adapting the PH boundary matrix reduction algorithm into a secure framework is to encrypt the input to the reduction algorithm and to allow computations to be performed on ciphertexts in such a way that the decrypted output of the algorithm is equivalent to the output of the algorithm running on the plaintext input.
In Section <ref>, we provide some necessary background information on PH and present the main PH boundary matrix reduction algorithm in Algorithm <ref>. In Section <ref>, we present an overview of HE and explain some of the challenges that would occur when developing a cryptographic version of Algorithm <ref>
based on HE.
We denote vectors and matrices with boldface, as in v∈ℝ^n, R∈ℝ^n × n, and denote the i-th components of vectors with brackets, e.g., v[i], and columns of matrices with subscripts, R_i. We denote the infinity norm of v by |v| = ‖v‖_∞ = max_i| v[i] |. We then define the following metric between any two vectors x, y∈ℝ^n in the usual manner:
|x - y| = ‖x - y‖_∞ = max_i| x[i] - y[i] |,
where |·| in the final expression is the usual absolute value of a real number.
Furthermore, for v∈ [0, 1]^n, we write l_v = low(v) for the integer-valued maximum index holding a 1, to ease notation where appropriate.
§.§ Persistent Homology
PH, a mathematical device from algebraic topology, provides a means of comparing data through its latent shape-based structures. This is achieved by first associating to a dataset an ordered, nested family of combinatorial objects that are equipped with well-defined notions of shape. In particular, these shape features will be representations of k-dimensional holes in the data. Intuitively, a k-dimensional hole is a vacancy left by a (k+1)-dimensional object whose k-dimensional boundary remains. In this way, PH can be regarded as a feature extraction tool which pulls from data topological/geometric features which may provide scientific insights and can be used to train discriminating or predictive models. Although there are different forms of (persistent) homology theory, we restrict our attention to simplicial homology because of its intuitive appeal and practical computability.
An abstract simplicial complex, K, is a finite collection of finite subsets (called simplices) such that if σ∈ K, then τ∈ K for all τ⊂σ. A k-simplex, or a simplex of dimension k, is a set of size k+1, and the dimension of a complex, dim(K), is the maximum dimension of any of its simplices. A proper subset, τ⊊σ∈ K, is called a face of σ. If τ is a codimension-1 face of σ, i.e., τ⊂σ∈ K and |τ| = |σ|-1, we call τ a boundary face of σ. For simplicity, we will denote the k-simplex {x_0, x_1, …, x_k} by x_0x_1 … x_k.
One may regard 0-simplices (singleton sets) as points in some Euclidean space, 1-simplices (pairs) as edge segments between points, 2-simplices (sets of size 3) as filled-triangles, 3-simplices (sets of size 4) as filled tetrahedra and so on, with the requirement that simplices in the geometric realization intersect only along common faces. Figure <ref> illustrates such geometric realizations of abstract simplicial complexes. For example, K_5 is the geometric realization of the abstract simplicial complex {∅, a, b, c, ab, ac, bc}. The empty triangle formed by the edges ab, bc, and ac at index 5 in Figure <ref> provides an example of a 1-dimensional hole formed by the vacancy of the missing 2-simplex, abc, enclosed by its three boundary edges, ab, ac, and bc. The holes in a simplicial complex, K are collected into a group, denoted H_1(K), composed of equivalence classes of collections of 1-simplices that form cycles (e.g., ab, ac, and bc in K_5) that could be the boundary faces of some collection of 2-simplices, but aren't. Similarly, a collection of triangles in K that enclose a void become representatives of elements in H_2(K). More generally, for each dimension k, the k-dimensional homology group H_k(K) comprises equivalence classes of k-dimensional cycles that are not boundaries of a collection of (k+1)-dimensional simplices. H_0(K) encodes the connected components of K.
By ordering the simplices of a simplicial complex so that no simplex appears before any of its faces, one forms a nested sequence of simplicial complexes, which we'll call a filtration. Across this filtration one can track which simplices gave birth to homological features and which simplices kill off those homological features to determine (birth, death) pairs that track the persistence of each homological feature. For example, in Figure <ref>, H_1(K_4) is trivial since K_4 contains no holes. This is in contrast to the complexes K_5-K_9 that have a non-trivial H_1 element represented by the boundary edges ab, bc, and ac that was born with the introduction of bc at index 5. In K_8 there appears another hole with the introduction of the edge bd, which then disappears in K_9 when the triangle bcd fills the cycle formed by bc, bd, cd.
In practice one usually defines a complex, K, from a dataset and computes a filtration from a real-valued function f: K →ℝ that satisfies f(τ) ≤ f(σ) if τ⊆σ∈ K. f encodes the `scales' at which each simplex appears in the filtration, obtained by ordering simplices according to their scales and breaking ties arbitrarily while ensuring each simplex never appears before its faces. A multitude of methods have been proposed to derive such filtrations <cit.>, both from point cloud data (e.g., Vietoris-Rips filtration <cit.>, alpha filtration <cit.>) and related filtrations for functions on a cubical mesh <cit.>. However determined, the structures in the filtration can be encoded in a square, binary matrix Δ( K) called a boundary matrix, whose rows and columns are indexed by the simplices in K, ordered σ_1, …, σ_n so that i < j if f(σ_i) < f(σ_j) or if σ_i ⊂σ_j. The entries of the boundary matrix are
Δ_i,j = 1 if σ_i is a boundary face of σ_j, and Δ_i,j = 0 otherwise.
Thus, Δ encodes the order in which simplices appear in the filtration and the relationship between each simplex and its boundary simplices. We let the first row and column correspond to the empty simplex, ∅, so that the vertices have boundary equal to ∅. Thus, vertices are encoded by a column [1,0,…,0], while Δ_0 is then necessarily a zero column, which could be omitted. The scales, f(σ_i), at which each simplex is added to the complex may be regarded as a real-valued vector in ℝ^n and can be held separately from the combinatorial information encoded in the boundary matrix.
It is shown in <cit.> that calculation of the persistence pairs can be achieved through a straightforward algorithm (Algorithm <ref>) that brings a boundary matrix into a reduced form. The critical operation needed to transform a filtered simplicial complex K—given by the monotonic filtration function f: K →ℝ and encoded in a boundary matrix Δ—into its PDs is the function
low(v) = max{ i | v[i] = 1 },
which returns the largest index among those coordinates of the binary vector v that are equal to 1. Progressing from j=1 to n (i.e., in the order of the simplices given by the monotonic function f), each column Δ_j is replaced with the mod-2 sum Δ_i + Δ_j, whenever low(Δ_i) = low(Δ_j) and i<j, until the lowest 1 in column j is distinct from all lowest 1s in the preceding columns. The lowest 1s in the reduced boundary matrix then specify the indices of the pair of simplices at which each PH class of the corresponding dimension is born and dies. More precisely, let R = (Δ) be the reduction of the boundary matrix Δ after applying Algorithm <ref>. Then (f(σ_i), f(σ_j)) is a (finite persistence) point in the k-dimensional PD dgm_k( K) if and only if σ_i is a simplex of dimension k and i = low( R_j). In other words, a k-dimensional homology class was born with the introduction of the simplex σ_i = σ_low( R_j) and died when σ_j was added to the filtration.
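For concreteness, the following plaintext sketch (in Python, with an illustrative convention of -1 for the low of a zero column) mirrors the reduction just described; it is not the HE-compatible version developed later, and the function names are ours.

```python
import numpy as np

def low(col):
    """Largest row index holding a 1; -1 if the column is all zeros."""
    ones = np.flatnonzero(col)
    return int(ones[-1]) if ones.size else -1

def reduce_boundary(delta):
    """Standard left-to-right column reduction of a binary boundary matrix (mod 2)."""
    R = delta.copy() % 2
    n = R.shape[1]
    owner = {}                                   # low index -> column that currently owns it
    for j in range(n):
        while low(R[:, j]) != -1 and low(R[:, j]) in owner:
            R[:, j] = (R[:, j] + R[:, owner[low(R[:, j])]]) % 2
        if low(R[:, j]) != -1:
            owner[low(R[:, j])] = j
    return R

# Finite persistence pairs: (low(R_j), j) for every non-zero reduced column R_j.
```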
In Figure <ref> we illustrate the original boundary matrix, its reduced form after applying Algorithm <ref>, and H_0 and H_1 PDs associated to the given filtration. In the reduced matrix, columns b and c consist of all zeros, since their appearance created homology (H_0) classes[The first vertex a is a special case, and technically kills off the (-1)-dimensional (reduced) homology class at index -1.]. The connected components represented by vertices b and c are then killed by the introduction of ab and ac respectively, since these edges merge the connected component into the component represented by a, which was born earlier. This is encoded in the reduced boundary matrix by the low 1s at indices (b, ab) and (c, ac) respectively. The edge bc likewise gives birth to an H_1 class, that is later killed off by the introduction of the triangle abc. This is why, in (Δ), column bc consists of all zeros and the low 1 in abc is in row bc.
The low 1s in the reduced matrix encode the birth-death simplex pairs appearing in the PDs of the filtration. Here we take the scale of each simplex to be the index of the complex in which it first appears so that the low 1 at (b,ab) is sent to the point (1,3) in the H_0 diagram. Similarly, (bd, bcd) maps to (8,9) and (bc, abc) maps to (5,10) in the H_1 PD, dgm_1( K). If the scales of each simplex were determined instead by some geometric information in the data (e.g., using pairwise distances between points as is the case for the Vietoris-Rips filtration), the positions of the points in the PDs would capture these scales, rather than merely the indices.
§.§ Homomorphic Encryption
Let ℳ be a message (plaintext) space and
𝒞 be a ciphertext space. We assume that ℳ and 𝒞 are commutative rings with their respective identity elements, and addition and multiplication operations, denoted
(ℳ, 1_ℳ, +, ×) and (𝒞, 1_𝒞, ⊕, ⊗). When the underlying ring is clear from the context, we simply denote the identity element by 1, and by abuse of notation an integer scalar s∈ℤ is identified with the ring element ∑_i=1^s1.
For a given parameter set ,
an HE scheme consists of algorithms as described in the following:
* (): Takes as input, and outputs a public key and secret pair (, ), and an evaluation key .
* _(m): Takes
a plaintext message m∈ℳ and the public key as input, and outputs a ciphertext c∈𝒞.
* _(c): Takes
a ciphertext c∈𝒞 and the public key as input, and outputs a plaintext message m∈ℳ.
* _(c_1, c_2): Takes a pair of ciphertexts (c_1, c_2), c_i ∈𝒞 and the evaluation key as input, and outputs a ciphertext c_∈𝒞.
* _(c_1, c_2): Takes a pair of ciphertexts (c_1, c_2), c_i ∈𝒞 and the evaluation key as input, and outputs a ciphertext c_∈𝒞.
* _(f; c_1, ..., c_k): Takes
an arithmetic circuit f: ℳ^k →ℳ, ciphertexts c_i∈𝒞, and the evaluation key as input, and outputs a ciphertext c_∈𝒞.
Here, generally consists of a security parameter λ and a multiplicative depth parameter L. The security parameter λ indicates that the complexity of the best known attack against the HE scheme is 𝒪(2^λ). The depth parameter L guarantees that the HE scheme can evaluate circuits of multiplicative depth at most L.
We frequently refer to multiplicative depth and computational complexity of circuits in our analysis and they are defined as follows.
Let f be an arithmetic circuit. Multiplicative depth, or simply depth, of f is the maximum number of sequential multiplications required to compute f. Computational complexity, or simply complexity, of f is the number of multiplication and addition operations required to compute f.
For example, f(m_1, m_2, m_3, ..., m_n) = ∑_i=1^nm_i^2^i is a depth-n multiplicative circuit, where m_i^2^i can be computed after i successive multiplications (squarings). A naive way to compute f would require n(n+1)/2 multiplications and (n-1) additions and so we can say that f has computational complexity 𝒪(n^2).
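As an illustration of the definition, the following plaintext sketch (names are ours) evaluates this example circuit by repeated squaring and tallies its multiplications and depth.

```python
def eval_example_circuit(ms):
    """Evaluate f(m_1,...,m_n) = sum_i m_i^(2^i), counting multiplications and depth."""
    total, mults, depth = 0, 0, 0
    for i, m in enumerate(ms, start=1):
        x = m
        for _ in range(i):          # i successive squarings give the term m_i^(2^i)
            x *= x
            mults += 1
        total += x
        depth = max(depth, i)       # the longest chain of sequential multiplications
    return total, mults, depth

# eval_example_circuit([2, 3, 5]) -> (2**2 + 3**4 + 5**8, 6 multiplications, depth 3)
```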
A basic correctness requirement[The correctness and homomorphic features of HE may be violated with negligible probability.] for an HE scheme is that the decryption operation is the inverse of the encryption operation, that is
_(_(m)) = m
for all m∈ℳ.
The homomorphic feature of an HE scheme requires
_(_(f; c_1, ..., c_k)) = f(m_1, ..., m_k)
for all c_i∈𝒞 such that c_i= _(m_i).
In other words, HE allows one to evaluate polynomials
on encrypted data such that the decryption of the result is exactly the same as the value of that polynomial evaluated on plaintext messages. We should note that we presented here a limited overview of HE schemes so that our paper is self-contained. HE schemes are much more involved (e.g., consisting of other algorithms such as scaling, relinearization, bootstrapping, etc.) and their implementations require a great deal of detail (e.g., encoding and decoding algorithms so that the plaintext messages can be mapped into the message space of the scheme, batching operations, etc.). Moreover, most of these details depend on the choice of the HE scheme. For a survey of HE schemes and existing libraries, we refer the reader to <cit.>.
Some of the challenges of using HE in practice are:
* Increasing the depth of the arithmetic circuit significantly increases the complexity of the circuit's encrypted evaluation. Practical HE schemes can handle arithmetic circuits with relatively low depth. For example, <cit.> reports and compares some results for homomorphic evaluation of circuits up to depth 30. Bootstrapping is a viable option to reset the level of a ciphertext right before maximum tolerance is reached.
* Algorithms in general require evaluation of functions that are not necessarily polynomials, and approximation of such functions through low-depth circuits is a challenge. Similarly, algorithms involve conditional statements, and evaluating these statements while running an algorithm on ciphertext variables requires different ways of handling conditionals. As an example, given m_1, m_2∈ℤ_p for some prime p, the conditional statement that returns m_1+m_2 if m_1=m_2, and m_1 if m_1≠ m_2, can be implemented over ciphertexts as
(_(f; c_1,c_2)⊗ c_1)
⊕ ((1-_(f; c_1,c_2))⊗(c_1⊕ c_2)),
where c_i = _(m_i), and f(m_1, m_2) = (m_1-m_2)^p-1 can be implemented as an arithmetic circuit of depth 𝒪(log_2p) using a square-and-multiply type exponentiation algorithm.
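For illustration, the conditional above can be emulated in plaintext arithmetic; the sketch below uses a small prime for readability, and the function names are ours.

```python
p = 97  # any prime; small here only for readability

def neq_flag(m1, m2):
    """(m1 - m2)^(p-1) mod p: 0 if m1 == m2, 1 otherwise, by Fermat's little theorem."""
    return pow(m1 - m2, p - 1, p)

def branchless_conditional(m1, m2):
    """Return m1 if m1 != m2, and m1 + m2 (mod p) if m1 == m2, with no if-statement."""
    f = neq_flag(m1, m2)
    return (f * m1 + (1 - f) * (m1 + m2)) % p

assert branchless_conditional(5, 5) == 10
assert branchless_conditional(5, 7) == 5
```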
Our objective is to adapt
Algorithm <ref>
so that secure boundary matrix reduction operation can be performed based on encrypted boundary matrices
using HE. In the light of our discussion above, there are three main challenges to address:
* Develop an arithmetic circuit for encrypted low computations so that given
a pair of ciphertexts c_1 and c_2
(representing the encryption of column vectors v_1 and v_2), low(v_1) = low(v_2) can be verified; see line 3 in Algorithm <ref>.
* Develop an arithmetic circuit so that the conditional modular addition operation (line 4 in Algorithm <ref>) can be performed in the ciphertext space.
* Modify the logical structure of Algorithm <ref> so that all of the modular vector additions in lines 2-4 in Algorithm <ref> are correctly executed in the ciphertext space, until low(R_j_0) ≠ low(R_j) for all j_0<j, and for all j = 0,...,(n-1).
§ HE-COMPATIBLE MATRIX REDUCTION
§.§ : HE-compatible computation of low
The first obstacle to realizing an HE-compatible algorithm is computing the largest index of any 1 in an n-dimensional binary vector v∈{0, 1}^n, called low(v) (see Section <ref>). For reasons that will become clear, it will be necessary for us to extend the usual definition of low—as defined in Section <ref>—to the n-dimensional 0-vector; we assign low(0) = n-1. By construction, a non-zero column in a boundary matrix of a valid filtration can never have a low of n-1 before or during reduction by Algorithm <ref>. [If it did, that would imply the simplex that appeared latest is the boundary of a simplex that appeared earlier, which violates the condition that each step in the filtration gives a valid complex.]
In <cit.>, the authors introduce a method of locating the index of the maximum value of a vector (maxidx) of distinct numbers
using HE. We adapt this method to obtain an approximation of the low value of a binary vector. First, in Lemma <ref>, we establish the correctness of our reimagining of the exact low function obtained by monotonically scaling vector coordinates with respect to their index while ensuring all coordinates remain distinct and guaranteeing the low corresponds to the new largest coordinate.
For v∈ℝ^n, let S(v) := [v[i] + i/n]_i=0^n-1
Let 𝒟^n = {v∈ℝ^n | v[i] ≠v[j], 0 ≤ i ≠ j < n} be the collection of n-dimensional vectors with distinct coordinates. For a vector v∈𝒟^n, define
maxidx: 𝒟^n →ℤ by
maxidx(v) = k
if v[k] > v[j] for all j different from k.
For any binary vector v∈{0, 1}^n,
low(v) = maxidx(S(v))
See Appendix <ref>.
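A small plaintext check of Lemma <ref>, using Transformation <ref>; the helper names below are ours.

```python
import numpy as np

def S(v):
    """Transformation S: add i/n to coordinate i so that all entries become distinct."""
    n = len(v)
    return np.array([v[i] + i / n for i in range(n)], dtype=float)

def low(v):
    """low of a binary vector, with the convention low(0) = n - 1."""
    ones = np.flatnonzero(v)
    return int(ones[-1]) if ones.size else len(v) - 1

v = np.array([1, 0, 1, 1, 0, 0])
assert low(v) == int(np.argmax(S(v)))   # maxidx(S(v)) recovers low(v)
```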
How does our argument about maxidx approximating low hold in our “approximate arithmetic” setting? The following generalization of Lemma <ref> states that as long as our approximate binary vector v'∈ℝ^n isn't too far from an underlying, true binary vector v∈{ 0, 1 }^n, then we may continue to extract low(v) using maxidx(v').
Let v∈{0, 1}^n and v'∈ℝ^n be given such that | v' - v| < 1/2n. Then
low(v) = maxidx(S(v')).
See Appendix <ref>.
The proximity between v and v' cannot be relaxed for the above choice of Transformation <ref>, since it is possible to construct vectors, v and v' such that |v - v'| = 1/2n+ c, with 0 = low(v) ≠ maxidx(S(v')) = n-1 for any c > 0.
Using this construction, it is then natural to apply the function presented in <cit.> (Algorithm <ref>), to develop the function (Algorithm <ref>). This function will estimate low with arbitrary accuracy, for real vectors that well-approximate binary vectors.
takes a vector v∈ [1/2, 3/2)^n and returns a vector b with b[k] ≈ 1 if maxidx(v) = k and b[j] ≈ 0 for j ≠ k. The component-wise accuracy in approximating the coordinates of the true maximum value indicator vector (b with b[maxidx(v)] = 1 and 0 elsewhere) is controlled by a tuple of parameters 𝒫_ = (d, d', m, t). In <cit.>, the authors show that the error in each coordinate is bounded by 2^-α, for α > 0 which can be made arbitrarily large with sufficiently large choices of d, d', m, and t.
To attain the actual index containing the maximum value of v, as opposed to the maximum index indicator vector, b, we compute the dot product between b and [0, 1, ..., n-1]. This is the approach we adopt in the function given in Algorithm <ref>. Since the algorithm requires the input vector to be in the interval [1/2, 3/2)^n, and our inputs S(v') will be in the interval [0, 2)^n, we apply a linear transformation that preserves the maxidx of its input.
T_(v) := [ (v[i] + 1)/2 ]_i=0^n-1
The error in propagates through the algorithm in the following manner:
Let α > 0 and fix parameters d, d', m, t for the algorithm so that
|(x; d,d',m,t) - e_maxidx(x)| < 2^-α,
for all x∈ [1/2, 3/2)^n.
Further assume v'∈ [0, 1]^n and v∈{ 0, 1 }^n are such that |v' - v| < 1/2n. Then
| (v'; d, d', m, t) - low(v) | < (3/2) n (n-1) 2^-α.
The result follows from Lemmas <ref> and <ref> in Appendix <ref> and the triangle inequality.
In the next section we establish choices of parameters ensuring a specified level of accuracy of the approximating function.
As a final remark, we note that the dependence of 's error on n^2 is a consequence of extracting the low of a vector using a dot product between the vector of indices, [0,…,n-1], and the max-index-indicator vector. This may be unavoidable when using the current implementation of the function, although it is conceivable that a fundamentally different approach to computing may yield a better error growth with the size of the boundary matrix.
§.§ Parameters for
Having established an approximation of the low function that is amenable to an HE framework, we next establish the prerequisite results needed to inform the choices of 's parameters that guarantee correctness. We prove two results that ease the proof of the theorem at the end of this section. The first establishes a lower bound, over all binary vector inputs to , on the ratio of the largest to the second-largest coordinate, as this value directly affects the choice of parameters for the and, subsequently, the functions.
Let us borrow Theorem 5 from <cit.>, which gives the parameter choices (d, d', m, t) to achieve any desired non-zero error
|(v; d, d', m, t) - e_maxidx(v)| < 2^-α.
Let v∈ [1/2, 3/2)^n be a vector with n distinct entries. Define c to be the ratio of the maximum value over the second maximum value such that c ∈ (1, 3). If
t ≥1/log(m)[log(α + log(n) + 1) - loglog (c)]
min(d, d') ≥log(α + t + 2) + (m-1)log(n) - 1
then the error (component-wise) of the (v; d, d', m, t) algorithm compared to e_maxidx(v) is bounded by 2^-α.
Of great importance to us is a lower bound on c, the ratio of the largest to the second-largest coordinate value in the input to . As c approaches 1, the parameters d, d', and t grow without limit. For this reason, we aim to obtain a larger lower bound on c across all possible (approximately binary) input vectors. We rewrite the bound |v - v'| < 1/2n as |v - v'| ≤ε/2n, where ε∈ [0, 1), to fine-tune the bound on c.
We compute that a lower bound on c is given by c ≥ 1 + (2-2ε)/(6n-4+ε) in Lemma <ref> in Appendix <ref>. Importantly, if ε = 1 (and so assume v' is approximately binary only within the bound 1/2n needed for Lemma <ref> to compute low via maxidx) then the ratio of the first to the second largest coordinates of the transformed v' can be arbitrarily close to 1. As a consequence, there will no longer exist a choice of finite parameters in the algorithm that guarantees correctness over all possible approximately-binary vectors v'.
On the other hand, as ε gets closer to 0, the lower bound on c increases away from 1, which will allow to be computed more efficiently. Thus there will be a trade-off between the computational cost of maintaining v' sufficiently close to binary throughout the boundary matrix reduction, and estimating low efficiently.
The variable α specifies the desired level of accuracy of (to 2^-α), and informs the minimum parameters needed to attain said accuracy. Lemma <ref> recasts the accuracy parameter of to an arbitrary δ > 0. With this, we can specify the choice of parameters needed to approximate low(v) using (v'; d, d', m, t) to arbitrary accuracy.
Assume v∈{ 0, 1 }^n and v'∈ [0, 1]^n are such that | v - v'| ≤ε/2n, for some 0 ≤ε < 1. Choose the parameters d, d', m, and t for the function, along with a pre-determined δ>0, such that
α > log(3) + 2log(n) - log(δ)-1
t ≥ [log(α + 1 + log(n)) - log log(1 + (2-2ε)/(6n-4+ε))] / log(m)
min (d, d') ≥log (α + t + 2) + (m-1)log (n) - 1
Then (v'; d, d', m, t) has δ-error. That is,
|(v'; d, d', m, t) - low(v) | < δ.
See Appendix <ref>.
As these parameters are now well-established for the function, we refer to this tuple of parameters (d_, d'_, m_, t_) as 𝒫_ to avoid confusion with the upcoming function, which will have a similar parameter naming convention. Furthermore, when 𝒫_ is clear from context, we write
_v := (v; 𝒫_)
for ease of notation in the upcoming sections.
§.§ : HE-compatible Equality Check
Theorem <ref> approximates low(x) and low(y) via
(x^'; 𝒫_) and (y^'; 𝒫_). One of the remaining challenges is to characterize the
equality check low(x) = low(y) using
(x^'; 𝒫_) and (y^'; 𝒫_). The second challenge is to rewrite (<ref>) for z^' so that it can be computed by avoiding the if statement and the mod 2 addition.
Suppose that x^' and y^'
are two real valued vectors that are approximations of the binary vectors x and y, respectively.
We must now determine a method that takes x^' and y^' as input, and outputs z^' such that z^' approximates the binary vector
z =
x + y mod 2, if low(x) = low(y)
x, if low(x) ≠ low(y)
In Section <ref>, we show that
z in (<ref>) can be approximated by
z^' = Ω(x^'-y^')^2 + (1-Ω)x^',
where the predicate Ω takes (x^'; 𝒫_) and (y^'; 𝒫_)
as input, and approximates the boolean value low(x)==low(y). In the remainder of this section, we build up the theory needed to compare two approximate low values, determine whether the underlying low values are equal, and, if so, perform the mod 2 addition in a way that is not conditional and thus leaks no information.
Consider two approximate binary vectors x', y'∈ [0, 1]^n with underlying binary vectors x, y∈{0, 1}^n. We wish to gate the column vector addition by verifying whether low(x) = low(y) or low(x) ≠ low(y), using only their respective estimates. As low is an integer-valued function, we do not expect the estimates of low produced by to need to be particularly accurate in order to distinguish between the two cases. We make this requirement precise in the following theorem.
Let x, y∈{0, 1}^n and x', y' ∈ [0, 1]^n and assume that 𝒫_ is chosen such that |(x'; 𝒫_) - low(x) | < δ and |(y'; 𝒫_) - low(y) | < δ for some 0 < δ < 1/4. Let ϕ be any value in the interval (2δ, 1-2δ). Then
|(x'; 𝒫_) - (y'; 𝒫_)|≤ϕ iff low(x) = low(y)
Suppose that |(x'; 𝒫_) - (y'; 𝒫_)| > ϕ. Then
ϕ < | (x'; 𝒫_) - (y'; 𝒫_) |
≤ | (x'; 𝒫_) - low(x) | + | (y'; 𝒫_) - low(y) | + | low(x) - low(y) |
< 2δ + | low(x) - low(y) |.
This implies that |low(x) - low(y)| > ϕ - 2δ > 0, as ϕ > 2δ by assumption. Both low(x) and low(y) are integers, so it must be the case that low(x) ≠ low(y).
Conversely, suppose that
|(x'; 𝒫_) - (y'; 𝒫_)|≤ϕ.
Then
| low(x) - low(y) | ≤ | (x'; 𝒫_) - low(x) |
+ | (y'; 𝒫_) - low(y) |
+ | (x'; 𝒫_) - (y'; 𝒫_) |
< δ + δ + ϕ
And so we have that |low(x) - low(y)| < 2δ + ϕ < 1, as ϕ < 1 - 2δ. Again, as low is an integer-valued function, it must be the case that low(x) = low(y).
Tracing the proof of Lemma <ref> also reveals that the intervals on which |(x'; 𝒫_) - (y'; 𝒫_)| and ϕ live are disjoint, and so it will never be the case that
|(x'; 𝒫_) - (y'; 𝒫_)| = ϕ,
despite the statement of the lemma.
The implication of Lemma <ref> is that one does not need to be very accurate in the calculation of (x'; 𝒫_); in fact, one only needs to approximate low(x) (using (x'; 𝒫_)) to an accuracy of 1/4. If that condition is guaranteed, then one may compare the value |(x'; 𝒫_) - (y'; 𝒫_)| to any ϕ with 2δ < ϕ < 1 - 2δ to check whether the underlying low values are equal or not.
With this lemma, our strategy to compare low values of two approximately binary vectors will be to exploit an approximation of the function that compares the relative size of its two inputs. First, we introduce the following function:
For x, y∈{ 0, 1 }^n, let l_x = low(x) and l_y = low(y). Define
lowcomp(l_x, l_y) = 0, if l_x ≠ l_y
1, if l_x = l_y.
The function lowcomp will be used to gate the mod 2 addition of two columns in place of the conditional equality check in Algorithm <ref>. In particular, for a given x and y∈ [0, 1]^n, the statement “update x to x + y mod 2, if their lows are equal” may be reinterpreted as
x = x + lowcomp(l_x, l_y) y mod 2.
We now establish a algorithm to estimate the lowcomp function for approximately binary vectors. Our formulation is based on the algorithm, which estimates the comp function given in Definition <ref> (both introduced in <cit.>) that compares the relative size of its inputs.
For any non-zero real numbers a,b, define
comp(a, b) = lim_k →∞ a^k/(a^k + b^k) =
1, if a > b
1/2, if a = b
0, if a < b
The algorithm (Algorithm <ref>) approximates the comp function by evaluating the expression a^m^t/(a^m^t + b^m^t), for t a positive integer and m often chosen to be a power of 2.
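In plaintext, the limiting behaviour of this expression is easy to verify; the sketch below iterates the m-th power t times with renormalisation and deliberately ignores the polynomial-approximation machinery used by the actual encrypted circuit (the function name is ours).

```python
def soft_comp(a, b, m=2, t=5):
    """Approximate comp(a, b) by a^(m^t) / (a^(m^t) + b^(m^t)) for a, b in [1/2, 3/2)."""
    x, y = float(a), float(b)
    for _ in range(t):
        x, y = x ** m, y ** m
        s = x + y                 # renormalise each round; the ratio x/(x+y) is unchanged
        x, y = x / s, y / s
    return x                      # ~1 if a > b, exactly 1/2 if a == b, ~0 if a < b

# soft_comp(1.2, 1.0) ~ 1.0, soft_comp(1.0, 1.2) ~ 0.0, soft_comp(1.1, 1.1) == 0.5
```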
, along with Lemma <ref>, are the building blocks we need to build . Using Lemma <ref>, we make the observation that
lowcomp(l_x, l_y) = 1 ⇔ low(x) = low(y)
⇔ ϕ ≥ |(x'; 𝒫_) - (y'; 𝒫_)|
⇔ ϕ^2 ≥ ((x'; 𝒫_) - (y'; 𝒫_))^2
and so we compare ((x'; 𝒫_) - (y'; 𝒫_))^2 to ϕ^2 to determine whether the underlying low values are equal or not. This construction removes the need to implement an HE circuit for the absolute value, at the cost of two squarings.
We make two important notes before we explicitly define . The first is that, by construction, |(x'; 𝒫_) - (y'; 𝒫_)| and ϕ exist in disjoint intervals (refer to Lemma <ref>'s remark), and so ϕ and |(x'; 𝒫_) - (y'; 𝒫_)| will never be equal. Thus may be treated as an approximate binary indicator function for our application. The second is that the input ((x'; 𝒫_) - (y'; 𝒫_))^2 lies in the interval [0, (n-1)^2]. As the function requires its inputs to be in the interval [1/2, 3/2), we apply a linear transformation to bring values into the correct interval.
T_(x) := 1/2 + x/n^2
Since T_ is a monotonic function, the relative order of the inputs are preserved. We now explicitly define by performing on T_(ϕ^2) and T_((_x' - _y')^2) as described in Algorithm <ref>.
inherits from that its outputs live in (0,1) and that it can approximate lowcomp arbitrarily well given appropriately chosen parameters. We formalize this in the following theorem.
Let x, y∈{0, 1}^n and x', y' ∈ [0, 1]^n and assume that 𝒫_ is chosen such that |(x'; 𝒫_) - low(x) | < δ and |(y'; 𝒫_) - low(y) | < δ for some 0 < δ < 1/4. Let ϕ be any value in the interval (2δ, 1-2δ). Define as in Algorithm <ref>.
If the parameters in the function are chosen such that
|(a, b; d, d', m, t) - comp(a, b)| < η,
then we also have
|(_x', _y', ϕ; d, d', m, t) - lowcomp(l_x, l_y)| < η.
See Appendix <ref>.
§.§ Parameters for
We shall proceed with the analysis of 's parameters in a similar fashion to 's parameters in Section <ref>. Theorem 4 from <cit.> gives lower bounds for the parameters d, d', m, and t to achieve 2^-α error in the function.
Let x, y ∈ [1/2, 3/2) satisfy
c ≤ max(x, y)/min(x, y)
for a fixed c ∈ (1, 3). If
t ≥1/log (m)[log(α + 1) - loglog (c)]
d ≥log(α + t + 2) + m - 2
d' ≥log(α + 2) - 1
then |(x, y; d, d', m, t) - comp(x, y)| < 2^-α.
The role of c in is similar to its role in Section <ref>: the closer c = max(a, b)/min(a, b) is to 1, the larger the values required of all subsequent parameter choices, thus increasing the “effort” needed for the function to distinguish which of the two inputs is larger. For this reason, our goal is to bound c = max(a, b)/min(a, b) as far from 1 as possible.
Once ϕ is fixed, the only guarantee is that T_((_' - _')^2) is either strictly greater than or strictly less than T_(ϕ^2) (see the remark following Lemma <ref>). Since we are only concerned with whether low(x) and low(y) are equal or not, the ratio c may be reinterpreted as
c = max{ T_(ϕ^2), T_((_' - _')^2) }/min{ T_(ϕ^2), T_((_' - _')^2) },
which is bounded below by T_(ϕ^2)/T_((_' - _')^2) if low(x) = low(y), and by T_((_' - _')^2)/T_(ϕ^2) if low(x) ≠ low(y).
It follows that
c > min{T_(ϕ^2)/T_((2δ)^2) , T_((1-2δ)^2)/T_(ϕ^2)} > 1
where the minimum changes depending on which case we are in. Thus, once a δ∈ (0, 1/4) is chosen, this expression is variable with respect to the value of ϕ and thus T_(ϕ^2). The optimal choice of ϕ will ensure the minimum of these two ratios are as far away from 1 as possible. So, we aim to optimize the right side of this expression with respect to ϕ: that is, to determine what value of T_(ϕ^2) solves
max( min{T_(ϕ^2)/T_((2δ)^2), T_((1-2δ)^2)/T_(ϕ^2)}),
where the max is taken over T_(ϕ^2) in the interval (T_((2δ)^2), T_((1-2δ)^2)).
This solution to Eq. (<ref>) comes from a general fact about positive real numbers, which we prove in Proposition <ref>, and which establishes the following corollary:
The value of T_(ϕ^2) which solves Eq. (<ref>)
is
T_(ϕ^2) = √((1/2 + (2δ/n)^2 ) (1/2 + (1-2δ/n)^2 )).
Thus, c > √(n^2 + 2(1-2δ)^2/n^2 + 2(2δ)^2).
See Appendix <ref>.
Having determined the bottleneck value c, we explicitly construct a choice of parameters for to achieve any desired level of accuracy (which has been re-contextualized from the 2^-α error in to an arbitrary η error in , see Lemma <ref>).
Let x, y∈{0, 1}^n and x', y' ∈ [0, 1]^n and assume that 𝒫_ is chosen such that |(x'; 𝒫_) - low(x) | < δ and |(y'; 𝒫_) - low(y) | < δ for some 0 < δ < 1/4. Define as in Algorithm <ref>, where we explicitly pick
ϕ = n√(√((1/2 + (2δ/n)^2 ) (1/2 + (1-2δ/n)^2 )) - 1/2).
If the parameters in the function are chosen such that
α > -log(η)
t ≥1/log (m)[ log(α + 2) - loglog( √(n^2 + 2(1-2δ)^2/n^2 + 2(2δ)^2)) ]
d ≥log(α + t + 2) + m - 2
d' ≥log(α + 2) - 1
then has η-error. That is,
|(_x', _y', ϕ; d, d', m, t) - lowcomp(l_, l_)| < η
See Appendix <ref>.
can now be thought of as a function of only two inputs (_' and _'), as we will always choose this optimal value of ϕ.
The theorem also implies a trade-off between δ and η. Indeed, estimating low using to a high degree requires less “effort” for to distinguish the (in)equality of two values. Similarly, less accurate low estimates will require to do more of the heavy-lifting. This intuition is confirmed by the dependence on δ of the lower bound on c. As δ approaches 0, the bound on c increases further away from 1, causing our choice of parameters for to get smaller. On the flip side, as δ approaches its upper limit of 1/4, then c may get arbitrarily close to 1, causing 's parameters to get arbitrarily large. We refer to these parameters as = (d_, d'_, m_, t_).
§.§ Conditional modular addition of vectors
For a given x and y∈ [0, 1]^n, the statement “update x to x + y mod 2, if their low values are equal” from Equation <ref> may be reinterpreted as
x = x + lowcomp(l_x, l_y) y mod 2.
Furthermore, addition modulo 2 can be recast as a polynomial operation using the observation that for any two a, b ∈{ 0, 1}, we have (a-b)^2 = a + b mod 2.
Thus, we may rewrite (<ref>) as
x = lowcomp(l_x, l_y)(x - y)^2 + (1 - lowcomp(l_x, l_y))x,
taking all operations component-wise, to remove mod 2 addition.
We may then approximate this operation using to estimate low and to estimate lowcomp. That is, the operation we will be performing on approximate binary vectors is
x' = (_x', _y')(x' - y')^2 + (1 - (_x', _y'))x',
as alluded to in Eq. (<ref>) in Section <ref>.
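In plaintext, and with an exact 0/1 gate standing in for the output of the comparison circuit, the update reads as follows (a sketch; variable names are ours).

```python
import numpy as np

def gated_update(x, y, omega):
    """x <- omega * (x - y)^2 + (1 - omega) * x, componentwise.
    With binary x, y and omega in {0, 1}, this is 'replace x by x + y mod 2 when omega == 1'."""
    return omega * (x - y) ** 2 + (1.0 - omega) * x

x = np.array([1.0, 0.0, 1.0, 1.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
assert np.allclose(gated_update(x, y, 1.0), (x + y) % 2)   # gate open: mod-2 addition
assert np.allclose(gated_update(x, y, 0.0), x)             # gate closed: column unchanged
```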
§.§ Modifying the Logical Structure of
The main operation in (Algorithm <ref>) is gated by a conditional while loop. As mentioned before, conditional statements cannot be explicitly implemented (or traced) over ciphertexts. Therefore, we need to rewrite lines 2-6
in Algorithm <ref> so that they are HE-compatible.
This is done by replacing the while loop with a double nested for loop, each of which run through all preceding column indices. Assume low(Δ_i) ≠ low(Δ_k) for all 0 ≤ i ≠ k < j, as is the case when the algorithm, applied to a boundary matrix Δ, first encounters column j. If we loop through the preceding j columns once, comparing each Δ_k, k=0…, j-1 to Δ_j, either low(Δ_k) = low(Δ_j) for some k<j or not. In the latter case, we know Δ_j is already in reduced form and will not change—no matter how many times one loops again through the preceding j-1 columns—since the (binary) addition only happens when the low's of two columns match. On the other hand, if some low(Δ_k) = low(Δ_j) for some k<j, Δ_j ←Δ_j + Δ_k 2 will change Δ_j, and in particular, this addition necessarily causes the low(Δ_j) to decrease. Thus, after such an update this new Δ_j will never again be equal to Δ_k. In other words each column vector will only update column j at most once. Without any assumptions about the order in which preceding columns update Δ_j, we simply loop over the preceding columns enough times to guarantee every vector which should have updated column j has done so. This requires exactly j loops over all preceding columns since each preceding column can only update Δ_j at most once. For the base case, note that column j=0 is trivially in reduced form and Δ_1 will certainly be in reduced form after a single comparison with Δ_0. This aligns with the worst case complexity for the original algorithm: O(j^2) for column j, O(n^3) overall <cit.>.
In Section <ref>, we modified the existing from <cit.> to attain the algorithm to estimate low. We have already discussed how to check the equality of two low values using and in Section <ref>. Finally, the mod 2 addition over rational numbers was constructed in Section <ref>. With all of this combined, we may now rewrite the main block of the algorithm, as written in lines 6-9 of Algorithm <ref>, in a way that is compatible with our framework of approximate algorithms acting on approximate vectors.
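The restructured control flow can be sketched in plaintext as follows; here the arithmetic is still exact and the gate omega is an exact indicator, so only the fixed, data-independent schedule of the HE version is being illustrated (function names are ours).

```python
import numpy as np

def low(col):
    """low with the convention low(0) = n - 1."""
    ones = np.flatnonzero(col)
    return int(ones[-1]) if ones.size else len(col) - 1

def reduce_fixed_schedule(delta):
    """Mod-2 reduction with a data-independent schedule: column j is swept against
    its predecessors j times, and each addition is gated rather than branched on."""
    R = delta.astype(float) % 2
    n = R.shape[1]
    for j in range(1, n):
        for _ in range(j):                      # j sweeps suffice: each column adds into j at most once
            for j0 in range(j):
                omega = 1.0 if low(R[:, j0]) == low(R[:, j]) else 0.0
                R[:, j] = omega * (R[:, j] - R[:, j0]) ** 2 + (1 - omega) * R[:, j]
    return R
```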
§.§ An HE-Compatible
As the challenges as listed
in Section <ref>
have now been addressed in Sections <ref>-<ref>, we now present Algorithm <ref>, which
is an HE-compatible version
of and which can take an encrypted boundary matrix as input and reduce it using HE-operations in the ciphertext space.
We note that the moment we perform the very first column addition, vectors move from {0, 1}^n to [0, 1]^n, requiring all algorithms to be compatible with approximately binary vectors. For this reason, we must have a guarantee of correctness, which is a function of the controllable errors in our approximation variables: v' (Theorem <ref>), (Lemma <ref>), and (Section <ref>).
As long as |v' - v| < 1/2n, we know that (v'; 𝒫_) will approximate low(v) as accurately as wanted. And as long as (v'; 𝒫_) estimates low(v) to within 1/4, is able to distinguish between low(x) and low(y) using (x'; 𝒫_) and (y'; 𝒫_). directly defines an approximately binary indicator
Ω(_j_0, _j; 𝒫_)
which will be used to perform the “mod 2” addition, which will naturally have accumulating non-zero errors (determined by η). The finiteness of the algorithm guarantees the existence of an η such that the accumulation of errors never exceeds the maximum threshold of 1/2n. In a strict sense, only fails to produce the correct reduced boundary matrix if the maximum error in some component is 1/2 or larger. If |(Δ) - R'| < 1/2, then (R') = (Δ), where casts entries to the nearest integer. This condition is guaranteed by the stricter requirement that errors are within 1/2n.
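The final casting step can be stated as a simple plaintext check (a sketch; it assumes the exact reduction is available for comparison, which is of course only possible in testing).

```python
import numpy as np

def cast_to_binary(R_approx):
    """Round every entry of the approximately binary output to the nearest integer."""
    return np.rint(R_approx).astype(int)

def output_is_correct(R_exact, R_approx):
    """Correctness in the strict sense: every entry within 1/2 of the exact reduction,
    so that rounding recovers it. The 1/(2n) bound used in the analysis is stricter,
    since it must hold throughout the computation and not only at the end."""
    return (np.max(np.abs(R_exact - R_approx)) < 0.5
            and np.array_equal(cast_to_binary(R_approx), R_exact))
```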
§ COMPLEXITY AND IMPLEMENTATION ANALYSIS
As in all HE-compatible functions, there is particular interest in 's complexity and depth to understand the noise growth that a ciphertext will accumulate as it passes through the algorithm. We will prove a more general statement that establishes the depth of our algorithm on an n × n boundary matrix. We note that while we establish the textbook version of the algorithm as , an immediate improvement to the algorithm to make it even more HE-compatible is easily seen. We implement this version, , and analyze its depth and complexity.
§.§ Analysis of
In our implementation, we use Algorithm <ref>, which is a slightly modified version of Algorithm <ref>. Here, the computation in line 10 in
Algorithm <ref> is now pushed out of the for loop (see line 14 in Algorithm <ref>) and the repetitive update operations
Δ'_j←Ω (Δ'_j - Δ'_j_0)^2 + (1 - Ω) Δ'_j, j_0=0,...,j-1
in line 9 in Algorithm <ref> are now replaced by a single cumulative update operation (line 13 in Algorithm <ref>), which can be explicitly rewritten as
Δ'_j←∑_j_0=0^j-1Ω_j_0,j((Δ'_j - Δ'_j_0)^2) + (1 - ∑_j_0=0^j-1Ω_j_0,j) Δ'_j,
where Ω_j_0,j = (_j_0, _j; 𝒫_).
The correctness follows from the fact that
Ω_j_0,j is approximately zero for all j_0=0,...,j-1 except for at most one value j_0=k (where it is approximately one), whence we have either
Δ'_j stays approximately the same
or is updated to
Δ'_j≈ (Δ'_j - Δ'_k)^2,
as required.
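A plaintext sketch of this cumulative update for a single sweep of column j (names are ours; the omegas stand for the outputs of the comparison circuit):

```python
import numpy as np

def cumulative_update(col_j, preceding_cols, omegas):
    """Single cumulative update: since at most one omega is (approximately) one,
    the column is either left (approximately) unchanged or replaced by one squared difference."""
    acc = sum(w * (col_j - c) ** 2 for w, c in zip(omegas, preceding_cols))
    return acc + (1.0 - sum(omegas)) * col_j
```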
Let 𝐁∈ℤ_2^n × m be a binary matrix with n ≥ m. Furthermore suppose the tuples of parameters = (d_, d'_, m_, t_) and = (d_, d_', m_, t_) are given which give depth D_ = d_ + 1 + t_(d'_ + log(m_) + 2) and D_ = d_ + 1 + t_(d'_ + log( m_) + 2) to the and functions, respectively.
Then, the depth of the (Algorithm <ref>) is m(m-1)/2[D_L + D_C + 1] and its complexity is
𝒪(m^3[1 + d'_ + t_(d_ + log(m_))] + m^2[d'_ + t_ (d_ + m log (m_))] ).
We proceed by induction on m. For the base case, note that column j=0 is trivially in reduced form and Δ_1 will certainly be in reduced form after a single comparison with Δ_0.
For the inductive hypothesis, assume that for all j ≤ m-1, that the depth of the algorithm, after termination on a n × j matrix, is d(j) = j(j-1)/2 D.
Now consider an n × m matrix 𝐁 = [ _0 | ... | _m-2 | _m-1 ] ∈ℤ_2^n × m. Then the sub-matrix 𝐁' obtained by excluding the last column _m-1 is an n × (m-1) matrix, and thus has depth d(m-1) = (m-1)(m-2)/2 D by the inductive hypothesis. Let us now focus on the last column, x_m-1. Consider the outer loop corresponding to k = 0 in the algorithm. After the first inner loop finishes, the depth of column x'_m-1 is exactly d(m-1) + D = (m-1)(m-2)/2 D + D, where the last D term is added by the very last update.
However, in , for all subsequent k = 1, ..., m-1, every k loop adds exactly D to the depth only one time. This is because every run of the inner j_0 for loop runs in parallel with ciphertexts of lower depth than the most recent update of x'_m-1. A counting argument yields that the depth of column x'_m-1 after all loops are completed is [(m-1)(m-2)/2 D + D] + [(m-2)D] = m(m-1)/2 D, thus completing the induction.
As for the complexity, the optimized algorithm calls (which has complexity O(m + d_L' + t_L(d_L + m log m_L))) exactly m(m-1)/2 times but still calls (which has complexity O(d_C' + t_C(d_C + log m_C))) exactly m(m-1)(2m-1)/6 times. Thus, the overall complexity is as stated.
This algorithm, performed on a boundary matrix Δ∈ℤ_2^n × n, has depth n(n-1)/2 [ D_ + D_ + 1 ] and cost 𝒪(n^3 + n^2[d'_ + t_(d_ + n)]) for the choice of m_ = m_ = 2, assuming that d'_ > d'_, d_ > d_, and t_≈ t_.
§.§ Implementation Notes
In this section, we discuss our implementation of Algorithm <ref> using HE. We assume that a client generates a public key, a secret key, and an evaluation key for some suitable parameter set, and that the server knows the public and evaluation keys. Note that the server can evaluate circuits on ciphertexts but cannot decrypt; see Section <ref>.
By construction, the variables of Algorithm <ref> are vectors over the set ℝ of real numbers, and the approximate arithmetic is performed over ℝ. Additionally, as comparisons feature heavily in our implementation, we note that CKKS comparison circuits are comparable in amortized time to both BFV/BGV and TFHE schemes <cit.>. Therefore, the HEAAN <cit.> HE scheme, also known as the CKKS scheme, is a suitable choice for implementing Algorithm <ref>.
In CKKS, we have ℳ = ℤ[X]/⟨ X^N+1⟩ and 𝒞 = ℤ_Q[X]/⟨ X^N+1⟩×ℤ_Q[X]/⟨ X^N+1⟩.
Moreover, CKKS allows one to encode and encrypt N/2 numbers [x_0,...,x_N/2 -1], x_i∈ℝ, as a single ciphertext, where ciphertext operations can be performed component-wise and simultaneously. As a result, under the above CKKS setting, a client can encode and encrypt an n× n boundary matrix Δ in at least two different ways: as n ciphertexts c_0,...,c_n-1, where
c_i∈𝒞 represents the encryption of the i'th column of Δ, which requires n≤ N/2; or as a single ciphertext c, where c∈𝒞 represents the encryption of the “concatenated columns of Δ”-vector, which requires n≤√(N/2).
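The two packings can be illustrated in plaintext (a sketch; slot counts and the packing order are illustrative only).

```python
import numpy as np

def encode_by_columns(delta):
    """First method: one length-n plaintext vector per column (needs n <= N/2 slots)."""
    return [delta[:, j].astype(float) for j in range(delta.shape[1])]

def encode_concatenated(delta):
    """Second method: one plaintext vector holding all columns back to back (needs n^2 <= N/2 slots)."""
    return delta.astype(float).flatten(order="F")   # column-major concatenation
```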
For simplicity, we assume that a client encrypts Δ using the first method, and obtains and sends c_i∈𝒞 to the server. The server can use the evaluation key
and compute c_0^', ..., c_n-1^'←_(f; c_0, ..., c_n-1), using ciphertext addition and multiplication operations[In practice, one would have to utilize other ciphertext operations like .], where f is the arithmetic circuit induced by Algorithm <ref>. The server sends c_i^', i=0,...,n-1, back to the client, who can use the secret key and decrypt c_i^' to x_i^'. Note that, by our previous arguments following Algorithm <ref>, (x_i^') would match the i-th column of (Δ).
In order to get a more concrete sense of the implementation of Algorithm <ref> using CKKS, we consider CKKS
parameters at λ = 128-bit security level, and set N = 2^17 and Q = P· q_0·∏_i=1^50q_i, as a product of 52 primes with log_2Q = 3300, log_2P = 660, and log_2q_i≈δ = 51 < log_2q_0 < 2δ = 102; see Table 6 in <cit.>. This choice maximizes the depth L of circuits that HEAAN can evaluate, without bootstrapping, to L = 50 and the precision digits of data during computations is kept at 10.
Under this choice of parameters, a client can encode and encrypt boundary matrices of size n× n, where n ≤ N/2 = 2^16 (respectively, n ≤√(N/2) = 2^8) using the first (respectively, second) encoding approach. CKKS can handle circuits of depth up to L=50, so one would have to bootstrap <cit.> once the depth limit is exhausted.
Recall that our implementation uses Algorithm <ref>, a slightly modified and optimized version of Algorithm <ref>.
Our implementation, using an Intel(R) 16-Core(TM) i9-9900K 3.60GHz, can reduce a single encrypted 3× 3 matrix in 4.5 seconds with 40 bootstrappings
using the (non-cryptographic) CKKS parameters N=2^5, Q≈ 2^3188; and
= = (5, 5, 2, 5), where the parameters are chosen such that the underlying in our computations uses one of the optimal parameter sets reported in <cit.>. Note that reducing 3× 3 matrices takes 225 minutes using 128-bit secure CKKS parameters with N=2^17.
Note that if ciphertext slots are fully utilized then the amortized times would be 4.5/(2^4/(3+1)) = 1.125 and 225·60/(2^16/(3+1)) = 0.82 seconds, respectively.
§.§ Limitations and Potential Improvements
A major challenge in implementing using HE is the cubic co-factor n^3 in the depth of the underlying arithmetic circuit (even has a quadratic co-factor n^2; see Theorem <ref>).
As pointed out in the implementation scenario in Section <ref>, HEAAN can handle circuits up to depth 50, but the depth of quickly grows as n grows and exceeds 50 even for small values of n. Therefore, boundary matrices arising in practice may simply be too large to reduce. Indeed, the Vietoris-Rips and Čech filtrations have 2^m simplices in the worst case for a point cloud with m points <cit.>, since they define scales for every simplex in the powerset of the vertices (although it would be unusual to compute with simplices of all dimensions).
Another challenge is to encode and encrypt (n× n) boundary matrices for large n. As noted in Section <ref>, currently suggested HEAAN parameters <cit.> at 128-bit security level limits n < N/2 = 2^16 or n < √(N/2) = 2^8, depending on the choice of encoding. Therefore, substantial improvements would be required before an efficient implementation of can be realized.
A possible improvement would be to reduce the size of the boundary matrix by the choice of filtration, which is an active field of research. For example, for a point cloud of size m in dimension d, the (weighted) alpha <cit.> and sparse Rips filtrations <cit.> create complexes of size m^𝒪(d/2) and 𝒪(m) respectively <cit.>. Very recent theoretical results also justify computing the PH of small subsets of a point cloud to construct a distribution of PDs representing the topology of the original cloud <cit.>. This approach has the potential to massively reduce the size of each boundary matrix, whose reductions can be carried out completely in parallel.
Another improvement would come from relaxing our theoretical bounds for parameters to reduce the depth in Theorem <ref>. Section <ref> provides some motivating evidence of the feasibility and potential consequences of such an approach.
§.§ Empirical Results
The output of Algorithm <ref> is an approximately binary matrix
R'=(Δ;,) ∈ [0,1]^n× n
which approximates the output of (Δ). The key bound in parameter selection is that throughout , the approximate binary vectors must never disagree with the true underlying binary vectors by more than 1/2n, to ensure the output of returns an approximately binary vector with the same implied birth-death pairings as the exact .
How prevalent are the cases in which the maximum error between the approximate and the exact reduced matrix exceeds 1/2n? This question focuses on the accumulation of error throughout due to approximating exact operations in plaintext, and is independent of the noise growth that is accumulated by HE operations.
We explored this question in a fashion similar to the parameter relaxation experiment conducted in <cit.> by systematically increasing the parameters and of with respect to their depth and complexity to determine a minimum depth cofactor D = D_L + D_C + 1 (as defined in Theorem <ref>) that resulted in 100% accuracy. Specifically, for each parameter choice, we randomly sampled the space of 10× 10, upper-triangular, binary matrices and compared the results of exact and approximate reductions, recording when all entries were within 1/2n and/or 1/2 of the exact-reduced binary matrix. We found that the minimum depth (119) and complexity (55300) parameter pair for which 100% of the approximately reduced matrices were within 1/2n of their exact counterparts was = (3, 3, 2, 6) and = (3, 3, 2, 12), as reported in Table <ref>. That said, it may be that some matrices will exhibit an error in excess of the 1/2n tolerance for these parameter choices, although we expect such examples to be rare if they exist. By reducing t_ from 12 to 11, we found that only 81.2% of approximately reduced matrices had errors less than 1/2n and only 91.2% of matrices had maximum error less than 1/2, and so would still yield the correct reduced matrix after rounding (Table <ref>). By additionally raising t_L from 6 to 7 (so the circuit depth is again 119 but the complexity is 51800) we find that 98.6% of approximately reduced matrices had errors less than 1/2n and 100% of matrices had maximum error less than 1/2 (Table <ref>). These accuracy results suggest moderate sensitivity to the choice of some parameters.
We found that the same parameters for shown to correctly reduce random 10× 10 matrices, also correctly reduces the 12× 12 example boundary matrix given in Figure <ref>. Indeed, the maximum error in any component of the approximate reduced boundary matrix is 2.04e-3, well within the required 1/2n = 1/24 ≈ 0.041 tolerance to guarantee correct computation of column low 1s (Figure <ref> (A)).
By relaxing some choices of accuracy parameters we observe failure cases where produces approximate binary matrices that do not cast to the exact reduced matrix. For instance, relaxing t_ from 5 to 6 returns a matrix that fails to be in reduced form, as both columns ab and bc have the same low 1s (Figure <ref> (B)). By increasing t_ substantially, this issue is remedied, however, the approximately reduced matrix does not agree with the exact reduction (Figure <ref> (C)). It is interesting to note that, in this case, the low 1s are all correct, and so the correct persistence diagram is computed.
Relaxing parameters also leads to failure, as shown in (Figure <ref> (D)), where large errors accumulate during reduction leading to values that fall far outside the allowed range of [0, 1].
§ CONCLUDING REMARKS AND FUTURE RESEARCH
We developed a new algorithm that enables key TDA computations in the ciphertext space using HE. We proved the correctness of our proposed algorithm, provided detailed correctness and complexity analysis, and an implementation of our algorithms using CKKS from the OpenFHE library <cit.>. We also presented some concrete directions for improvement and provided experimental results. To our knowledge, this is the first attempt to introduce secure computing for TDA. It would be interesting to extend and improve our results, and to implement secure TDA algorithms on realistic data sets.
The algorithm represents one of several fundamental components of TDA machinery which challenge existing technologies in the HE space. Another is the calculation of distances between PDs, which rely on combinatorial optimization algorithms to minimize the cost of matchings between persistence pairs in pairs of PDs <cit.>. Others include the numerous methods being broadly deployed to vectorize PDs for use with downstream ML models <cit.>. HE-compatible implementations could allow remote processing of encrypted PDs and would immediately enable the use of existing implementations of encrypted Euclidean distance calculations <cit.> and encrypted ML models that take as input finite-dimensional feature vectors <cit.>. We are hopeful these challenges will have implications beyond TDA-ML use cases by soliciting contributions from the broader HE community, and that the constraints imposed by HE will motivate new TDA approaches.
Appendix
§ CORRECTNESS PROOFS
[Proof of Lemma <ref>]
First note that S(v)∈𝒟^n, since all entries are necessarily distinct by definition of Transformation <ref>, and so maxidx(S(v)) is defined.
Suppose that low(v)=k for some 0≤ k ≤ n-1. We have to show
S(v)[k]>S(v)[j] for all 0≤ j≤ n-1 and j ≠ k.
Case 1: 0≤ j <k. Note that v[k]=1 and that v[j]∈{0,1}. Therefore,
S(v)[k] = v[k]+k/n = 1 + k/n > v[j] + j/n = S(v)[j].
Case 2: 0≤ k<j≤ n-1. Note that v[j]=0 because low(v)=k is the largest index with v[k]=1. Therefore, S(v)[j] = v[j] + j/n = j/n and that
S(v)[k] = 1+k/n > (n-1)/n ≥ j/n = S(v)[j].
The same argument in the proof of Lemma <ref> would work for any transformation S that is strictly monotonically increasing on {0,1,…, n-1} and strictly bounded by 1. However, in the implementation of , which we use to approximate maxidx (and thus low), the rate of convergence is increasing with the distance between distinct values in v, and so a nonlinear choice of S may have the effect of decreasing the rate of convergence, at least for some input vectors. Further analysis would be required to find an optimal choice for S.
[Proof of Lemma <ref>]
Suppose that low(v)=k for some 0≤ k ≤ n-1. We have to show
S(v')[k]>S(v')[j] for all 0≤ j≤ n-1 and j ≠ k.
Case 1: 0≤ j <k. Note that v[k]=1, v'[k]>1-1/2n, v[j]∈{0,1},
and v'[j]<1+1/2n. Therefore,
S(v')[k] = v'[k]+k/n > (1 - 1/2n) + k/n
= (1+1/2n) + (k-1)/n > v'[j] + j/n = S(v')[j].
Case 2: 0≤ k<j≤ n-1. Note that v[j]=0 because low(v)=k is the largest index with v[k]=1. Moreover, v'[k]>1-1/2n and v'[j]<1/2n. Therefore,
S(v')[k] = v'[k]+k/n > (1-1/2n)+k/n ≥ 1-1/2n
= 1/2n + (n-1)/n > v'[j] + j/n = S(v')[j].
Let α > 0 and let e_j denote the standard n-dimensional basis vector, with e_j[j]=1 and e_j[i]=0 for all i ≠ j. Fix parameters d, d', m, t for the algorithm so that |(x; d,d',m,t) - e_maxidx(x)| < 2^-α, for all x∈ [1/2, 3/2)^n. Then, for any binary vector v∈{0,1}^n,
| (v; d, d', m, t) - low(v) | < n(n-1)/2 (2^-α),
where , low, and are computed as described in .
To ease notation let b = (v; d, d', m, t). Assume the parameters d, d', m and t have been chosen so that
| b[i] - e_j[i] | < 2^-α
for all 0 ≤ i ≤ n-1, where j = low(v) (which equals the maxidx of the transformed input, by Lemma <ref>).
Then
| (v; d, d', m, t) - low(v) | = | ∑_i = 0^n-1 ib[i] - ∑_i = 0^n-1 i e_j[i] |
= | ∑_i = 0^n-1 i(b[i] - e_j[i]) |
≤∑_i = 0^n-1| i(b[i] - e_j[i]) |
< 2^-α∑_i = 0^n-1 i
= n(n-1)/2(2^-α).
Let α > 0 and fix parameters d, d', m, t for the algorithm so that |(x; d,d',m,t) - e_maxidx(x)| < 2^-α, for all x∈ [1/2, 3/2)^n. Further assume v'∈ [0, 1]^n and v∈{ 0, 1 }^n are such that |v' - v| < 1/2n. Then
| (v'; d, d', m, t) - (v; d, d', m, t) | < n(n-1) 2^-α
Recall that the algorithm presented in Algorithm <ref> requires the input vectors to undergo two transformations, S and T_, before being fed into the algorithm. Let x' = T_(S(v')) and x = T_(S(v)). The problem now reduces to bounding
| (x'; d, d', m, t) - (x; d, d', m, t) |
where | x' - x| < 1/4n < 1/2n. By Lemmas <ref> and <ref>
maxidx(x') = maxidx(T_(S(v'))) = low(v)
= maxidx(T_(S(v))) = maxidx(x),
and so a triangle inequality obtained by adding and subtracting e_maxidx(x') and e_maxidx(x) yields
| (x'; d, d', m, t) - (x; d, d', m, t) |
< 2^-α + 0 + 2^-α = 2·2^-α.
Letting
b = (x; d, d', m, t)
b' = (x'; d, d', m, t),
we have
| (v'; d, d', m, t) - (v; d, d', m, t) | = | ∑_i = 0^n-1 ib'[i] - ∑_i = 0^n-1 i b[i] |
= | ∑_i = 0^n-1 i(b'[i] - b[i]) |
≤ ∑_i = 0^n-1| i(b'[i] - b[i]) | < 2·2^-α∑_i = 0^n-1 i
= n(n-1)/2 (2·2^-α) = n(n-1) 2^-α.
§ PARAMETERS PROOFS
Fix n ≥ 2 and assume v∈{ 0, 1 }^n and v'∈ [0, 1]^n satisfy | v - v'| ≤ε/2n, for some 0 ≤ε < 1. Let x' = T_(S(v')) and denote the (i+1)-th smallest value of x' by x'_(i), so min{x'[i] | 0 ≤ i ≤ n-1} =: x'_(0) < x'_(1) < … < x'_(n-1) := max{x'[i] | 0 ≤ i ≤ n-1}. Let
c = x'_(n-1)/x'_(n-2)
be the ratio of the largest coordinate value of x' over the second largest value. Then
c ≥ 1 + (2-2ε)/(6n-4+ε).
Suppose low(v) = k. By Lemma <ref>, x'[k] has the highest value in the vector. Define the two sets
M_1 = {j | v[j] = 1 and j < k}
= {j | v'[j] ≥ 1 - ε/2n and j < k }
and
M_0 = {j | v[j] = 0} = {j | v'[j] ≤ε/2n}.
There are two cases to consider, either M_1 is empty or not.
Case 1: If M_1 is empty, let m = max M_0. Since S is an increasing function with respect to index, and T_ is strictly increasing we have that T_(S(v'))[m] = x'[m] is necessarily the second highest value. This is because
x'[m] = (v'[m] + m/n + 1)/2 ≥ (m/n + 1)/2 > (ε/2n + j/n + 1)/2 ≥ (v'[j] + j/n + 1)/2 = x'[j]
for 0 ≤ j < m ≤ n-1. The middle inequality follows because m,n ∈ℤ and m > j implies m-j ≥ 1, so m/n ≥ 1/n + j/n > ε/2n + j/n. Thus,
x'_(n-1)/x'_(n-2) = x'[k]/x'[m] > (1 - ε/2n + k/n + 1)/(ε/2n + (n-1)/n + 1)
= 1 + (2k+2-2ε)/(4n-2+ε) ≥ 1 + (2-2ε)/(4n-2+ε).
Case 2: If M_1 is not empty, let m = max M_1. We first show that x'[i] > x'[j] for all i ∈ M_1, j ∈ M_0; that is, all transformed approximate 1's are larger than all transformed approximate 0's. Let j∈ M_0 and i∈ M_1 be arbitrary. Then
v'[j] + j/n < ε/2n + j/n ≤ε/2n + n-1/n
≤ 1 - 1/2n≤ 1 - ε/2n + i/n < v'[i] + i/n
for i ≥ 0. Since T_ is an increasing function, we have that x'[j] < x'[i] as desired. Thus, the second largest coordinate of x' is x'[m] > (1 - ε/2n + m/n + 1)/2, where m = max M_1. Necessarily m ≤ k-1, and so we have that
x'_(n-1)/x'_(n-2) = x'[k]/x'[m] > (1 - ε/2n + k/n + 1)/(1 + (k-1)/n + 1)
= 1 + (2 - ε)/(4n+2k-2) ≥ 1 + (2 - ε)/(6n-4).
Thus, as n and ε were chosen from the beginning, we know that the ratio between the largest and second largest value for any “approximate” binary vector is bounded below by
min{1 + (2-2ε)/(4n-2+ε), 1 + (2-ε)/(6n-4)}. However, both (2-2ε)/(4n-2+ε) and (2-ε)/(6n-4) are greater than or equal to (2-2ε)/(6n-4+ε): the former is immediate for n ≥ 1, and the latter holds because
(2 - ε)/(6n-4) ≥ (2-2ε)/(6n-4) ≥ (2-2ε)/(6n-4+ε), for 0 ≤ε < 1.
And so, it follows that
c ≥min{1 + (2-2ε)/(4n-2+ε), 1 + (2-ε)/(6n-4)}≥ 1 + (2-2ε)/(6n-4+ε).
Assume v∈{ 0, 1 }^n and v'∈ [0, 1]^n satisfy | v - v'| ≤ε/2n, for some 0 ≤ε < 1. Consider Theorem 5 in <cit.>, which gives the parameters (d, d', m, t) for the function for a given desired accuracy of 2^-α. If α is chosen such that
α > log(3) + 2log(n) - log(δ) - 1
then |(v'; d, d', m, t) - low(v)| < δ.
We note that from Theorem <ref> it suffices to have
(3/2) n(n-1) 2^-α < 3n^2/2^{α+1} < δ
⇔ 2^{α+1} > 3n^2/δ
⇔ α > log(3) + 2log(n) - log(δ) - 1
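In practice the lemma can be read as a recipe for choosing α. A minimal sketch (assuming, as in the derivation above, that all logarithms are base 2; the helper name is ours):

```python
from math import floor, log2

def required_alpha(n: int, delta: float) -> int:
    """Smallest integer alpha satisfying alpha > log(3) + 2*log(n) - log(delta) - 1."""
    return floor(log2(3) + 2 * log2(n) - log2(delta) - 1) + 1

# e.g. n = 16 coordinates and a target error delta = 1e-3 give alpha = 19
print(required_alpha(16, 1e-3))
```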
[Proof of Theorem <ref>]
Lemma <ref> provides the corresponding α needed for a desired δ-error in . That is, if one desires error δ in |(v') - low(v)|, then one may choose α > log(3) + 2log(n) - log(δ)-1.
A lower bound on c was established in Proposition <ref>: c ≥ 1 + (2-2ε)/(6n-4+ε). But this is equivalent to saying that
-log(log(c)) ≤ -log(log(1 + (2-2ε)/(6n-4+ε)))
and hence
log(α + 1 + log(n)) - log(log(c)) ≤ log(α + 1 + log(n)) - log(log(1 + (2-2ε)/(6n-4+ε)))
Therefore, if we choose
t ≥ [log(α + 1 + log(n)) - log(log(1 + (2-2ε)/(6n-4+ε)))] / log(m)
we fulfill the former inequality in Theorem 5 (from <cit.>).
At this point, both α and t are determined. As min{d, d'} depends upon this choice of parameters, our choice of min{d, d'} is also determined.
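The proof can be turned into a small parameter-selection routine. The sketch below combines the lower bound on c from the Proposition with the condition on t; the helper name is ours and base-2 logarithms are assumed, as in the inequalities above:

```python
from math import ceil, log2

def select_t(alpha: float, n: int, eps: float, m: int) -> int:
    """Depth t fulfilling t >= [log(alpha + 1 + log n) - log(log c_low)] / log m,
    with c_low = 1 + (2 - 2*eps)/(6*n - 4 + eps) from the Proposition above."""
    c_low = 1 + (2 - 2 * eps) / (6 * n - 4 + eps)
    return ceil((log2(alpha + 1 + log2(n)) - log2(log2(c_low))) / log2(m))

# Example with hypothetical values: n = 16, eps = 0.5, base m = 4
print(select_t(alpha=19, n=16, eps=0.5, m=4))
```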
§ CORRECTNESS PROOFS
[Proof of Theorem <ref>]
Recall Lemma <ref> and the following chain of if and only if implications:
lowcomp(l_, l_) = 1 ⇔ low() = low() ⇔ϕ≥ |_x' - _y'|
⇔ϕ^2 ≥ (_x' - _y')^2 ⇔ T_(ϕ^2) ≥ T_(_x' - _y')^2
There are two cases: lowcomp(l_, l_) = 1 or lowcomp(l_, l_) = 0.
Case 1: lowcomp(l_, l_) = 1. Then T_(ϕ^2) ≥ T_(_x' - _y')^2
which implies that
|(T_(ϕ^2), T_(_x' - _y')^2; d, d', m, t)
- comp(T_(ϕ^2), T_(_x' - _y')^2)| < η
|(T_(ϕ^2), T_(_x' - _y')^2; d, d', m, t) - 1| < η
1 - η < (T_(ϕ^2), T_(_x' - _y')^2; d, d', m, t) < 1
1 - η < (_x', _y'; d, d', m, t) < 1.
Case 2: lowcomp(l_, l_) = 0. Then T_(ϕ^2) < T_(_x' - _y')^2
which implies that
|(T_(ϕ^2), T_(_x' - _y')^2; d, d', m, t)
- comp(T_(ϕ^2), T_(_x' - _y')^2)| < η
|(T_(ϕ^2), T_(_x' - _y')^2; d, d', m, t) - 0| < η
0 < (T_(ϕ^2), T_(_x' - _y')^2; d, d', m, t) < η
0 < (_x', _y'; d, d', m, t) < η.
§ PARAMETERS PROOFS
Consider positive a < b < c. Then the value of b which optimizes the problem
max_b ∈ (a, c)( min{b/a, c/b})
is b = √(ac). In other words, b is equal to the geometric mean of the endpoints. Furthermore, for this choice of b, we have that
max_b ∈ (a, c)( min{b/a, c/b}) = √(c/a).
Suppose b = √(ac). Then b/a = c/b = √(c/a). If b > √(ac), then b/a > c/b, implying that the minimum of the two ratios is c/b. But then c/b < c/√(ac) = √(c/a). Similarly, if b < √(ac), then b/a < c/b, so that the minimum of the two ratios is now b/a. It follows that b/a < √(ac)/a = √(c/a). Thus the value of b over (a,c) which maximizes the min{b/a,c/b} is b = √(ac) and the maximum value is √(c/a).
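A brief numerical check of this proposition (illustrative values only):

```python
import numpy as np

# min(b/a, c/b) over b in (a, c) is maximised at b = sqrt(a*c), value sqrt(c/a)
a, c = 2.0, 18.0
b = np.linspace(a, c, 100001)[1:-1]
vals = np.minimum(b / a, c / b)
print(b[np.argmax(vals)], np.sqrt(a * c))   # both close to 6.0
print(vals.max(), np.sqrt(c / a))           # both close to 3.0
```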
[Proof of Corollary <ref>]
The value of T_(ϕ^2) comes immediately from applying the preceding Proposition <ref>. This proposition also establishes c's lower bound, as
c = max{ T_(ϕ^2), T_((_' - _')^2) }/min{ T_(ϕ^2), T_((_' - _')^2) }
> min{T_(ϕ^2)/T_((2δ)^2) , T_((1-2δ)^2)/T_(ϕ^2)} = √(T_((1-2δ)^2)/T_((2δ)^2))
= √((1/2 + ((1-2δ)/n)^2)/(1/2 + (2δ/n)^2)) = √((n^2 + 2(1-2δ)^2)/(n^2 + 2(2δ)^2)).
Consider Theorem <ref>, which gives the parameters (d, d', m, t) for the function for a given desired accuracy of 2^-α. If α is chosen such that α > -log(η) then (and by extension, ) has η-error. That is,
|(_x', _y', ϕ; d, d', m, t) - lowcomp(l_, l_)| < η
If α > -log(η), then 2^-α < η. The result follows from Theorem <ref>.
[Proof of Theorem <ref>]
For
ϕ = n√(√((1/2 + (2δ/n)^2)(1/2 + ((1-2δ)/n)^2)) - 1/2),
one may calculate
T_(ϕ^2) = √((1/2 + (2δ/n)^2)(1/2 + ((1-2δ)/n)^2)).
By Corollary <ref>, c is bounded below by √((n^2 + 2(1-2δ)^2)/(n^2 + 2(2δ)^2)). But this is equivalent to saying that
-log(log(c)) ≤ -log(log(√((n^2 + 2(1-2δ)^2)/(n^2 + 2(2δ)^2))))
and hence
log(α + 2) - log(log(c)) ≤ log(α + 2) - log(log(√((n^2 + 2(1-2δ)^2)/(n^2 + 2(2δ)^2)))).
Therefore, if we choose
t ≥ (1/log m)[ log(α + 2)
- log(log(√((n^2 + 2(1-2δ)^2)/(n^2 + 2(2δ)^2)))) ],
we fulfill the former inequality in Theorem 4.
Furthermore, Lemma <ref> determines what α must be to achieve the desired η-error in . At this point, with a fixed choice of m, Theorem 4 from <cit.> establishes the remaining parameters d, d', and t.
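For concreteness, the choice of ϕ and the resulting lower bound on c can be evaluated numerically. The sketch below assumes the transformed value T(y) = 1/2 + y/n², which is the form implied by the expressions T(ϕ²) and T((2δ)²) appearing in the proof; it is an illustration, not the paper's implementation:

```python
from math import sqrt

def phi_and_c_bound(n: int, delta: float):
    """phi from the theorem above and the associated lower bound on c.
    Assumes T(y) = 1/2 + y/n**2, as implied by the expressions in the proof."""
    t_phi2 = sqrt((0.5 + (2 * delta / n) ** 2) * (0.5 + ((1 - 2 * delta) / n) ** 2))
    phi = n * sqrt(t_phi2 - 0.5)            # so that T(phi**2) is the geometric mean
    c_low = sqrt((n ** 2 + 2 * (1 - 2 * delta) ** 2) / (n ** 2 + 2 * (2 * delta) ** 2))
    return phi, t_phi2, c_low

print(phi_and_c_bound(n=16, delta=0.1))
```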
|
http://arxiv.org/abs/2307.02527v1
|
20230705180000
|
The AMIGA sample of isolated galaxies. XIV. Disc breaks and interactions through ultra-deep optical imaging
|
[
"P. M. Sánchez-Alarcón",
"J. Román",
"J. H. Knapen",
"L. Verdes-Montenegro",
"S. Comerón",
"R. M. Rich",
"J. E. Beckman",
"M. Argudo-Fernández",
"P. Ramírez-Moreta",
"J. Blasco",
"E. Unda-Sanzana",
"J. Garrido",
"S. Sánchez-Exposito"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Instituto de Astrofísica de Canarias, c/ Vía Láctea s/n, E-38205, La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna, E-38206, La Laguna, Tenerife, Spain
Instituto de Astrofísica de Andalucía (CSIC), Granada, Spain
Kapteyn Astronomical Institute, University of Groningen, PO Box 800, NL-9700 AV Groningen, the Netherlands
Department of Physics & Astronomy, University of California Los Angeles, 430 Portola Plaza, Los Angeles, CA 90095-1547, USA
Departamento de Física Teórica y del Cosmos Universidad de Granada, 18071 Granada, Spain
Instituto Universitario Carlos I de Física Teórica y Computacional, Universidad de Granada, 18071 Granada, Spain
ESA NEO Coordination Centre, Via Galileo Galilei, 00044 Frascati (RM), Italy
GMV, Isaac Newton 11, Tres Cantos, 28760 Madrid, Spain
Centro de Astronomía (CITEVA), Universidad de Antofagasta, Avda. U. de Antofagasta 02800, Antofagasta, Chile
In the standard cosmological model of galaxy evolution, mergers and interactions play a fundamental role in shaping galaxies. Galaxies that are currently isolated are thus interesting, allowing us to distinguish between internal and external processes affecting the galactic structure. However, current observational limits may obscure crucial information in the low-mass or low-brightness regime.
We use optical imaging of a subsample of the AMIGA catalogue of isolated galaxies to explore the impact of different factors on the structure of these galaxies. In particular, we study the type of disc break as a function of the degree of isolation and the presence of interaction indicators like tidal streams or plumes which are only detectable in the ultra-low surface brightness regime.
We present ultra-deep optical imaging in the r-band of a sample of 25 low-redshift (z < 0.035) isolated galaxies. Through careful data processing and analysis techniques, the nominal surface brightness limits achieved are comparable to those to be obtained on the 10-year LSST coadds (μ_r,lim ≳ 29.5 mag arcsec^-2 [3σ; 10"×10"]). We place special emphasis on preserving the low surface brightness features throughout the processing.
The extreme depth of our imaging allows us to study the interaction signatures of 20 galaxies, given that the presence of Galactic cirrus is a strong limiting factor in the characterisation of interactions for the remaining 5 of them. We detect previously unreported interaction features in 8 (40%±14%) galaxies in our sample.
We identify 9 galaxies (36%±10%) showing an exponential disc (Type I), 14 galaxies (56%±10%) with down-bending (Type II) profile and only 2 galaxies (8%±5%) with up-bending (Type III) profiles.
Isolated galaxies have considerably more purely exponential discs and fewer up-bending surface brightness profiles than field or cluster galaxies. We find clear minor merger activity in some of the galaxies with single exponential or down-bending profiles, and both of the galaxies with up-bending profiles show signatures of a past interaction.
We show the importance of ultra-deep optical imaging in revealing the presence of faint external features in galaxies which indicate a probable history of interaction. We confirm that up-bending profiles are likely produced by major mergers while down-bending profiles are probably formed by a threshold in star formation. Unperturbed galaxies, evolving slowly with a low star formation rate could induce the high rate of Type I discs in isolated galaxies.
The AMIGA sample of isolated galaxies
P. M. Sánchez-Alarcón 1,[email protected]
J. Román 1,2,3,[email protected]
J. H. Knapen 1,2
L. Verdes-Montenegro 3
S. Comerón 2,1
R. M. Rich 5
J. E. Beckman 1,2
M. Argudo-Fernández 6,7
P. Ramírez-Moreta 8,9
J. Blasco 6
E. Unda-Sanzana 10
J. Garrido 3
S. Sánchez-Exposito 3.
August 1, 2023
§ INTRODUCTION
Present-day galactic discs are snapshots of galaxy evolution resulting from galaxy formation and evolution through cosmic time. The study of galaxies with a variety of morphologies, evolutionary stages, and environments helps us to assemble a global picture to explain all the observed features. One of the simplest, yet most effective ways to classify galactic structures is through their surface brightness profiles. These profiles were initially characterised with a single exponential decay by <cit.> and <cit.>, but the current view is considerably more complex. Following the observation of breaks in profiles by <cit.> and <cit.> at the outer regions of edge-on galaxies, work by <cit.> and <cit.> considered breaks over a range of galactocentric radii and in less-inclined galaxies. They established a classification based on the global shape of the disc profile: Type I or pure exponential for profiles with no breaks, Type II or down-bending profiles with outer slopes steeper than the inner ones <cit.> and Type III or up-bending profiles where the exponential decline is steeper in the inner part of the disc than in the outer part <cit.>. This classification has been helpful since galaxies show different average properties for each type, suggesting a different origin or evolution.
One of the most characteristic features in discs is the drastic change in age around the break radius, noticed as a "U-shape" in the colour profiles, while the mass surface density profile remains relatively constant <cit.>. This observational feature has been tentatively explained as the result of gas accretion plus a density threshold in the star formation, and subsequent redistribution of mass by radial migration <cit.> which has been described as inside-out formation of the disc <cit.>.
The environment appears to play a key role in the frequency of each break type <cit.>. In general, high-density environments favour up-bending Type III profiles <cit.>. This is not surprising, given that the environment is one of the most influential factors in shaping galactic morphology <cit.>. However, internal processes <cit.> or the accretion of cold gas <cit.> are also capable of transforming the characteristics of the galaxies. Correlations of the type of break (in particular the Type III fraction) with internal parameters <cit.> and with possible interactions <cit.> are also found. Adding more complexity, instrumental effects including background subtraction or scattered light can have a significant effect on the photometric profiles, especially at extremely low surface brightness <cit.>. This means that determining the specific impact of different processes is not a straightforward task. A possible approach is to study isolated galaxies, excluding or diminishing the impact of environmental processes <cit.>.
Determining and quantifying the isolation of a galaxy is not a simple task. The Analysis of the interstellar Medium of Isolated GAlaxies (AMIGA) Project <cit.> is an exhaustive study of galaxies isolated from major companions, based on the original catalogue of isolated galaxies (CIG) presented by <cit.> and later revised and quantified by <cit.> and <cit.>. Despite the efforts shown in these works to quantify minor interaction features, low signal-to-noise spectroscopic and imaging surveys may fail to identify the presence of faint surface brightness satellites around galaxies classified as isolated. In fact, numerical simulations based on the Λ-CDM cosmological paradigm predict an average of one low surface brightness feature per galaxy due to minor interactions at a surface brightness level of μ = 29 mag arcsec^-2 <cit.> and we can expect many galaxy features fainter than μ = 30 mag arcsec^-2 <cit.>.
The interest in low surface brightness science is confronted with significant challenges due to numerous observational limitations. Advances have been made in recent years in developing deep optical surveys <cit.>, characterising scattered light <cit.>, improving observational <cit.> and data processing techniques <cit.>, and more <cit.>. Indeed, increasingly sophisticated studies are able to reveal the presence of satellites <cit.> and tidal features <cit.> at increasingly lower surface brightness, but a long way remains to correctly detect and classify all the structures predicted by simulations <cit.>.
In this work, we carry out an ultra-deep imaging study of isolated galaxies from the AMIGA catalogue in order to shed light on the structural differences between these isolated galaxies and galaxies in higher-density environments. In particular, we are interested in: 1) the types of breaks in the discs of isolated galaxies in comparison with those in higher-density environments. This is something not yet done in the literature for isolated galaxies, and so far has only been carried out comparing higher-density environments such as clusters, and groups with simply the "field". 2) Using low surface brightness imaging to explore the presence of minor interactions in these isolated galaxies. 3) Explore possible correlations between the type of break, presence of minor interactions and density, that can help elucidate the dominant factors in shaping the disc of galaxies.
This work is structured as follows: In Section <ref> we describe the data sample and the reduction procedure followed. In Section <ref> we explain the methods used to measure the surface brightness profiles, classify the profiles by type and detect signs of interaction. In Section <ref> we present the results that are discussed in Section <ref>. The conclusions of our work are summarised in Section <ref>. We adopt the values of the cosmological constants H_0 = 70 km s^-1 Mpc^-1, T_0=2.725 K, and Ω_m=0.3. Galactic extinction is corrected following <cit.>. We use the AB photometric system.
§ SAMPLE, DATA AND PROCESSING
§.§ Sample selection and observations
We use the AMIGA sample <cit.>, based on the original CIG catalogue by <cit.>. The latest revision of the isolation parameters of the AMIGA catalogue was carried out by <cit.>, a work on which we base the selection of our sample.
Our selection criteria are as follows. The targets belong to the AMIGA catalogue. We require that the galaxies have a reliable determination of the distance <cit.> as well as a detection in H i, in order to also further explore possible correlations between the fainter optical morphology and their gas content. The photometric isolation parameters measured by <cit.> are the local number density of neighbours galaxies, η_k,p and the tidal strength, Q_Kar,p. We require galaxies to have a local number density of neighbouring galaxies η_k,p<2.7.
We observed 25 galaxies meeting these criteria with different telescopes. These galaxies have morphologies that range from Hubble types of 3≤T≤5, similar to the distribution of the complete AMIGA catalogue <cit.>. Here we briefly describe the instrumentation used: 1) The Isaac Newton Telescope (INT), located in the Observatorio del Roque de los Muchachos in La Palma, Spain, has a 2.5 m diameter primary mirror. We used the Wide Field Camera (WFC), a four-CCD mosaic covering 33 arcmin on a side with a pixel scale of 0.333 arcsec. 2) The VLT Survey Telescope (VST), located in Cerro Paranal, Chile. The VST has a 2.65 m primary mirror. We used OmegaCAM which has 32 CCDs covering a field-of-view of approximately 1 degree^2, with a pixel scale of 0.21 arcsec. 3) The 0.7 m Jeanne-Rich Telescope (JRT) is located at the Polaris Observatory Association site, Pine Mountain, California. The camera covers approximately 40 arcmin on a side with a pixel scale of 1.114 arcsec.
Due to the high observational cost of ultra-deep optical imaging, and to optimise the detection of faint features, we only obtained r-band data. Observations were carried out mostly in dark time, although grey nights with little Moon were also used. In the worst cases, the Moon was far enough away, and at a sufficiently low phase, not to produce any gradient in the image. Dithering patterns of tens of arcseconds were used in order to improve the flat field. The exposure time was set between one and five minutes for each individual frame.
The imposed surface brightness limits rule out the use of most general-purpose optical surveys. Only the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) <cit.> is able to fulfil our surface brightness requirements. HSC-SSP is a survey produced with the Subaru 8.2 m aperture telescope and the Hyper Suprime-Cam. We found four galaxies within the HSC-SSP footprint meeting our selection criteria, that are included in our sample. We used the second data release of the survey <cit.>. Although there are additional filters, for the purposes of our work we only use the r band data.
We set a requirement of μ_r,lim > 29.5 mag arcsec^-2 for the surface brightness measured as 3σ in 10^''×10^'' boxes following the nominal depth description by <cit.>. This limit is set as a compromise between images deep enough to detect faint structures and the observational cost. We make an exception to this limit for data from the JRT. This telescope has a small aperture but is built to be extremely efficient in the low surface brightness regime. Therefore, although the Poissonian noise with which the surface brightness limits are measured will tend to be higher for this telescope, the detectability of extremely low surface brightness features is comparable to data of higher nominal depth, as we will show in the results section. Additionally, on this telescope, we use a broad luminance band, in order to maximise detection. This type of band has been shown to be similar to the r band <cit.>. We therefore set a limit of μ_L,lim > 28.5 mag arcsec^-2 for data from the JRT.
§.§ Data Reduction
The observations were reduced following a procedure aimed at preserving the low surface brightness features. First, a bias subtraction (and dark if necessary) is performed, following an ordinary procedure of combining bias images (and dark). For the flat-fielding, we use the science images themselves. This procedure consists of using heavily masked science images, masked with specialised software such as SExtractor <cit.> and Noisechisel <cit.>, which are normalised in flux and subsequently combined using a resistant mean algorithm to produce a flat that is representative of the sky background during the observations. This procedure, building the flat from the science images, has significant advantages over the usual dome or twilight flats. The main one is a considerable decrease in the strength of gradients present in the images, allowing, as we will detail later, a less aggressive subtraction of the sky background, and therefore, higher reliability of the characterisation of the faintest sources. An additional advantage is that the fringing structures contained in the science images are perfectly corrected, allowing to improve the quality of the final images. Given that the sensitivity of the CCD cameras can vary over time, and that the time range between different observations is very wide (of the order of years), the flats are built and applied to data sets taken close in time, typically during the few days of each observation campaign.
Once the images are reduced, we proceed with their combination. First, the images are astrometrically calibrated, using the Astrometry.net software package <cit.> to obtain an approximate solution, and SCAMP <cit.> to obtain the final astrometry. The next step is the coaddition of the individual frames, which is the most crucial step in the process. We perform an iterative loop converging on what we consider the final coadd. The procedure is as follows. First, we obtain a seed coadd, which is the starting point of the iteration. This coadd is produced by subtracting a constant sky value from each of the individual frames to be combined. The consequence of this is that we preserve the lowest surface brightness structures that were not removed by the sky subtraction. However, the frequent gradients in the individual frames produce considerable fluctuations in the sky background which remain in the coadd. This coadd is heavily masked with Noisechisel, choosing parameters that maximise the masking of the real sources while trying to leave the smooth gradients of the sky background unmasked. This mask produced with the coadd is applied to the individual frames and we perform a smooth polynomial sky fitting to the individual masked frames. We use Zernike polynomials <cit.> as sky-fitting surfaces, always with order n ≤ 4, using for each order n all the azimuthal components. This Zernike polynomial fitting produces smooth surfaces that do not cause oversubtraction around the galaxies. Once all the gradients of the individual frames are fitted and subtracted, they are combined to produce a new coadd that is used again as a seed to get a new mask. This process is iterated a number of times. Depending on how strong the gradients are, the number of iterations and the degree of the polynomial are varied to obtain an optimal result. In most cases, given the high quality of the flat-fielding, the polynomials have a low degree (n = 2, 3). In extreme cases, and mainly due to the presence of the Moon in the observations, we increase the degree of the polynomial (n = 4) in order to obtain an optimal final coadd. In general, the reliability of our sky background subtraction allows us to preserve the lower surface brightness structures within our depth limits. We do not find signs of oversubtraction in the images and profiles, such as high-contrast regions close to the end of the galaxies, or systematically truncated profiles (see Sect. <ref>). The combination process is performed by photometrically calibrating the individual frames, measuring their signal to noise, and combining them by means of a weighted mean, thus optimising the signal-to-noise ratio of the final coadd.
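As an illustration of the sky-fitting step described above, the sketch below fits a smooth low-order surface to the unmasked pixels of a single frame. For brevity it uses a 2-D Legendre series as a stand-in for the Zernike polynomials actually used in the paper; the function name and defaults are ours, not the pipeline's:

```python
import numpy as np
from numpy.polynomial import legendre

def fit_sky_surface(frame, mask, order=3):
    """Fit a smooth 2-D polynomial sky surface to the unmasked pixels of a frame.

    Simplified stand-in for the step described above (the paper fits Zernike
    polynomials of order n <= 4 to the masked individual frames).
    """
    ny, nx = frame.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    u = 2.0 * xx / (nx - 1) - 1.0                    # coordinates rescaled to [-1, 1]
    v = 2.0 * yy / (ny - 1) - 1.0
    pairs = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    basis = np.stack([legendre.legval(u, np.eye(order + 1)[i]) *
                      legendre.legval(v, np.eye(order + 1)[j]) for i, j in pairs])
    good = (~mask) & np.isfinite(frame)              # True where sky is visible
    coeffs, *_ = np.linalg.lstsq(basis[:, good].T, frame[good], rcond=None)
    return np.tensordot(coeffs, basis, axes=1)       # fitted sky over the full frame
```

In the iterative scheme described above, the surface returned by such a fit is subtracted from each frame, the frames are co-added, the coadd is re-masked, and the fit is repeated until the coadd converges.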
In Table <ref> we show the final sample. The depth of the images was calculated following the method of <cit.>, appendix A, according to a standard metric of 3 sigmas in 10^''×10^'' boxes. We can see that all galaxies have a nominal limiting surface brightness above 29.5 mag arcsec^-2, except those observed with the JRT, which, as already mentioned, have a slightly lower nominal depth which is compensated by having flatter fields on large angular scales. The total integration time of our campaign, excluding data from the HSC-SSP survey, is 110 hours.
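The nominal depths quoted in Table <ref> follow the standard metric of n-σ fluctuations measured on 10″×10″ areas. A hedged sketch of that calculation (the zero point and sky RMS below are placeholders; the exact recipe of the cited appendix should be consulted for details):

```python
import numpy as np

def sb_limit(sigma_pix, pixscale, zeropoint, nsigma=3.0, box=10.0):
    """Nominal surface brightness limit (mag arcsec^-2) for nsigma fluctuations
    measured on box x box arcsec^2 areas; sigma_pix is the per-pixel sky RMS."""
    return zeropoint - 2.5 * np.log10(nsigma * sigma_pix / (pixscale * box))

# e.g. sky RMS of 1.1 counts, 0.333 arcsec/pix, ZP = 30 mag -> ~30.0 mag arcsec^-2
print(sb_limit(1.1, 0.333, 30.0))
```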
In Fig. <ref> we show the galaxy CIG 340 together with a comparison with SDSS <cit.> and Legacy Survey data <cit.>. The difference in depth is significant between that obtained in our work (30.3 mag arcsec^-2), SDSS (27.6 mag arcsec^-2) and the Legacy Survey (28.8 mag arcsec^-2), all measured at [3σ; 10^''×10^'']. While the morphology of CIG 340 appears similar in the SDSS and the Legacy Survey, in our data we detect new structure, with a clear tidal stream to the south of CIG 340 and diffuse light appearing in the direction transverse to the disc, showing a halo-like structure. This highlights the considerable jump in detection power from previously existing data, and the capacity of our observations to reveal the presence of past minor interactions.
§ ANALYSIS
§.§ Masking procedure
The masking of light coming from sources other than the target galaxy is one of the most delicate processes needed to obtain the cleanest and deepest possible surface brightness profiles. This task not only requires extracting the signal of astronomical sources from the emission of the background but also requires the attribution (segmentation and deblending) of the signal to one particular source when there is an overlap.
We combine two of the most popular software tools used in astronomical detection, SExtractor <cit.> and NoiseChisel and Segment <cit.> part of the GNUastro package. First, we run NoiseChisel & Segment in hot and cold configuration modes for the whole set of images. This allows us to select a region occupied by the galaxy of interest. Since this algorithm can detect very low signal-to-noise ratios (lower than ≲ 1) this region extends to very low surface brightness (≲ 27 mag arcsec^-2), including the outskirts of the galaxy. We execute SExtractor with the parameters set to optimise the detection of point-like sources (the configuration file can be found in Appendix <ref>). All the sources detected outside the galaxy region are then masked.
To improve the detection of the faintest parts of the sources, we first smooth the image with the Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA <cit.>. We use an overestimation of the variance of the image in FABADA to obtain a slightly smoother result and thus larger masks. This allows the detection of even fainter point sources in the proximity of the outskirts of the galaxy.
Since we did not mask smaller sources inside the large region occupied by the galaxy we run an extra step to mask these objects. We mask all the sources detected by SExtractor outside a smaller region, occupied by the galaxy, that corresponds to a level of five times the standard deviation of the image. NoiseChisel is then run again on the image with the mask applied. This last step allowed the detection of faint extended regions in the image. Given the depth of our data, a considerable amount of faint extended regions (e.g., stellar haloes of background sources, Galactic cirrus, reflections from bright stars, and residual light) appear in the images. All regions that are not spatially connected with the galaxy region and that remain unmasked by the automatic procedure described above are then masked. Finally we visually inspect all masks to improve the masking of sources blended with the target galaxy.
§.§ Radial profiles
Photometric profiles of galaxies allow regions of approximately equal isophotal magnitude to be averaged to obtain a higher combined signal-to-noise ratio. This allows to reach lower limiting surface brightnesses than with two-dimensional imaging. However, although most galaxies have a simple morphology that allows the different isophotal radii to be fitted by a single elliptical aperture with a given position angle and ellipticity, prominent features in galaxies, such as bars, rings, spiral arms, warps, produce radial variations in the position angle and ellipticity. Additionally, the lower the surface brightness limits of the image, the more galaxies tend to vary their morphology in their outskirts, as other structures such as outer discs or stellar haloes appear. Thus, the isophotes of galaxies can no longer be modelled by fixed ellipses. We fit elliptical apertures to the image leaving the parameters of the ellipses free in each radial bin <cit.>, thus describing the different structures of the galaxy without any prior assumptions for the whole galactic structure.
We use the implementation in Astropy <cit.> of the iterative ellipse-fitting method described by <cit.>. This implementation needs the parameters of a first ellipse to initialise the iterative fitting. We measure the initial parameters using the image moments from a cropped binary image created from the mask image; we select the values above four times the standard deviation of the masked image described in the previous section. This step allows the definition of the morphology in an efficient way.
We then initialise the elliptical isophote analysis. The elliptical isophote fitting algorithm adjusts ellipses to isophotes of equal intensity pixel values in the images and then computes corrections for the geometrical parameters of the current ellipse by essentially “projecting” the fitted harmonic amplitudes onto the image plane. With this method, we can measure the radial surface brightness profile of the galaxy with a robust mean of the pixels inside the fitted isophotes.
We redefine the morphology of the galaxy using the geometrical parameters of the ellipses as fitted by the algorithm. As a verification step, we produce two other profiles, one using fixed ellipses at the morphology parameters of the galaxy and the other with rectangular apertures separated by a width of five arcseconds along the major axis. These profiles allow us to verify the correct fitting of the previous method by highlighting significant differences in the profiles.
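A minimal sketch of this isophote fit, assuming the ellipse-fitting implementation of the astropy-affiliated photutils package (the wrapper name and the handling of the initial guess are ours):

```python
from photutils.isophote import Ellipse, EllipseGeometry

def elliptical_profile(data, x0, y0, sma0, eps0, pa0):
    """Free ellipticity/PA isophote fit of a masked galaxy image.

    data: 2-D array (or numpy masked array) with contaminating sources masked;
    (x0, y0, sma0, eps0, pa0): initial ellipse from the image moments of the
    binary mask, with pa0 in radians.
    """
    geometry = EllipseGeometry(x0=x0, y0=y0, sma=sma0, eps=eps0, pa=pa0)
    isolist = Ellipse(data, geometry).fit_image()
    # mean isophotal intensity, ellipticity and position angle per semi-major axis
    return isolist.sma, isolist.intens, isolist.eps, isolist.pa
```

The surface brightness profile then follows from the mean isophotal intensities, e.g. μ = ZP − 2.5 log10(intens/pixscale²).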
To further improve the reliability of the profiles, we perform additional local sky background corrections. First, we create a preliminary profile with the elliptical apertures parameters fixed. We then select an annular aperture 5 arcsec wide around the galaxy where we reach the level of the local background (often a plateau or an infinite drop). We calculate the sky background value as the mode of the distribution of pixels inside the annular region fitting a Gaussian distribution. This provides a robust sky background reference associated with the galaxy location. In some cases, the surroundings of the galaxy are contaminated by diffuse emission from some other regions, in which case the sky apertures are measured at a larger distance.
Figure <ref> shows as example the image of the galaxy CIG 11 (UGC 139). In the left panel we show the surface brightness distribution of the image. In the middle panel we show the same image with a grey layer showing the masked regions. We also show the apertures used for the profiles and the radius where the break is detected (if present). In the right panel we show the surface brightness profile in its three versions: from fixed ellipticity and position angle, with elliptical and rectangular apertures and from elliptical apertures with adaptive ellipticity and position angle. The position of the break (if present) is denoted with a dashed line. In the lower panels we show the variation of ellipticity and position angle as a function of radius. In Appendix <ref> we show the figures for the rest of the galaxies in our sample.
§.§ Reliability of the profiles
In order to obtain reliable measurements in the extreme low surface brightness regime, numerous factors have to be taken into account. Most are related to the processing of the data and the instrumentation, such as data reduction and processing, scattered light as described by the point spread function (PSF), or the presence of reflections and artefacts in the images related to instrumentation. Additionally, the presence of Galactic cirrus is ubiquitous and may produce confusion depending on the degree of contamination of the target.
As discussed in Sect. <ref>, the data processing was carried out using techniques designed to be respectful with the extremely low surface brightness features. This is noticeable in the absence of oversubtraction of the profiles (see Fig. <ref>) in the fainter regions, and allows us to achieve in most cases surface brightness profiles reliable below 30 mag arcsec^2. However, as noted by <cit.>, and <cit.>, the light scattered by the bright part of the galaxy itself through the PSF has a decisive impact on the photometric profiles in this extremely low surface brightness regime. In order to obtain better reliability, a proper PSF deconvolution of the galaxy has to be performed. However, given that our data originates from several instruments over a very wide range in time (of the order of years), and that no specific observations of bright stars were carried out we lack a PSF model for each epoch and telescope with which to do the proper PSF deconvolution. Following <cit.> we estimate that the photometric profiles will be unaffected to a surface brightness of around 28 mag arcsec^-2. Since disc breaks are found to take place at surfaces brightness levels no lower than 26 mag arcsec^-2 <cit.> our reliability limit is more than sufficient to explore them. However, the lack of adequate PSF models rules out a potential quantitative study of truncations or stellar halos in the outermost regions of the galaxies in our sample.
The presence of Galactic cirrus in our images is also problematic due to the extremely low brightness reached (cirrus indeed appears clearly in some of our images). The maximum possible surface brightness of these cirrus features is 26 mag arcsec^-2 <cit.>. Considering that this problem affects mostly surface brightnesses below a value of approximately 26 mag arcsec^-2, we can again conclude that this does not have an impact on the study of disc breaks. It will, however, have a decisive impact on contaminating the outer regions of galaxies hiding possible interaction features. The presence of cirrus should therefore be taken into account in the study of the potential presence of minor interactions, as we will describe later on (Sect. <ref>).
§.§ Break identification and classification
We define a disc break as an abrupt change in the slope of the exponential disc of a galaxy. Following <cit.> and <cit.> we define three different types of profiles for our classification: Type I ≡ single exponential profile with no change in the slope; Type II ≡ down-bending break, with a steeper outer region; and Type III ≡ up-bending break, with a steeper inner region. We only search for the most prominent breaks, those which are the consequence of a change in the global structure and not due to a local change (such as irregular morphology due to prominent H ii regions). To characterise the breaks we use two different approaches. First, given the small sample, we classify the break radii and type through a visual inspection of the surface brightness profiles. Second, we use the statistical approach of <cit.> where a change point analysis is applied to identify the break radius. This method looks for significant changes in the smooth derivative of the profile. We measure the slope of the surface brightness profile <cit.> using the four nearest points for each radial distance and we smooth the resulting slope with a median filter. We follow a classification similar to that of previous works <cit.> for the consistency of the comparison. Fig. <ref> shows the break radii found by the two different methods. We estimate the errors as the resolution of the profiles in the region of the break (distance between each point). In most cases, both approaches converge to the same solution although the automatic method finds the break 2.06 kpc closer to the galactic centre than visual inspection. We expect an offset, as explained in <cit.>, due to the smoothing of the derivative of the profile and the cumulative sum, which can induce small offsets in the radius; thus, we decide to adopt the values of the radii found by the visual inspection. In Table <ref> we indicate the classification of the different types of profiles in our sample, together with the values of the break radii, found visually, and the surface brightness levels where the breaks occur.
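A simplified version of the automatic break detection is sketched below (np.gradient stands in for the four-point slope estimate, and the jump threshold is an illustrative choice, not a value taken from the paper):

```python
import numpy as np
from scipy.signal import medfilt

def find_break(radius, mu, kernel=5, min_jump=0.05):
    """Locate the most prominent break in a surface brightness profile mu(radius).

    Returns (break_radius, type) with type 'II' (down-bending), 'III' (up-bending)
    or (None, 'I') when no significant slope change is found.
    """
    slope = np.gradient(mu, radius)              # local slope in mag arcsec^-2 / kpc
    slope = medfilt(slope, kernel_size=kernel)   # suppress bumps from HII regions
    jump = np.diff(slope)
    i = int(np.argmax(np.abs(jump)))
    if abs(jump[i]) < min_jump:
        return None, "I"
    return radius[i], "II" if jump[i] > 0 else "III"
```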
§.§ Identification of interactions
We visually inspect our sample in search of signatures of perturbations following the definitions of <cit.>, and references therein. The large field of view of our images corresponds to at least 100 kpc, enough to explore the presence of interactions with confidence. In order to provide a simple and general classification with which to introduce a perturbation parameter, we distinguish galaxies according to their morphology as follows. Tidal Stream (T): The galaxy shows a tidal stream, elongated in shape consistent with an in-falling satellite in current interaction. Halo-Perturbed (H): The galaxy has asymmetries or debris in the outermost part of the disc. Cirrus (C): In the case of strong Galactic cirrus contamination, a correct interpretation of the degree of galaxy perturbation or interaction is not possible. In this case, we discard the galaxy as non-tractable, excluding it from the statistics that imply using the presence of interactions. When a galaxy appears fairly symmetrical in morphology with no signs of disturbance, no classification is given.
We describe the presence of interactions as effects from mergers in the outer regions of galaxies. These are distinguishable from lopsidedness potentially produced by gas <cit.>, since our surface brightness limits allow us to explore the outer or halo regions of galaxies, beyond simply tracing the central morphology where star formation is dominant.
The result of this classification for each galaxy is shown in Table <ref>, and in Fig. <ref> we show representative examples of the interaction classification scheme. In the top-row panels, we show a symmetric non-classified (left), halo-perturbed (middle), and tidal stream (right) examples. In the bottom panels we show three different examples of galaxies with strong Galactic cirrus contamination that do not allow us to assure the presence or absence of potential interaction features in the galaxies.
§ RESULTS
We present below individual comments on each of the galaxies with a short discussion of the decisions made in the different classifications. In Table <ref> we show the results of the classification of profiles for each galaxy, together with the radius at which a break has been found (if any) and its surface brightness level.
* CIG 11 [UGC 139]: Although this galaxy could have been classified as Type III+II at 10 + 20 kpc <cit.>, we classify it as a Type II. The reason is that extra flux originates from H ii regions seen in the inner disc region. This extra flux could be misinterpreted as Type III, although according to <cit.> Type III breaks occur further away. Thus, for the sake of a fair comparison, we classify this as a Type II break at 19 kpc. We also consider this galaxy as unperturbed due to its symmetric structure. Diffuse cirrus emission is present, but that does not prevent the detection of interactions.
* CIG 33 [NGC 237]: We consider this a clear Type III with a break radius of ∼12 kpc and a perturbed halo. The diffuse emission around the galaxy is asymmetric with a bump in the North-East region of the outskirts.
* CIG 59 [UGC 1167]: Exponential disc without break, with symmetric structure. Presence of instrumental reflection of a nearby star that when correctly masked does not contaminate the rest of the galaxy.
* CIG 94 [UGC 1706]: Type II break at ∼10 kpc, with absence of interaction signatures.
* CIG 96 [NGC 864]: Type III break at ∼16 kpc with perturbed halo. Extra flux at the North-East region of the disc outskirts.
* CIG 100 [UGC 1863]: Type II break at 5 kpc, symmetric structure. Possible Type III break around 18.5 kpc at 28 mag arcsec^-2. However, at this depth, we cannot distinguish a genuine break from instrumental effects such as the extended PSF contribution explained in Sect. <ref>.
* CIG 154 [UGC 3171]: Exponential disc with symmetric structure. Hints of a larger spiral arm at around ∼ 17 kpc in the southern region. The presence of filamentary cirrus prevents us from determining the presence of any faint interaction feature in the outer regions of the galaxy.
* CIG 279 [NGC 2644]: Exponential disc with symmetric structure. The presence of diffuse cirrus emission would not prevent the detection of interaction features if present. Possible Type III at ∼13 kpc; however, the depth and presence of cirrus do not allow us to distinguish between the possible origins.
* CIG 329 [NGC 2862]: Type II break at ∼16 kpc, clear tidal stream at the end of the disc in the South-East region. Presence of instrumental reflection of a nearby star that correctly masked does not contaminate the rest of the galaxy.
* CIG 335 [NGC 2870]: Type II break at ∼15 kpc. Signatures of overdensities and perturbations in the halo region so we classify this as a perturbed halo. Fig. <ref> shows an enhanced image of the galaxy for greater clarity.
* CIG 340 [IC 2487]: Type II at ∼14 kpc, clear tidal stream in the south region.
* CIG 512 [UGC 6903]: Symmetric galaxy with a Type II break at ∼9 kpc. Oversubtraction effects, possibly introduced by the Subaru pipeline, could cause the significant drop in brightness seen at 20 kpc.
* CIG 568 [UGC 8170]: Symmetric galaxy with a Type II break at ∼21 kpc.
* CIG 613 [UGC 9048]: Type II break at ∼40 kpc. The galaxy also exhibits signatures of a very faint, warped tidal stream. Appendix <ref> shows an enhanced image with an arrow showing this stream.
* CIG 616 [UGC 9088]: Type I exponential disc, with an elliptical-like shape in the halo region with extra flux perpendicular to the major axis. Extra flux in clumps in the outermost regions indicating past interaction. Classified as perturbed halo.
* CIG 626 [NGC 5584]: Symmetric galaxy with a featureless disc profile, Type I.
* CIG 744 [UGC 10437]: Although this galaxy seems to have some fluctuations between 7 kpc and 15 kpc that could be classified as breaks, the constant global decrease in average surface brightness in the disc leads us to classify it as an exponential disc (Type I). We identify the local fluctuations as coming from clumpy H ii regions in the arms. The absence of features indicating interactions makes us classify this galaxy as symmetric.
* CIG 772 [IC 1231]: Symmetric galaxy with Type II break at 17 kpc.
* CIG 800 [NGC 6347]: Type II break at ∼19 kpc. The galaxy is fully embedded in a region heavily contaminated by cirrus, making the classification of possible interaction features unfeasible. Furthermore, several stars lie in the line of sight, in the North-West region, and faint contamination due to the extended PSF can cause extra light at the end of the profile (see Sect. <ref>).
* CIG 838 [IC 1269]: Galaxy with an exponential Type I disc. From our optical imaging or the derived profile we do not detect any signs of interaction, even though the northern spiral arm shows asymmetries with respect to the other one. Some diffuse cirrus emission is present, but not enough to prevent the detection of faint interaction features.
* CIG 947 [NGC 7217]: Although this object can be misinterpreted as an early-type galaxy, high-resolution colour images show spiral structures with two blue rings in the inner and outer regions. The radial profile exhibits a clear exponential disc. We see clear signatures of cirrus contamination (Fig. <ref>) preventing any detection of possible interaction features.
* CIG 971 [UGC 12082]: Exponential disc, Type I. The galaxy is fully embedded in a region heavily contaminated by cirrus, making the classification of possible interaction features unfeasible. Furthermore, the field is crowded with stars. This contamination can also be seen in the rectangular radial profile as a plateau at ∼28 mag arcsec^-2 (as explained in Sect. <ref>).
* CIG 1002 [NGC 7451]: Type II break at 14 kpc. We detect a possible companion in the South-West region of the galaxy. However, there are no available spectra to confirm the association of the two galaxies. We do not see any interaction signature between them.
* CIG 1004 [NGC 7479]: Type II with a break radius of 15 kpc.
The difference between the fixed and free ellipticity surface brightness profiles comes from the presence of the extended stellar bar. We are also able to see the same break with the rectangular aperture. The galaxy is fully embedded in a region heavily contaminated by cirrus (see Fig.<ref>), making the classification of possible interaction features unfeasible from deep optical imaging. The asymmetry of the galaxy disc and spiral arms is well known, however, and may indicate a minor merger origin <cit.>.
* CIG 1047 [UGC 12857]: Type II break at 8 kpc. We see a warp in the southern part of the outer disc, so we classify this galaxy as a perturbed halo. Presence of diffuse cirrus emission that would not prevent the detection of interaction features if present.
In total, among 25 galaxies, we identify 9 Type I single exponential profiles with no significant break, 14 Type II down-bending breaks, and 2 Type III up-bending breaks. The overall statistics of the sample are 36%± 10% Type I, 56%± 10% Type II, and 8%± 5% Type III breaks. We estimate the uncertainties assuming a binomial distribution, ϵ = 100√(f(1-f)/N) [%], where f is the fraction of galaxies within each type and N is the total number of galaxies in our sample (N=25).
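The quoted fractions and uncertainties can be reproduced directly (sketch; N = 25 and the counts are those listed above):

```python
import numpy as np

def fraction_and_error(k, N=25):
    """Fraction (in %) of galaxies of a given type and its binomial uncertainty."""
    f = k / N
    return 100 * f, 100 * np.sqrt(f * (1 - f) / N)

for k in (9, 14, 2):                  # Type I, II, III counts
    print(fraction_and_error(k))      # approx (36, 10), (56, 10), (8, 5)
```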
§.§ Break type vs. interactions
We find 5 galaxies with strong contamination by Galactic cirrus clouds that do not allow us to confirm or rule out the presence of interactions; these 5 galaxies are thus excluded from this analysis. Among the remaining 20, we find 8 galaxies with signatures of interactions. Of these, 4 show asymmetries in the halo, 3 have tidal streams, and the remaining one shows both tidal streams and halo asymmetries. Therefore, 40%±14% of the isolated galaxies in our classified sample show the presence of interactions; out of these galaxies, 25%±10% show an asymmetric halo and 20%±8% show some tidal streams. We find 12 galaxies (60%±17%) with no interaction features.
Given the small sample size, we consider any type of interaction signature as a single class named perturbed (8 galaxies), while the remaining galaxies without cirrus contamination are classified as unperturbed (12 galaxies). In the right panel of Fig. <ref> we show the fraction of perturbed and unperturbed galaxies for each type of break. We find that for Type I breaks, 6 (83%± 37%) are unperturbed galaxies and 1 (17%± 17%) is perturbed; for Type II, 8 (58%± 22%) are unperturbed and 5 (42%± 19%) are perturbed. Finally, we find that the two galaxies with Type III are perturbed (100%± 71%). We estimate the errors assuming Poisson statistics since we have few galaxies in each type.
The low number of galaxies in our sample does not allow us to make strong statements. However, we find certain indications that are undoubtedly interesting. First, both Type III galaxies (CIG 33 and CIG 96) appear strongly perturbed. This is noticeable in the images in the Appendix, in which these two galaxies show an expanded halo with clear signs of strong disturbance, possibly due to a recent major merger. Second, we find a significantly higher fraction of perturbed galaxies (42% ± 19%) among Type II break hosts than among Type I (17% ± 17%).
§.§ Break type vs. environment
In the left panel of Fig. <ref> we show the fraction of surface brightness profile types in our sample of isolated galaxies in comparison with the previous work by <cit.> who explored the surface brightness type in a sample of 700 disc galaxies at low redshift (z < 0.063) using SDSS data. These galaxies were classified according to their environment into field (low-density) and cluster (high-density). The fractions of Type I, II, and III discs found by <cit.> are 29 (6%±1%), 343 (66%±2%), and 149 (29%±2%) in the field sample and 27 (15%±3%), 98 (56%±4%), and 50 (29%±3%) in the cluster sample. For the sake of a fair comparison, we estimate the <cit.> uncertainties in the same way as we do for our results. We compare these statistics with the sample of isolated galaxies from our work. As illustrated in Fig. <ref>, left panel, we find a considerably higher fraction of isolated Type I galaxies and a lower fraction of isolated Type III galaxies than what was found by <cit.> for field galaxies and clusters. The results for Type II are similar for different environments.
A Kolmogorov–Smirnov statistical test proves that our results are significantly different from those of <cit.> (P-value < 0.01). The highest difference between our results and those of the <cit.> sample is found for Type I discs. We find six times more single exponential discs in our isolated sample. This difference holds when compared to other studies, such as that by <cit.>, who found around ∼ 10-15 % of Type I disc in field late-type spirals, around two to three times less often than us. We also find a much lower fraction, around seven times lower, of Type III discs than <cit.>.
In Fig. <ref> we show a correlation test between the degree of isolation and the disc type of the galaxies in our sample. We plot the local number density of neighbours galaxies η_k,p and the tidal strength Q_Kar,p obtained by <cit.> for the 18 galaxies in our sample that have these parameters calculated. An increasing value for η_k,p or Q_Kar,p indicates a higher environmental density. We show as a comparison these parameters for other galaxy catalogues, including galaxies located in regions of higher density. In particular, we show values for isolated pairs of galaxies <cit.>; galaxy triplets <cit.>; galaxies in compact groups <cit.>; and galaxies in Abell clusters <cit.> computed by <cit.>. We find that the galaxies in our sample tend to be located at low values of η_k,p and Q_Kar,p, confirming their location in regions of low environmental density, a consequence of the sample selection criteria. The colour code indicates the type of surface brightness profile, while the symbols show the two populations according to our classification, perturbed and unperturbed galaxies. The average values of the η_k,p and Q_Kar,p parameters for each break type are indicated by colour bars, while for the unperturbed and perturbed population, they are shown with symbols. We see that for the galaxies in our sample, the different disc types tend to be located on average in regions of similar density, with these regions in any case having a very low density.
§ DISCUSSION
We present deep optical imaging of a sample of 25 isolated galaxies from the AMIGA project in order to reach low surface brightness limits. The nominal surface brightness limits achieved are μ_r,lim > 29.5 mag arcsec^-2 [3σ; 10^''×10^''], and μ_L,lim > 28.5 mag arcsec^-2 [3σ; 10^''×10^''] for the three galaxies observed with the JRT. Because of our careful data processing, our images show an absence of oversubtracted regions and a high efficiency in the detection of extreme low surface brightness features.
The depth of our data is more than 1 mag arcsec^-2 (r band) deeper than that in preceding studies <cit.>. This gives us the possibility to look for the presence of minor interaction signatures at very low surface brightness. A representative example of this quantitative leap is shown in Fig. <ref> where we compare images of CIG 340 (IC 2487) with SDSS and Legacy Survey data. This comparison clearly shows that only in our data is it possible to detect the clear interaction that CIG 340 is undergoing with a low-mass satellite, appearing as a tidal stream with surface brightness around 26.8-27.3 mag arcsec^-2 and an extension of around ∼ 116 arcsec (35 kpc), not detectable in shallower optical data.
The potential detection of minor interactions has a decisive impact on the interpretation of galaxy properties, and the isolated galaxy CIG 340 is a perfect example. Shallow optical images of CIG 340 revealed a fairly symmetric disc, albeit with a small disc warp. Additionally, the H i-integrated spectra from single-dish observations showed a very symmetric profile <cit.>. More recent high-resolution interferometric observations by <cit.> revealed a striking asymmetry of the H i component of CIG 340, with 6% of the H i mass located in an extension of the disc to the north. These findings led <cit.> to propose two different hypotheses to explain them. On the one hand, this H i asymmetry could be caused by a minor interaction with a satellite, not detected in the optical images available at the time. On the other hand, it could be due to some internal secular process, for example the result of a long-lived dark matter halo asymmetry. More recent work by <cit.> proposed that the gravitational interaction of a background medium of dark matter particles in the surroundings of CIG 340 is capable of inducing a dynamical friction enough to cause the H i asymmetries observed by <cit.>. In light of the results of our work, we can affirm that the H i asymmetries observed in the isolated galaxy CIG 340 are most likely caused by a minor interaction, the signatures of which we have unveiled for the first time.
The galaxy CIG 96 (NGC 864) is another well-studied isolated galaxy. Recent work detected two H i asymmetries in the North-West and South-East regions of the galaxy <cit.>, not detected in the optical range. The main hypotheses to explain the H i features of CIG 96 are possible accreted companions and cold gas accretion. However, <cit.> ruled out the possibility of major merger events due to the isolation criteria (which discard possible interactions in the last 2.7 Gyr). Despite the high degree of isolation of this galaxy, with our deep data we are able to detect a larger faint external halo in the galaxy, which, along with its Type III profile, suggests that this galaxy might have experienced a recent merger event causing the extended emission of light <cit.>.
The possibility of achieving surface brightness levels as low as the ones shown here offers a new decisive parameter to explain some morphological features in galaxies. This would not only be useful to investigate the main reasons for H i asymmetries of galaxies, but also in other fields such as the possible induction of active galactic nuclei (AGN) by minor interactions in galaxies, among many others. These issues will be investigated in future works.
The types of discs in the isolated galaxies from our sample show significant differences with respect to the results of previously studied samples in other environments. We find a significantly higher fraction of Type I and a significantly lower fraction of Type III profiles than in works by <cit.> and <cit.> for denser environments. This is a striking result which we further discuss now.
Type III profiles can be produced by mergers of galaxies <cit.>. The low number of Type III profiles in our sample of isolated galaxies is in agreement with this statement since undoubtedly a lower density would imply a lower merger ratio. Additionally and importantly, the only two Type III galaxies found in our sample (CIG 33 and CIG 96) show disturbed morphology with a puffed-up external halo, compatible with a recent major merger <cit.>.
Type II galaxies are widely agreed to be the result of discs with breaks caused by a star formation threshold <cit.>. A cessation or decrease in star formation would tend to homogenise through stellar migration effects <cit.> the stellar populations and produce a Type I profile <cit.>. In our case this is hardly testable directly due to the absence of star formation measurements in our sample. However, an indirect hint that could indicate that this is the case is the considerably higher fraction of perturbed Type II (42%±19%) galaxies when compared to Type I (17%±17%) galaxies. While the rate of star formation may depend on various circumstances such as the rate of pristine gas inflow, it is known that satellite interactions are capable of triggering star formation <cit.>, which would be in accordance with our findings.
The low ratio of perturbations detected in Type I disc galaxies in our results and the lower specific star formation rate in isolated galaxies than in higher-density environments <cit.> suggest that unperturbed galaxies, evolving slowly with a low star formation rate, could explain the high rate of Type I discs in isolated galaxies.
§ CONCLUSIONS
We study a sample of 25 “isolated” galaxies from the AMIGA revised CIG catalogue, which lack major companions, to identify how internal or external processes impact the discs of galaxies. We conduct a diverse observational campaign using the INT, VST, JRT, and archival data from HSC-SSP to obtain unprecedentedly deep images. We measure the surface brightness profiles and classify the galaxies according to their disc break (Type I ≡ single exponential, Type II ≡ down-bending, Type III ≡ up-bending) and to the presence of interaction signatures (Tidal Stream, Halo perturbation, and unperturbed). The conclusions of our work are the following:
* Our images have a depth similar to that expected from future surveys, such as LSST, thanks to careful data processing and background subtraction. The nominal surface brightness limit of the images is μ_r,lim > 29.5 mag arcsec^-2 [3σ; 10^''×10^'']. The data processing is optimised to preserve low surface brightness features.
* As a result of the depth obtained, we can trace interaction signatures in galaxies classified as isolated (see Figure <ref>). However, five galaxies are affected by cirrus that prevents us from exploring the presence of signatures, so they are excluded from this analysis. We find that 25% ± 10% of the galaxies in our sample show an asymmetric halo and 20% ± 10% show a tidal stream (see Figure <ref>). In total, 40% ± 14% show signs of interaction.
* We are able to produce reliable surface brightness profiles down to a critical surface brightness of ≳ 30 mag arcsec^-2 for all of the galaxies in our sample (see Appendix <ref>).
* We successfully classified the disc type in all the galaxies in our sample. We identify nine (36%± 5%) Type I discs with no significant break, fourteen (56%± 7%) Type II down-bending discs, and two (8%± 3%) Type III up-bending discs.
* The fraction of perturbed galaxies correlates with the type of disc. We identify 17%±17%, 42%±19% , and 100%±71% perturbed galaxies with Type I, II, and III discs, respectively.
* We find significantly higher Type I and lower Type III frequencies with respect to other studies <cit.>, with more perturbed galaxies among Types II and III than among Type I. This is in agreement with a proposed formation scenario in which Type III discs are formed via interactions such as major mergers <cit.>, while Type II discs stem from a star formation threshold. The increased fraction of Type I discs with respect to other samples could be attributed to the low rate of disturbance in our sample of isolated galaxies.
In the near future, the advent of the next generation of optical and infrared surveys (e.g., LSST, Euclid) will increase by several orders of magnitude the number of galaxies observed at surface brightness limits equivalent to those presented in this work. This will allow us to further strengthen and refine our conclusions, provided the data reduction and analysis allow for the detection of low surface brightness features.
We thank Ignacio Trujillo for helpful insights about this work and Aaron Watkins for providing us with the implementation of the automatic break detection method.
PMSA, JHK, and JR acknowledge financial support from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant "The structure and evolution of galaxies and their central regions" with reference PID2019-105602GBI00/10.13039/501100011033, from the ACIISI, Consejería de Economía, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under grant with reference PROID2021010044, and from IAC project P/300724, financed by the Ministry of Science and Innovation, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community.
JR acknowledges funding from University of La Laguna through the Margarita Salas Program from the Spanish Ministry of Universities ref. UNI/551/2021-May 26, and under the EU Next Generation.
LVM acknowledges financial support from grants CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033, RTI2018-096228-B-C31 and PID2021-123930OB-C21 by MCIN/AEI/ 10.13039/501100011033, by “ERDF A way of making Europe” and by the "European Union" and from IAA4SKA (R18-RT-3082) funded by the Economic Transformation, Industry, Knowledge and Universities Council of the Regional Government of Andalusia and the European Regional Development Fund from the European Union.
SC acknowledges funding from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant “Thick discs, relics of the infancy of galaxies" with reference PID2020-113213GA-I00. MAF acknowledges support from FONDECYT iniciación project 11200107 and the Emergia program (EMERGIA20_38888) from Consejería de Transformación Económica, Industria, Conocimiento y Universidades and University of Granada.
PMSA and LVM acknowledge the Spanish Prototype of an SRC (SPSRC) service and support funded by the Spanish Ministry of Science, Innovation and Universities, by the Regional Government of Andalusia, by the European Regional Development Funds and by the European Union NextGenerationEU/PRTR.
The SPSRC acknowledges financial support from the State Agency for Research of the Spanish MCIU through the "Center of Excellence Severo Ochoa" award to the Instituto de Astrofísica de Andalucía (SEV-2017-0709) and from the grant CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033.
Based on observations made with the Isaac Newton Telescope operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. The WFC imaging was obtained as part of the programs C163, C106, and C106/13B.
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme(s) 098.B-0775(A), 093.B-0894(A).
Based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan.
aa
[Abell(1958)]Abell58 Abell, G. O. 1958, , 3, 211. doi:10.1086/190036
[Abell et al.(1989)]Abell89 Abell, G. O., Corwin, H. G., & Olowin, R. P. 1989, , 70, 1. doi:10.1086/191333
[Abraham & van Dokkum(2014)]Dragonfly Abraham, R. G. & van Dokkum, P. G. 2014, , 126, 55. doi:10.1086/674875
[Aihara et al.(2018)]HSC-SSP Aihara, H., Arimoto, N., Armstrong, R., et al. 2018, , 70, S4. doi:10.1093/pasj/psx066
[Aihara et al.(2019)]HSC-SSP-2 Aihara, H., AlSayyad, Y., Ando, M., et al. 2019, , 71, 114. doi:10.1093/pasj/psz103
[Akhlaghi & Ichikawa(2015)]gnuastro Akhlaghi, M. & Ichikawa, T. 2015, , 220, 1. doi:10.1088/0067-0049/220/1/1
[Akhlaghi(2019)]segment Akhlaghi, M. 2019, arXiv:1909.11230
[Alonso et al.(2004)]2004MNRAS.352.1081A Alonso, M. S., Tissera, P. B., Coldwell, G., et al. 2004, , 352, 1081. doi:10.1111/j.1365-2966.2004.08002.x
[Arakelian & Magtesian(1981)]1981Afz....17...53A Arakelian, M. A. & Magtesian, A. P. 1981, Astrofizika, 17, 53
[Argudo-Fernández et al.(2013)]Argudo-Fernandez13 Argudo-Fernández, M., Verley, S., Bergond, G., et al. 2013, , 560, A9. doi:10.1051/0004-6361/201321326
[Astropy Collaboration et al.(2013)]astropy13 Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, , 558, A33. doi:10.1051/0004-6361/201322068
[Astropy Collaboration et al.(2018)]astropy18 Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, , 156, 123. doi:10.3847/1538-3881/aabc4f
[Abazajian et al.(2009)]2009ApJS..182..543A Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2009, , 182, 543. doi:10.1088/0067-0049/182/2/543
[Azzollini et al. (2008)]Azzollini08 Azzollini, R.; Trujillo, I.; Beckman, J. E. 2008, ApJ, 679L, 69A
[Bakos et al.(2008)]Bakos08 Bakos, Judit; Trujillo, Ignacio; Pohlen, Michael 2008, ApJ, 683L, 103B
[Bakos & Trujillo(2012)]Bakos12 Bakos, Judit; Trujillo, Ignacio 2012, arXiv1204.3082B
[Baldry et al.(2006)]2006MNRAS.373..469B Baldry, I. K., Balogh, M. L., Bower, R. G., et al. 2006, , 373, 469. doi:10.1111/j.1365-2966.2006.11081.x
[Balogh et al.(2004)]2004ApJ...615L.101B Balogh, M. L., Baldry, I. K., Nichol, R., et al. 2004, , 615, L101. doi:10.1086/426079
[Barton et al.(2000)]2000ApJ...530..660B Barton, E. J., Geller, M. J., & Kenyon, S. J. 2000, , 530, 660. doi:10.1086/308392
[Barrera-Ballesteros et al.(2015)]2015A A...582A..21B Barrera-Ballesteros, J. K., García-Lorenzo, B., Falcón-Barroso, J., et al. 2015, , 582, A21. doi:10.1051/0004-6361/201424935
[Bertin & Arnouts(1996)]Bertin1996 Bertin, E. & Arnouts, S. 1996, , 117, 393. doi:10.1051/aas:1996164
[Bertin(2006)]2006ASPC..351..112B Bertin, E. 2006, Astronomical Data Analysis Software and Systems XV, 351, 112
[Borlaff et al.(2014)]2014A A...570A.103B Borlaff, A., Eliche-Moral, M. C., Rodríguez-Pérez, C., et al. 2014, , 570, A103. doi:10.1051/0004-6361/201424299
[Borlaff et al.(2018)]Borlaff18 Borlaff, A., Eliche-Moral, M. C., Beckman, J. E., et al. 2018, , 615, A26. doi:10.1051/0004-6361/201732090
[Borlaff et al.(2019)]Borlaff19 Borlaff, A., Trujillo, I., Román, J., et al. 2019, , 621, A133. doi:10.1051/0004-6361/201834312
[Bosma(2017)]2017ASSL..434..209B Bosma, A. 2017, Outskirts of Galaxies, 434, 209. doi:10.1007/978-3-319-56570-5_7
[Dark Energy Survey Collaboration et al.(2016)]2016MNRAS.460.1270D Dark Energy Survey Collaboration, Abbott, T., Abdalla, F. B., et al. 2016, , 460, 1270. doi:10.1093/mnras/stw641
[Debattista et al.(2006)]Debattista06 Debattista, Victor P.; Mayer, Lucio; Carollo, C. Marcella; Moore, Ben; Wadsley, James; Quinn, Thomas 2006, ApJ, 645, 209D
[Domínguez-Gómez et al.(2023)]Cavity Domínguez-Gómez, J., Pérez, I., Ruiz-Lara, T., et al. Nature (2023). https://doi.org/10.1038/s41586-023-06109-1
[Duc et al.(2015)]2015MNRAS.446..120D Duc, P.-A., Cuillandre, J.-C., Karabal, E., et al. 2015, , 446, 120. doi:10.1093/mnras/stu2019
[de Vaucouleurs(1958)]1958ApJ...128..465D de Vaucouleurs, G. 1958, ApJ, 128, 465D
[Eliche-Moral et al.(2015)]Eliche-Moral15 Eliche-Moral, M. C., Borlaff, A., Beckman, J. E., et al. 2015, , 580, A33. doi:10.1051/0004-6361/201424692
[Erwin et al.(2005)]Erwin05 Erwin, Peter; Beckman, John E.; Pohlen, Michael 2005, ApJ, 626L, 81E
[Erwin et al.(2012)]Erwin12 Erwin, Peter; Gutiérrez, Leonel; Beckman, John E. 2012, ApJ, 744L, 11E
[Espada et al.(2011)]2011A A...532A.117E Espada, D., Verdes-Montenegro, L., Huchtmeier, W. K., et al. 2011, , 532, A117. doi:10.1051/0004-6361/201016117
[Fliri & Trujillo(2016)]2016MNRAS.456.1359F Fliri, J. & Trujillo, I. 2016, , 456, 1359. doi:10.1093/mnras/stv2686
[Freeman(1970)]Freeman70 Freeman, K. C. 1970, , 160, 811. doi:10.1086/150474
[Gilhuly et al.(2022)]2022ApJ...932...44G Gilhuly, C., Merritt, A., Abraham, R., et al. 2022, , 932, 44. doi:10.3847/1538-4357/ac6750
[Gutiérrez et al.(2011)]Gutierrez11 Gutiérrez, L., Erwin, P., Aladro, R., et al. 2011, , 142, 145. doi:10.1088/0004-6256/142/5/145
[Haigh et al.(2021)]2021A A...645A.107H Haigh, C., Chamba, N., Venhola, A., et al. 2021, , 645, A107. doi:10.1051/0004-6361/201936561
[Herpich et al.(2017)]2017MNRAS.470.4941H Herpich, J., Stinson, G. S., Rix, H.-W., et al. 2017, , 470, 4941. doi:10.1093/mnras/stx1511
[Hickson(1982)]Hickson82 Hickson, P. 1982, , 255, 382. doi:10.1086/159838
[Huang & Fan(2022)]2022ApJS..262...39H Huang, Q. & Fan, L. 2022, , 262, 39. doi:10.3847/1538-4365/ac85b1
[Huchra & Thuan(1977)]1977ApJ...216..694H Huchra, J. & Thuan, T. X. 1977, , 216, 694. doi:10.1086/155511
[Infante-Sainz et al.(2020)]2020MNRAS.491.5317I Infante-Sainz, R., Trujillo, I., & Román, J. 2020, , 491, 5317. doi:10.1093/mnras/stz3111
[Jablonka et al.(2010)]2010A A...513A..78J Jablonka, P., Tafelmeyer, M., Courbin, F., et al. 2010, , 513, A78. doi:10.1051/0004-6361/200913320
[Jedrzejewski(1987)]Jedrzejewski Jedrzejewski, R. I. 1987, , 226, 747. doi:10.1093/mnras/226.4.747
[Jiang et al.(2014)]2014ApJS..213...12J Jiang, L., Fan, X., Bian, F., et al. 2014, , 213, 12. doi:10.1088/0067-0049/213/1/12
[Johnston et al.(2001)]2001ApJ...557..137J Johnston, K. V., Sackett, P. D., & Bullock, J. S. 2001, , 557, 137. doi:10.1086/321644
[Johnston et al.(2008)]2008ApJ...689..936J Johnston, K. V., Bullock, J. S., Sharma, S., et al. 2008, , 689, 936. doi:10.1086/592228
[Jones et al.(2018)]2018A A...609A..17J Jones, M. G., Espada, D., Verdes-Montenegro, L., et al. 2018, , 609, A17. doi:10.1051/0004-6361/201731448
[Karabal et al.(2017)]2017A A...601A..86K Karabal, E., Duc, P.-A., Kuntschner, H., et al. 2017, , 601, A86. doi:10.1051/0004-6361/201629974
[Karachentsev(1972)]Karachentsev72 Karachentsev, I. D. 1972, Soobshcheniya Spetsial'noj Astrofizicheskoj Observatorii, 7, 1
[Karachentseva(1973)]1973SoSAO...8....3K Karachentseva, V. E. 1973, Soobshcheniya Spetsial'noj Astrofizicheskoj Observatorii, 8, 3
[Karachentseva et al.(1979)]Karachentseva79 Karachentseva, V. E., Karachentsev, I. D., & Shcherbanovsky, A. L. 1979, Astrofizicheskie Issledovaniia Izvestiya Spetsial'noj Astrofizicheskoj Observatorii, 11, 3
[Kipper et al.(2020)]Kipper20 Kipper, R., Benito, M., Tenjes, P., et al. 2020, , 498, 1080. doi:10.1093/mnras/staa2486
[Knapen et al.(2000)]Knapen2000 Knapen, J. H., Shlosman, I., & Peletier, R. F. 2000, , 529, 93. doi:10.1086/308266
[Knapen et al.(2015)]2015MNRAS.454.1742K Knapen, J. H., Cisternas, M., & Querejeta, M. 2015, , 454, 1742. doi:10.1093/mnras/stv2135
[Kormendy & Kennicutt(2004)]2004ARA A..42..603K Kormendy, J. & Kennicutt, R. C. 2004, , 42, 603. doi:10.1146/annurev.astro.42.053102.134024
[Laine et al.(2002)]Laine02 Laine, S., Shlosman, I., Knapen, J. H., et al. 2002, , 567, 97. doi:10.1086/323964
[Laine & Gottesman(1998)]1998MNRAS.297.1041L Laine, S. & Gottesman, S. T. 1998, , 297, 1041. doi:10.1046/j.1365-8711.1998.01513.x
[Laine et al.(2014)]Laine14 Laine, S., Knapen, J. H., Muñoz-Mateos, J.-C., et al. 2014, , 444, 3015. doi:10.1093/mnras/stu1642
[Lang et al.(2010)]2010AJ....139.1782L Lang, D., Hogg, D. W., Mierle, K., et al. 2010, , 139, 1782. doi:10.1088/0004-6256/139/5/1782
[Laurikainen & Salo(2001)]2001MNRAS.324..685L Laurikainen, E. & Salo, H. 2001, , 324, 685. doi:10.1046/j.1365-8711.2001.04347.x
[Lisenfeld et al.(2011)]2011A A...534A.102L Lisenfeld, U., Espada, D., Verdes-Montenegro, L., et al. 2011, , 534, A102. doi:10.1051/0004-6361/201117056
[Lotz et al.(2008)]Lotz08 Lotz, J. M., Jonsson, P., Cox, T. J., et al. 2008, , 391, 1137. doi:10.1111/j.1365-2966.2008.14004.x
[Maltby et al(2012)]Maltby12 Maltby, David T.; Gray, Meghan E.; Aragón-Salamanca, Alfonso et al. 2012, MNRAS, 419, 669M
[Martin et al.(2022)]2022MNRAS.513.1459M Martin, G., Bazkiaei, A. E., Spavone, M., et al. 2022, , 513, 1459. doi:10.1093/mnras/stac1003
[Martínez-Delgado et al.(2015)]2015AJ....150..116M Martínez-Delgado, D., D'Onghia, E., Chonis, T. S., et al. 2015, , 150, 116. doi:10.1088/0004-6256/150/4/116
[Martínez-Delgado(2019)]Martinez-Delgado19 Martínez-Delgado, D. 2019, Highlights on Spanish Astrophysics X, 146
[Martínez-Delgado et al.(2023)]2023A A...671A.141M Martínez-Delgado, D., Cooper, A. P., Román, J., et al. 2023, , 671, A141. doi:10.1051/0004-6361/202245011
[Martínez-Lombilla & Knapen(2019)]2019A A...629A..12M Martínez-Lombilla, C. & Knapen, J. H. 2019, , 629, A12. doi:10.1051/0004-6361/201935464
[Martín-Navarro et al(2012)]Martin-Navarro12 Martín-Navarro, Ignacio; Bakos, Judit; Trujillo, Ignacio et al. 2012, MNRAS, 427, 1102M
[Martínez-Serrano et al(2009)]MS09 Martínez-Serrano, F. J.; Serna, A.; Doménech-Moral, M.; Domínguez-Tenreiro, R. 2009, ApJ, 705L, 133M
[Melnyk et al.(2015)]2015MNRAS.451.1482M Melnyk, O., Karachentseva, V., & Karachentsev, I. 2015, , 451, 1482. doi:10.1093/mnras/stv950
[Mesa et al.(2021)]2021MNRAS.501.1046M Mesa, V., Alonso, S., Coldwell, G., et al. 2021, , 501, 1046. doi:10.1093/mnras/staa3720
[Mihos et al.(2015)]2015ApJ...809L..21M Mihos, J. C., Durrell, P. R., Ferrarese, L., et al. 2015, , 809, L21. doi:10.1088/2041-8205/809/2/L21
[Mihos et al.(2017)]2017ApJ...834...16M Mihos, J. C., Harding, P., Feldmeier, J. J., et al. 2017, , 834, 16. doi:10.3847/1538-4357/834/1/16
[Mihos(2019)]2019arXiv190909456M Mihos, J. C. 2019, arXiv:1909.09456
[Muñoz-Mateos et al.(2015)]Mateos15 Muñoz-Mateos, J. C., Sheth, K., Regan, M., et al. 2015, , 219, 3. doi:10.1088/0067-0049/219/1/3
[Patterson(1940)]1940BHarO.914....9P Patterson, F. S. 1940, Harvard College Observatory Bulletin, 914, 9
[Pfeffer et al.(2022)]2022MNRAS.509..261P Pfeffer, J. L., Bekki, K., Forbes, D. A., et al. 2022, , 509, 261. doi:10.1093/mnras/stab2934
[Planck Collaboration et al.(2020)]Planck18 Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, , 641, A6. doi:10.1051/0004-6361/201833910
[Pranger et al.(2017)]Pranger17 Pranger, F., Trujillo, I., Kelvin, L. S., et al. 2017, , 467, 2127. doi:10.1093/mnras/stx199
[Pohlen & Trujillo(2006)]Pohlen06 Pohlen, M.; Trujillo, I. 2006, A&A, 454, 759P
[Pohlen et al.(2002)]Pohlen02 Pohlen, M.; Dettmar, R. -J.; Lütticke, R.; Aronica, G. 2002, A&A, 392, 807P
[Ramírez-Moreta et al.(2018)]Ramirez-Moreta18 Ramírez-Moreta, P., Verdes-Montenegro, L., Blasco-Herrera, J., et al. 2018, , 619, A163. doi:10.1051/0004-6361/201833333
[Román & Trujillo(2018)]2018RNAAS...2..144R Román, J. & Trujillo, I. 2018, Research Notes of the American Astronomical Society, 2, 144. doi:10.3847/2515-5172/aad8b8
[Román et al.(2020)]Roman20 Román, J., Trujillo, I., & Montes, M. 2020, , 644, A42. doi:10.1051/0004-6361/201936111
[Román et al.(2021)]2021A A...656A..44R Román, J., Castilla, A., & Pascual-Granado, J. 2021, , 656, A44. doi:10.1051/0004-6361/202142161
[Román et al.(2023)]2023arXiv230503073R Román, J., Rich, R. M., Ahvazi, N., et al. 2023, arXiv:2305.03073. doi:10.48550/arXiv.2305.03073
[Roškar et al. (2008)]Roskar08 Roškar, Rok; Debattista, Victor P.; Stinson, Gregory S.; Quinn, Thomas R. et al. 2008, ApJ, 675L, 65R
[Ruiz-Lara et al.(2016)]2016MNRAS.456L..35R Ruiz-Lara, T., Pérez, I., Florido, E., et al. 2016, , 456, L35. doi:10.1093/mnrasl/slv174
[Sánchez-Alarcón & Ascasibar (2023)]fabada Pablo M Sánchez-Alarcón, Yago Ascasibar, RAS Techniques and Instruments, Volume 2, Issue 1, January 2023, Pages 129–141, doi.org/10.1093/rasti/rzad006
[Sánchez-Blázquez et al. (2009)]SB09 Sánchez-Blázquez, P.; Courty, S.; Gibson, B. K.; Brook, C. B. 2009, MNRAS, 398, 591
[Sandin(2014)]2014A A...567A..97S Sandin, C. 2014, , 567, A97. doi:10.1051/0004-6361/201423429
[Schlafly & Finkbeiner (2011)]2011ApJ...737..103S Schlafly, E. F. & Finkbeiner, D. P. 2011, , 737, 103. doi:10.1088/0004-637X/737/2/103
[Schwarzkopf & Dettmar(2001)]2001A A...373..402S Schwarzkopf, U. & Dettmar, R.-J. 2001, , 373, 402. doi:10.1051/0004-6361:20010548
[Scott et al.(2014)]Scott14 Scott, T. C., Sengupta, C., Verdes Montenegro, L., et al. 2014, , 567, A56. doi:10.1051/0004-6361/201423701
[Staudaher et al.(2019)]2019MNRAS.486.1995S Staudaher, S. M., Dale, D. A., & van Zee, L. 2019, , 486, 1995. doi:10.1093/mnras/stz935
[Slater et al.(2009)]2009PASP..121.1267S Slater, C. T., Harding, P., & Mihos, J. C. 2009, , 121, 1267. doi:10.1086/648457
[Sulentic et al.(2006)]2006A A...449..937S Sulentic, J. W., Verdes-Montenegro, L., Bergond, G., et al. 2006, , 449, 937. doi:10.1051/0004-6361:20054020
[Tang et al.(2020)]2020ApJ...897...79T Tang, Y., Chen, Q., Zhang, H.-X., et al. 2020, , 897, 79. doi:10.3847/1538-4357/ab98fd
[Thomas et al.(2005)]2005ApJ...621..673T Thomas, D., Maraston, C., Bender, R., et al. 2005, , 621, 673. doi:10.1086/426932
[Trujillo & Fliri(2016)]Trujillo16 Trujillo, I. & Fliri, J. 2016, , 823, 123. doi:10.3847/0004-637X/823/2/123
[Trujillo et al.(2021)]2021A A...654A..40T Trujillo, I., D'Onofrio, M., Zaritsky, D., et al. 2021, , 654, A40. doi:10.1051/0004-6361/202141603
[van der Kruit(1979)]Kruit79 van der Kruit, P. C. 1979, A&AS, 38, 15V
[van der Kruit & Searle(1981)]Kruit81 van der Kruit, P. C.; Searle, L. 1981, A&A, 95, 105V
[van der Kruit & Searle(1981b)]Kruit81b van der Kruit, P. C.; Searle, L. 1981, A&A, 95, 116V
[Varela et al.(2004)]2004A A...420..873V Varela, J., Moles, M., Márquez, I., et al. 2004, , 420, 873. doi:10.1051/0004-6361:20035697
[Verdes-Montenegro et al.(2005)]2005A A...436..443V Verdes-Montenegro, L., Sulentic, J., Lisenfeld, U., et al. 2005, , 436, 443. doi:10.1051/0004-6361:20042280
[Verley et al.(2007a)]2007A A...470..505V Verley, S., Odewahn, S. C., Verdes-Montenegro, L., et al. 2007, , 470, 505. doi:10.1051/0004-6361:20077307
[Verley et al.(2007b)]Verley-Iso Verley, S., Leon, S., Verdes-Montenegro, L., et al. 2007, , 472, 121. doi:10.1051/0004-6361:20077481
[Wang et al.(2018)]2018MNRAS.479.4292W Wang, J., Zheng, Z., D'Souza, R., et al. 2018, , 479, 4292. doi:10.1093/mnras/sty1687
[Watkins et al.(2016)]Watkins16 Watkins, A. E., Mihos, J. C., & Harding, P. 2016, , 826, 59. doi:10.3847/0004-637X/826/1/59
[Watkins et al.(2019)]Watkins19 Watkins, A. E., Laine, J., Comerón, S., et al. 2019, , 625, A36. doi:10.1051/0004-6361/201935130
[York et al.(2000)]SDSSYork2000 York, D. G., Adelman, J., Anderson, J. E., et al. 2000, , 120, 1579. doi:10.1086/301513
[Zernike(1934)]Zernike34 Zernike, F. 1934, Physica, 1, 689. doi:10.1016/S0031-8914(34)80259-5
[Zheng et al.(2015)]Zheng15 Zheng, Z., Thilker, D. A., & Heckman, T. M. 2015, ApJ, 800, 120
§ SEXTRACTOR CONFIGURATION PARAMETERS.
The parameters that were not set to their default values are:
§ IMAGES, MASKS, AND SURFACE BRIGHTNESS RADIAL PROFILES.
|
http://arxiv.org/abs/2307.01375v1
|
20230703220251
|
Quantum theory of single-photon nonlinearities generated by ensembles of emitters
|
[
"Kurt Jacobs",
"Stefan Krastanov",
"Mikkel Heuck",
"Dirk R. Englund"
] |
quant-ph
|
[
"quant-ph"
] |
United States Army Research Laboratory, Adelphi, Maryland 20783, USA
Department of Physics, University of Massachusetts at Boston, Boston, Massachusetts 02125, USA
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Electrical and Photonics Engineering, Technical University of Denmark, 2800 Lyngby, Denmark
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
The achievement of sufficiently fast interactions between two optical fields at the few-photon level would provide a key enabler for a broad range of quantum technologies. One critical hurdle in this endeavor is the lack of a comprehensive quantum theory of the generation of nonlinearities by ensembles of emitters. Distinct approaches applicable to different regimes have yielded important insights: i) a semiclassical approach reveals that, for many-photon coherent fields, the contributions of independent emitters add independently allowing ensembles to produce strong optical nonlinearities via EIT; ii) a quantum analysis has shown that in the few-photon regime collective coupling effects prevent ensembles from inducing these strong nonlinearities. Rather surprisingly, experimental results with around twenty photons are in line with the semi-classical predictions. Theoretical analysis has been fragmented due to the difficulty of treating nonlinear many-body quantum systems. Here we are able to solve this problem by constructing a powerful theory of the generation of optical nonlinearities by single emitters and ensembles. The key to this construction is the application of perturbation theory to perturbations generated by subsystems. This theory reveals critical properties of ensembles that have long been obscure. The most remarkable of these is the discovery that quantum effects prevent ensembles generating single-photon nonlinearities only within the rotating-wave regime; outside this regime single-photon nonlinearities scale as the number of emitters. The theory we present here also provides an efficient way to calculate nonlinearities for arbitrary multi-level driving schemes, and we expect that it will prove a powerful foundation for further advances in this area.
Quantum theory of single-photon nonlinearities generated by ensembles of emitters
Dirk R. Englund
August 1, 2023
=================================================================================
§ INTRODUCTION
The realization of optical nonlinearities strong enough to perform logic operations at the single photon level is a long-standing goal in quantum information science and technology. Such “giant” nonlinearities would have a range of applications including fast all-optical classical and quantum information processing <cit.> and the production of complex non-classical states for quantum sensing <cit.>.
Atoms and other emitters will induce nonlinearities for electromagnetic fields when the frequencies of the fields are sufficiently off-resonant with the atomic transitions that the effect of the light on the emitters is perturbative <cit.>. Under this condition, the atomic polarization is given by a power series in the field amplitudes. Each term in this power series is an induced nonlinearity of a given order. There is a fundamental limitation to the strength of these nonlinearities. Since the effect of the light on the emitter must be perturbative the emitter/field coupling, g, must be small compared to the detuning, Δ, between the light sources and the emitter transitions to which they couple. As we will see below, terms in the perturbation expansion corresponding to an n^th-order nonlinearity are proportional to g ϵ^n with ϵ≡ g/Δ≪ 1. All nonlinearities generated by a single atom are therefore much smaller than the emitter/field coupling rate, g, and decrease exponentially with n, the order of the nonlinearity. For an ensemble of N independent emitters, a semiclassical analysis, valid for sufficiently strong coherent fields, shows that the non-linearities induced by each emitter simply sum together to produce N times the nonlinearity of a single emitter. This property allows ensembles to circumvent the above bound and generate “giant" nonlinearities <cit.>. By employing the technique of electromagnetically induced transparency (EIT) the nonlinearities can be generated without inducing significant dissipation for the fields <cit.>.
In 1997 Imamoḡlu et al. asked whether an ensemble of N atoms could similarly be used to generate giant nonlinearities for a single photon in a cavity <cit.>. Due to the difficulty of treating the quantum dynamics of a dissipative ensemble of emitters coupled to a cavity mode all analyses performed at the time involved reasoning from a simplified version of the system. It was nevertheless concluded that the answer was negative, due at least partially to the sharpness of the EIT linewidth as compared to that of an optical cavity <cit.>. Since neither of these linewidths is an essential part of the emitter/field interaction this work left open the fundamental question as to whether ensembles could generate single-photon nonlinearities.
In 2006 Hartmann, Brandao, and Plenio (HBP) accomplished an exact quantum treatment of the generation of an optical Kerr nonlinearity by an ensemble <cit.>. They did so by removing the complication caused by the atomic and cavity damping, considering the symmetric subspace of the ensemble (appropriate so long as the atomic damping is sufficiently small), and performing a Dyson expansion to determine the perturbative effect of the field on this subspace. They obtained a very different result than that derived in the semi-classical regime; due to the collective coupling to the ensemble, the coupling rate between the ensemble ground state and the first collective excited state is given by g̃ = √(N)g, where N is the number of emitters and g is the coupling for a single emitter. Since the perturbative regime requires that g is much less than the detuning, the maximum allowed value for g reduces as 1/√(N). This reduction of g exactly balances the increase in the Kerr nonlinearity as the number of emitters is increased. The maximum rate of the nonlinearity generated by the ensemble is thus no more than that which can be generated by a single emitter.
While the 2006 paper by HBP is famous, the results discussed above do not appear to be widely known <cit.>. In 2013 Venkataraman, Saha, and Gaeta (VSG) performed an experiment in which they realized a Kerr nonlinearity using an ensemble with an average of 20 photons, and showed that their results were consistent with the standard semiclassical analysis. In 2019 Trivedi et al. considered the generation of nonlinearities by an ensemble in a cavity at the single-photon level using a recently developed numerical technique <cit.>. While their results were not inconsistent with those of HBP, they did not determine how the nonlinearity scales with the size of the ensemble.
Here we introduce a comprehensive, fully quantum mechanical theory for the generation of nonlinearities by single emitters and ensembles of identical independent emitters. This theory provides insight into the physics underlying the generation of nonlinearities and allows us to immediately answer a number of the outstanding questions. We expect that the theory will provide a crucial foundation for answering important questions that remain. As we will show, our theory also furnishes a powerful method for calculating the nonlinearities generated by emitters, and thus for designing driving schemes to engineer nonlinear processes.
In addition to the theory itself and the associated methods, our primary results are as follows. First, we confirm that in the few-photon regime the nonlinearities generated by ensembles are severely limited by the collective coupling, and we are able to show how the very different semi-classical behavior emerges when the fields are in coherent states with sufficiently high amplitude. Second, we show surprisingly that ensembles of independent emitters can generate nonlinearities that increase in strength with the number of emitters. Outside the rotating-wave regime (when the emitter/field coupling is sufficiently strong, but still small compared to the transition frequencies) the transition frequencies themselves take the place of the detunings in the perturbation theory. In this case, the emitters effectively act independently, the nonlinearities scale with the number of emitters, and very large nonlinearities can be generated. The mechanism that enables this scaling is essentially the same as that which does so in the semi-classical regime.
Given the above results, it is clear that the ability of ensembles to generate giant nonlinearities at low photon numbers in the rotating wave regime will depend on precisely where (at what photon number) the transition to semi-classical behavior occurs. We do not resolve this question here, but we expect that it can be answered with the tools we have developed.
§.§ Structure of this article
In Section <ref> we introduce the Hamiltonian for an ensemble of independent multilevel emitters interacting with one or more cavity modes, as well as the form this Hamiltonian takes under the rotating-wave approximation. In Sections <ref> and <ref> we briefly review multi-parameter time-independent perturbation theory (TIPT), present the recursion relations that determine the expansion coefficients, and introduce some useful short-hand notation. In Section <ref> we show how TIPT can be extended to perturbations that involve operators of an external subsystem. We then show, in Section <ref>, that the expansion for the perturbed ground-state “eigenvalue" in this extended version of TIPT gives precisely the effective Hamiltonian for the external perturbing system. In our case it is the field mode(s) that are perturbing the emitter and for which the effective nonlinear Hamiltonian is induced. In Sections <ref> and <ref> we discuss, respectively, limits on the atom/field coupling rates and the frequency-matching conditions for the generated nonlinearities. In Section <ref> we show how to derive the full effective master equation for the field; due to decay of the emitter levels the emitter induces dissipation for the field in addition to the effective nonlinear Hamiltonian. In Section <ref> we elucidate the fact that the initial state of the emitter and field is important in generating the nonlinearities, and this restricts the speed at which the field can be changed. In Section <ref>, as an example we use our method to calculate the cross-Kerr nonlinearity generated by the Schmidt-Imamoḡlu scheme employing a single 4-level emitter. With that we complete our development of the theory of the generation of nonlinearities by a single emitter.
In Section <ref> we are finally ready for our ultimate goal, that of calculating the nonlinearities generated by ensembles. In Section <ref>, to provide insight, we use our method to do this for the simplest example, an ensemble of undriven two-level systems. In Section <ref> we show how to apply the method developed for single emitters to ensembles of emitters. We then use it to show in general how the self-Kerr nonlinearity generated by an ensemble relates to that generated by a single emitter. Next we turn to the pivotal question of how the bound on the coupling rates scales with the size of the ensemble. In Section <ref> we answer this for ensembles in the bare coupling regime, and in Section <ref> for the RWA regime. In Section <ref> we show that in the RWA regime the scaling of the Kerr nonlinearity for two-level systems is quite different from that for the Schmidt-Imamoḡlu scheme. In Section <ref> we show that when the field modes are in coherent states with sufficiently high amplitudes the size of the nonlinearities scales linearly with the size of the ensemble, thus explaining the emergence of this behavior in the semi-classical regime. In Section <ref> we show how to apply our method to calculate nonlinearities for travelling-wave fields and write these as nonlinear susceptibilities. In Section <ref> we point out that the ability of ensembles to generate nonlinearities for weak fields will depend on exactly where the emitter/field system makes the transition to the semiclassical regime. While we do not explore this question further here, we discuss some recent experimental results in this context. Section <ref> concludes with a discussion of open questions.
§ GENERATION OF NONLINEARITIES BY A SINGLE EMITTER
§.§ A Multilevel emitter coupled to field modes
A general emitter has a set of discrete states, |ñ⟩, n=0, 1, 2, …, with energies ℰ̃_n. By convention |0̃⟩ is the ground state. Transitions between emitter states are induced by exchanging energy with the field. While all field modes are coupled to all emitter transitions, only coupling between modes and transitions that are sufficiently close in energy need to be included. For our purposes we need only to have each field mode coupled to a single transition, although the method we introduce can certainly be applied in the general case. If we have L modes with mode operators a_l, l = 1, …, L, each of which is coupled to a transition |ñ_l⟩↔|k̃_l⟩, under the usual dipole approximation the Hamiltonian is <cit.>
H = H̃_0 + ∑_l ħ ( g_l a_l + g_l^* a_l^† ) (σ_l + σ_l^†) + H_f
with
H̃_0 = ∑_ñℰ̃_n |ñ⟩⟨ñ|,
H_f = ∑_l ħω_l a_l^† a_l
σ_l = |ñ_l⟩⟨k̃_l|,
in which σ_l is called the transition operator for the transition |ñ_l⟩↔|k̃_l⟩. We use the convention that σ_l always denotes a lowering operator, meaning that the energy of |k̃_l⟩ is greater than that of |ñ_l⟩. The reason that we use tildes on some operators and states will become clear below. In short, tildes denote the original emitter Hamiltonian and its eigenbasis and will be removed to denote the emitter Hamiltonian that includes classical driving terms and its eigenbasis.
If a mode coupled to a transition is in a coherent state with amplitude α, with |α|^2 ≫ 1, then the mode operators a and a^† can be replaced by α and α^* (the reason for this is detailed in Section <ref>) so that the mode is eliminated from the dynamics. This results in a coupling between the upper and lower states proportional to α and is referred to as a classical drive. Here we are interested in the situation in which classical drives are applied to various transitions of an emitter while other transitions may be coupled to modes as above. Allowing the first M modes to be classical drives with amplitudes α_l, and defining the Hermitian operators
x̂_j = σ_j + σ_j^† ,
this more general situation is described by the Hamiltonian
H̃_D = H̃_0 + ∑_l = 1^M ħ D_l x̂_l + ∑_l=M+1^Lħ ( g_l a_l + g_l^* a_l^† ) x̂_l + H_f ,
where D_l = g_lα_l + g_l^* α_l^*.
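As a concrete illustration of this construction, the following sketch (not part of the original derivation; all numerical values are hypothetical placeholders, with ħ = 1) builds H̃_D for a three-level Λ-type emitter with one classical drive on the |0⟩↔|2⟩ transition and one quantized mode, truncated at n_max photons, coupled to |1⟩↔|2⟩.

import numpy as np

n_max = 6                          # photon-number cutoff for the quantized mode
E = np.array([0.0, 0.3, 10.0])     # illustrative emitter level energies (hbar = 1)
D = 0.05                           # classical drive amplitude D_l (assumed real)
g = 0.02                           # emitter/mode coupling g_l (assumed real)
omega = 9.7                        # mode frequency

def lowering(n, k, dim=3):
    # transition (lowering) operator sigma = |n><k|, with level k above level n
    s = np.zeros((dim, dim))
    s[n, k] = 1.0
    return s

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)    # mode annihilation operator
I_e, I_f = np.eye(3), np.eye(n_max)

x_drive = lowering(0, 2) + lowering(0, 2).T       # x = sigma + sigma^dag, driven transition
x_mode  = lowering(1, 2) + lowering(1, 2).T       # x for the transition coupled to the mode

H_D = (np.kron(np.diag(E), I_f)                   # emitter Hamiltonian
       + D * np.kron(x_drive, I_f)                # classical drive term  hbar D x
       + g * np.kron(x_mode, a + a.T)             # bare coupling  hbar (g a + g* a^dag) x
       + omega * np.kron(I_e, a.T @ a))           # free field term H_f
print(H_D.shape)                                  # (18, 18) = (3 levels) x (6 Fock states)

The same pattern extends directly to more levels, drives, and modes by adding further terms of the form ħ D_m x̂_m or ħ(g_l a_l + g_l^* a_l^†) x̂_l.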
To write H̃_D in the form appropriate for treating the interaction with the field modes as a perturbation, we now diagonalize the emitter part of the Hamiltonian using a unitary change of basis, U, so that H̃_D becomes
H_B = H_0 + ∑_l=M+1^Lħ ( g_l a_l + g_l^* a_l^† ) Λ_l + H_f ,
where
H_0 = ∑_n E_0^(n)|n_0⟩⟨n_0| = U^†[ H̃_0 + ∑_m=1^Mħ D_m x̂_m ] U
Λ_l = U^†x̂_l U
and U is defined by Eq.(<ref>). The reason that we have labelled the above Hamiltonian with the subscript “B" will be explained below.
While H_B describes a driven emitter interacting with a number of field modes, it is not the Hamiltonian usually considered for optical emitters. For such emitters the transition frequencies are very much larger than the Rabi frequencies or coupling rates to the optical modes. In this case, so long as the frequencies of the classical drives and quantum modes satisfy a “consistency" condition (see below) it is possible to move into the interaction picture so that the emitter Hamiltonian that remains contains the detunings between the fields and the transitions rather than the energies of the levels. Making the rotating-wave approximation then eliminates the time dependence in the interaction Hamiltonian.
To implement the above procedure we return to the undiagonalized version of the emitter/field Hamiltonian, Eq.(<ref>). We also need to determine the reference Hamiltonian for the emitter, H_Ref, to move into the interaction picture with respect to. To proceed we note that the time dependence induced in the transition operator σ_l and field operator a_l by moving to the interaction picture w.r.t an emitter Hamiltonian H̃_Ref≡∑_ñℰ_n |ñ⟩⟨ñ| and the field Hamiltonian H_f are
σ_l(t) = σ_l exp[-iη_l t], η_l = (ℰ_k_l - ℰ_n_l)/ħ ,
a_l(t) = a_l exp[-iω_l t] .
We need to choose the energies ℰ_n so that the frequency of each transition, η_l, is equal to that of the mode to which it is coupled, ω_l. This is not always possible, but if it is we will refer to the set of mode frequencies {ω_l} as consistent.
Moving into the interaction picture w.r.t H_Ref and H_f, and assuming that the set of mode frequencies is consistent, the interaction picture Hamiltonian is
H̃_I = ΔH̃ + ∑_l ħ ( g_l a_l e^-iω_l t + g_l^* a_l^† e^iω_l t ) (σ_l e^-iω_l t + σ_l^† e^iω_l t)
= ΔH̃ + ∑_l ħ ( g_l a_l σ_l^† + g_l^* a_l^†σ_l )
+ ∑_l ħ ( g_l a_l σ_l e^-i2ω_l t + g_l^* a_l^†σ_l^† e^i2ω_l t )
with
ΔH̃ = ∑_ñΔℰ_n |ñ⟩⟨ñ| = ∑_ñ (ℰ̃_n - ℰ_n) |ñ⟩⟨ñ| .
The last term in H̃_I is time-dependent. Since the transition frequencies are much larger than all other timescales in the dynamics (the interaction rates g_l and the detunings Δℰ_n), the rapidly oscillating terms all but cancel themselves out and contribute little to the evolution. The “rotating-wave" approximation involves discarding these terms.
We now allow some of the field modes to contain large coherent states and thus provide classical driving as before. Choosing the first M modes to be the classical drives the emitter-field Hamiltonian is
H̃_I = ΔH̃ + ∑_l=1^Mħ ( β_lσ_l^† + β_l^* σ_l ) + ∑_l=M+1^Lħ ( g_l a_lσ_l^† + g_l^* a_l^†σ_l ) ,
in which β_l = g_l α_l. The Hamiltonian H̃_I is the large-transition-frequency version of H̃_D above, in which an arbitrary emitter is driven by classical fields in a time-independent way and interacts with a number of quantum mechanical field modes. This Hamiltonian describes all current schemes for the generation of optical nonlinearities by driven emitters.
There are two differences between H̃_D and H̃_I. In H̃_I the energy levels of the emitter are replaced by the detunings, Δℰ_n, between the driving fields (or modes) and the emitter transitions. The second difference is that the interaction between the emitter and the field modes has a different form. This will turn out to have a profound effect on the behavior we study here. Note that for the classical driving to have the time-independent form in H̃_I the frequencies of these driving fields must be consistent in the manner defined above. Only if they are can we treat the driving as being time-independent in the frame of the interaction picture. On the other hand, to treat the driving as time-independent in the Schroedinger picture (H̃_D) all we need is that the driving actually be time-independent (meaning that the (complex) amplitudes of the driving fields are constant).
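To make the consistency condition concrete, the short sketch below (an illustrative helper, not taken from the derivation above) checks whether reference energies ℰ_n exist such that every driven or coupled transition satisfies (ℰ_k_l − ℰ_n_l)/ħ = ω_l, by solving the corresponding linear system in a least-squares sense and testing the residual (ħ = 1).

import numpy as np

def is_consistent(transitions, omegas, dim, tol=1e-9):
    # transitions: list of (n_l, k_l) level pairs; omegas: the matching mode frequencies.
    # Solve E_k - E_n = omega for reference energies E in a least-squares sense and
    # report whether an exact solution exists.
    A = np.zeros((len(transitions) + 1, dim))
    b = np.zeros(len(transitions) + 1)
    for row, ((n, k), w) in enumerate(zip(transitions, omegas)):
        A[row, k], A[row, n], b[row] = 1.0, -1.0, w
    A[-1, 0] = 1.0                                 # gauge choice: E_0 = 0
    E, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ E - b) < tol, E

# Lambda scheme (transitions 0<->2 and 1<->2): any pair of frequencies is consistent
print(is_consistent([(0, 2), (1, 2)], [9.7, 9.4], dim=3))
# closed loop 0<->1, 1<->2, 0<->2: consistent only when omega_02 = omega_01 + omega_12
print(is_consistent([(0, 1), (1, 2), (0, 2)], [1.0, 2.0, 3.5], dim=3))

For a Λ scheme any pair of frequencies is consistent, whereas a closed loop of transitions is consistent only if the loop frequencies sum appropriately, as the second example illustrates.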
To put the Hamiltonian H̃_I in the form appropriate for treating the interaction with the field modes as a perturbation, we now diagonalize the emitter part of the Hamiltonian using a unitary transformation U. The result is
H_R = Δ H + ∑_l ħ ( K_l^† a_l + K_l a_l^† )
in which
Δ H = ∑_n E_0^(n)|n_0⟩⟨n_0|
= U^†[ ΔH̃ + ∑_l ħ ( β_lσ_l^† + β_l^* σ_l ) ] U
K_l = g_l U^†σ_l U
and U is defined by Eq.(<ref>).
We now have two different Hamiltonians that describe the interaction of multi-level emitters with the electromagnetic field. In the first, H_B, given in Eq.(<ref>), the energy levels for the emitter are the emitter's actual, or “bare" energy levels, which is the origin of the choice of subscript “B". In the second, H_R, given in Eq.(<ref>), these energy levels are “relative" to those of the driving fields, which is the reason for the subscript “R".
The regime in which an emitter will generate a distinct set of nonlinearities for the field modes is that in which the size of the coupling with each of the field modes is much smaller than the energy separation (the relative separation in the case of H_R) between the two levels of the transitions to which they couple. That is, when
ħ |g_l| ≪ | E_0^(n_l) - E_0^(k_l) | .
In this case, the interaction with the field is a perturbation on the dynamics of the emitter. We will use time-independent perturbation theory to determine the resulting dynamics of the field modes.
§.§ Time-independent perturbation theory (TIPT)
Recall that time-independent perturbation theory (hereafter TIPT) allows one to calculate the eigenvalues and eigenvectors of a Hamiltonian
H = H_0 + λ V
in terms of the eigenvectors and eigenvalues of H_0 and as a power series expansion in the parameter λ. For this expansion to be valid the magnitude of the off-diagonal elements of λ V in the basis of H_0 must be smaller than the separation between the corresponding energy levels of H_0. If the size of the energy level separations are on the order of Δ, and the size of the elements of V_l are on the order of v, then the power series that results is actually a power series in the small parameter λ v/Δ. For future reference it is useful to define the following two terms:
* Expansion parameter: This is a real parameter that multiplies a perturbative term in the Hamiltonian. For H above, λ is an expansion parameter.
* Perturbation parameter: This is a real parameter that gives the relative size of the perturbation with respect to the base Hamiltonian H_0. One usually assumes that this parameter is smaller than unity because, if it is not, the perturbation expansion is not guaranteed to be valid. The perturbation parameter is typically on the order of λ |V|/|H_0|, where |·| denotes an operator norm. For a given expansion parameter, λ, we will denote the corresponding perturbation parameter by ϵ_λ.
To determine the power series for the eigenvectors and eigenvalues of H = H_0 + λ V one assumes that they may be written in the form
E^(n) = E^(n)_0 + ∑_j≥ 1 E^(n)_j λ^j ,
|n⟩ = |n_0⟩ + ∑_j≥ 1λ^j |n_j⟩ ,
where E^(n)_0 and |n_0⟩ are, respectively, the eigenvalues and eigenvectors of H_0. The expansion coefficients E^(n)_j and |n_j⟩ are obtained from the equation H |n⟩ = E^(n)|n⟩ by multiplying on the left by the eigenstates of H_0 and equating the coefficients of the powers of λ <cit.>. This procedure provides recursion relations for the expansion coefficients, from which explicit expressions can be obtained.
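As a quick numerical sanity check of this expansion at lowest nontrivial order (illustrative only; the matrix V and the spectrum of H_0 below are arbitrary), one can compare the familiar second-order result E^(n) ≈ E^(n)_0 + λ^2 ∑_m≠n |V_nm|^2/(E^(n)_0 − E^(m)_0) with exact diagonalization.

import numpy as np

rng = np.random.default_rng(0)
dim, lam = 4, 0.05
E0 = np.array([0.0, 1.0, 2.3, 3.1])              # non-degenerate spectrum of H_0
V = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
V = (V + V.conj().T) / 2                          # make V Hermitian
np.fill_diagonal(V, 0.0)                          # convention V_nn = 0 used in the text

exact = np.linalg.eigvalsh(np.diag(E0) + lam * V)

# second-order TIPT: E^(n) ~ E0_n + lam^2 * sum_{m != n} |V_nm|^2 / (E0_n - E0_m)
pt2 = np.array([E0[n] + lam**2 * sum(abs(V[n, m])**2 / (E0[n] - E0[m])
                                     for m in range(dim) if m != n)
                for n in range(dim)])
print(np.sort(pt2) - exact)                       # residuals are O(lam^3)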
Here, since we want to consider an emitter interacting with up to four electromagnetic modes (so as to explicitly include the generation of third-order nonlinearities) we will need to calculate the eigenvalues and eigenvectors of a Hamiltonian H with four perturbative terms:
H = H_0 + λ V + ν X + η Y + ξ Z .
Here H_0 is a Hamiltonian whose eigenvectors and eigenvalues are known and λ, ν, η, and ξ are expansion parameters associated with the perturbative terms V, X, Y, and Z, respectively. Looking ahead briefly, these perturbative Hamiltonians will have the form
V = A^† a + A a^† , X = B^† b + B b^†
Y = C^† c + C c^† , Z = D^† d + D d^† ,
in which a, b, c, and d are mode operators for four different electromagnetic modes, and the operators A, B, C, and D will in general be lowering operators (or sums of more than one lowering operator) each of which couple two levels of the emitter.
To compute the eigenvalues and eigenvectors of H, we use multi-parameter time-independent perturbation theory (TIPT). This method extends the usual TIPT to the case where multiple perturbations are present <cit.>. To do so we assume that the eigenvalues and eigenvectors of H may be written in the form
E^(n) = ∑_j,k,l,q E^(n)_jklqλ^j ν^k η^l ξ^q ,
|n⟩ = ∑_j,k,l,qλ^j ν^k η^l ξ^q |n_jklq⟩ ,
where the sums over all the indices j, k, l, q run from zero to infinity, and E^(n)_0000 = E^(n)_0 and |n_0000⟩ = |n_0⟩ are, respectively, the eigenvalues and eigenvectors of H_0. To determine the eigenvectors and eigenvalues of H we use the eigenvalue equation H |n⟩ = E^(n)|n⟩ and substitute in the expansion forms for E^(n) and |n⟩ given above. As in single-parameter TIPT we derive recursion relations for the expansion coefficients E^(n)_jklq and |n_jklq⟩ by multiplying the eigenvalue equation on the left by ⟨m_0|. Solving the recursion relations we obtain expressions for the expansion coefficients (see Table <ref>).
We then use the eigenvalue and eigenvector expansions to calculate quantities of interest for the system described by the Hamiltonian H.
When using TIPT we can either choose the expansion parameters to be interaction rates between the primary and the perturbing systems, or we can include all physical content in the perturbation operators, V, X, etc., relegating the expansion parameters to a purely formal role as dimensionless quantities. If we choose the latter, which is conventional in some circles, then we set the expansion parameters to unity in the above expressions for the eigenvectors and eigenvalues, so that
E^(n) = ∑_j,k,l,q E^(n)_jklq ,
|n⟩ = ∑_j,k,l,q |n_jklq⟩ .
In the example we treat in Section <ref>, we choose the expansion parameters to be interaction rates and thus use Eqs.(<ref>) and (<ref>).
§.§ Calculating the TIPT expansion coefficients
As noted above, recursion relations for the TIPT expansion coefficients can be derived from the eigenvalue equation for H. The recursion relations for a three-parameter expansion, in which the perturbation Hamiltonians are V, X, and Y, are
E^(n)_jkl = ∑_q≠n [ V_nq⟨ q_0 |n_j-1,k,l⟩ + X_nq⟨ q_0 | n_j,k-1,l⟩ + Y_nq⟨ q_0 | n_j,k,l-1⟩] - ∑_qpr≠000, qpr≠jkl^j,k,l E^(n)_qpr⟨ n_0 | n_j-q,k-p,l-r⟩ ,
⟨ m_0 | n_j,k,l⟩ = 1/Δ_nm∑_q≠m [ V_mq⟨ q_0 |n_j-1,k,l⟩ + X_mq⟨ q_0 | n_j,k-1,l⟩ + Y_mq⟨ q_0 | n_j,k,l-1⟩] - ∑_qpr≠000, qpr≠jkl^j,k,l( E^(n)_qpr/Δ_nm) ⟨ m_0 | n_j-q,k-p,l-r⟩ , m ≠ n ,
⟨ n_0 | n_j,k,l⟩ = - 1/2∑_qpr≠000, qpr≠jkl^j,k,l∑_m⟨ n_q,p,r |m_0 ⟩⟨ m_0 | n_j-q,k-p,l-r⟩
where |n_000⟩≡ |n_0⟩ and we have defined
Δ_nm≡ E^(n)_0 - E^(m)_0 .
The summation subscript x,y,z ≠ abc means that the single term in which x=a, y=b, z=c is excluded from the sum. If not specified, all sums run from 0 though N-1 where N is the dimension of the emitter. We always set
V_nn = X_nn = Y_nn = 0
since any non-zero diagonal elements of the perturbation operators can always be absorbed into H_0. A result of this convention is that
E^(n)_100 = E^(n)_010 = E^(n)_001 = 0
⟨ n_0 |n_100⟩ = ⟨ n_0 |n_010⟩ = ⟨ n_0 |n_001⟩ = 0 .
We give the recursion relations for three instead of four parameters here because i) the four-parameter recursion relations do not fit neatly on the page, and ii) given the three parameter relations it is simple to generalize to any number.
There is a great deal of symmetry in the expansion coefficients. For example, the expansion coefficient E^(n)_010 is the same as E^(n)_100 but with V replaced by X. Similarly, the 2-parameter expansion coefficient E^(n)_jk is the same as the 4-parameter expansion coefficient E^(n)_jk00, and is the same as E^(n)_0j0k up to the replacements V → X and X → Z. Thus specifying the expansion coefficient E^(n)_jk gives us all expansion coefficients for which exactly two indices are nonzero.
We now introduce a compact notation that will help to present all distinct coefficients for expansion terms to 4^th order for up to four fields (Table <ref>). Consider a product of a set of symbols A, B, C, … in which each symbol has two subscripts. An example is the product A_ab B_bc C_cd. We define 𝖠𝖫𝖫𝖯 A_ab B_bc C_cd⋯ as the sum of products of all permutations of the symbols A, B, C, …, where the subscripts do not permute with their symbols (the subscripts stay in the same place). The following examples provide clarification of these rules:
𝖠𝖫𝖫𝖯 A_ab B_bc C_cd = A_ab B_bc C_cd + A_ab C_bc B_cd
+ B_ab A_bc C_cd + B_ab C_bc A_cd
+ C_ab A_bc B_cd + C_ab B_bc A_cd
𝖠𝖫𝖫𝖯 A_ab B_bc B_cd = A_ab B_bc B_cd + B_ab A_bc B_cd
+ B_ab B_bc A_cd .
Using this notation we give all distinct expressions for the expansion coefficients up to 4^th order in Table <ref>. Since we write these explicitly for the ground state (n=0), we use
Δ_k≡ E^(0)_0 - E^(k)_0 .
All coefficients for the ground state for up to four fields are obtained by taking the coefficients in Table <ref> and permuting their subscripts, and/or setting one or more of the subscripts to zero.
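The ALLP operation is purely combinatorial and can be implemented mechanically. The sketch below (a hypothetical helper, written here only to make the bookkeeping explicit) sums the products of matrix elements over all distinct orderings of the labelled operators while the subscript slots stay fixed; with a repeated label it reproduces the three-term example above rather than double counting.

import numpy as np
from itertools import permutations

def allp(ops, slots):
    # ops: list of (label, matrix) pairs; slots: list of (row, col) subscript pairs.
    # Sums the products of matrix elements over all distinct orderings of the labelled
    # operators, with the subscript slots held fixed (repeated labels are counted once).
    seen, total = set(), 0.0
    for perm in permutations(ops):
        labels = tuple(lbl for lbl, _ in perm)
        if labels in seen:
            continue
        seen.add(labels)
        term = 1.0
        for (_, M), (row, col) in zip(perm, slots):
            term *= M[row, col]
        total += term
    return total

A = np.arange(9, dtype=float).reshape(3, 3)
B, C = A + 1.0, A + 2.0
print(allp([("A", A), ("B", B), ("C", C)], [(0, 1), (1, 2), (2, 0)]))  # six terms
print(allp([("A", A), ("B", B), ("B", B)], [(0, 1), (1, 2), (2, 0)]))  # three terms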
§.§ Applying TIPT to perturbations by external systems
We now consider applying TIPT to the situation in which an interaction between a “primary" system and another system is perturbative (small) compared to the energy separation between the states of the primary system but not compared to the energy separations of the states of the secondary system. In this case, rather than diagonalizing the Hamiltonian of the primary system alone, TIPT will allow us to block-diagonalize the Hamiltonian. Each block corresponds to (is a perturbed version of) one of the subspaces defined by the original eigenstates of the primary system. This situation is depicted in Fig. <ref>. The unperturbed subspaces, each defined by a state of the primary system, are separated in energy, and applying TIPT with respect to the primary system determines corresponding perturbed subspaces that are not mixed together by the joint Hamiltonian. The joint Hamiltonian thus acts non-trivially only within the subspaces. We show the block-diagonalization of the Hamiltonian written as a matrix in Fig. <ref>.
In our case the primary system will be an emitter and the secondary system (or systems) will be modes of the electromagnetic field. Thus the subspaces defined by the states of the emitter will be separated in energy so long as the modes are not too highly populated. (Note that for the rotating-wave interaction all the states of the modes are effectively degenerate, a degeneracy which is removed by the interaction.)
We will find that the action of the full Hamiltonian on each subspace is nonlinear for the field modes and is different on each subspace. It is this action that gives the effective nonlinearities for the fields. Note that since the eigenstates making up each subspace are weakly entangled “polariton” states of the emitter and the field modes, to generate the nonlinear evolution for the fields alone would require turning on the interaction adiabatically to transform the field states to the polariton states, allowing the effective nonlinearity to act, and then turning off the interaction adiabatically.
We now turn to the mathematical details. Consider a single perturbation parameter in which the perturbation is due to an interaction with another system. In this case the Hamiltonian is
H = H_0 ⊗ I + λ V
in which H_0 is the Hamiltonian of the emitter, I is the identity operator for the perturbing system, and the interaction operator V is in general a sum of tensor products of operators of the emitter and the perturbing system:
V = ∑_j W^j⊗Λ_j .
Using the matrix representation of the tensor product, V is now a matrix indexed by the states of the emitter in which each element, V_jk, is an operator that acts on the perturbing system (equivalently V_jk is a matrix with the dimensions of the perturbing system):
V = ( [ V̂_11 V̂_12 ⋯; V̂_21 V̂_22 ⋯; ⋮ ⋮ ⋱ ]) .
If the perturbation is merely the product of a single Hermitian operator of the emitter, W, and an operator for the perturbing system, Λ̂, then the elements of V are simply V̂_jk = W_jkΛ̂ as depicted in Fig. <ref>. Time-independent perturbation theory, as derived above in Sections <ref> and <ref>, can be applied directly to the general situation given in Eq.(<ref>) by taking the elements V_jk to be operators instead of numbers. The recursion relations for TIPT remain valid because in deriving them we were careful to respect the order in which the matrix elements of V are multiplied together. Similarly the expressions for the terms in the expansions for the eigenvalues (Table <ref>) and eigenvectors (Appendix <ref>) are valid when the matrix elements V_jk are operators.
To understand the meaning of the expansion for the eigenvectors when the matrix elements of V are operators we begin by writing out the first few terms in this expansion (these terms are given in Appendix <ref>):
|n⟩ = ∑_jλ^j |n_j⟩ =
[ 1 - λ^2/2∑_l≠ nV_nlV_ln/Δ_nl^2 + …] |n_0⟩
+ λ∑_l≠ n[ V_ln/Δ_nl + λ∑_q≠l,nV_lqV_qn/Δ_nlΔ_nq + …] |l_0⟩
Since the matrix elements of V are field-mode operators they operate on the field states. As such, the above expressions need some explanation because we have not included any field states for these operators to act on. We can think of the state |n_0⟩ (and similarly |l_0⟩) in the expansion above as representing any of the tensor product states |n_0⟩|j⟩ where |j⟩ is a field state. Thus |n_0⟩ represents the entire subspace {|n_0⟩|j⟩ : j = 0, 1, …, ∞}. If we specify a particular field state on the right-hand side of Eq.(<ref>) by replacing |n_0⟩ by |n_0⟩|j⟩, then the left-hand side is the corresponding perturbed state of the joint system. The state |n⟩ on the left-hand side of Eq.(<ref>) thus represents all the states in the perturbed subspace corresponding to the unperturbed subspace |n_0⟩. Denoting the states in the perturbed subspace |n⟩ by |n,j⟩ we have
|n,j⟩ =
[ 1 - λ^2/2∑_l≠ nV_nlV_ln/Δ_nl^2 + …] |n_0⟩|j⟩
+ λ∑_l≠ n[ V_ln/Δ_nl + λ∑_q≠l,nV_lqV_qn/Δ_nlΔ_nq + …] |l_0⟩|j⟩ .
We will use |n⟩⟨n| to denote a projector onto the perturbed subspace represented by |n⟩. By multiplying both sides of the above equation on the left by ⟨m_0|⟨k| we obtain the coefficients of the state |n,j⟩ in the original product basis |m_0⟩|k⟩. Denoting the matrix elements of the mode operator V_nl by V_nl^(kj) the result is
⟨m_0|⟨k| n,j⟩ = {[ 1 - λ^2/2∑_q∑_l≠ n V_nl^(kq)V_ln^(qj)/Δ_nl^2 + … , m = n; λ[ V_mn^(kj)/Δ_nm + λ∑_q≠m,n∑_r V_mq^(kr)V_qn^(rj)/Δ_nmΔ_nq + …] , m ≠ n ].
By construction the action of the Hamiltonian on the perturbed subspace defined by |n⟩ is given by E^(n):
H |n⟩ = Ê^(n)|n⟩ ,
which also preserves the subspace (the action of H is block-diagonal). To remind ourselves that Ê^(n) is an operator on the field modes we have added a “hat" to it. This operator therefore gives the effective Hamiltonian for the field for subspace |n⟩, so that in general there is a different effective Hamiltonian for each subspace.
Finally, we will find it useful to examine the special case in which the emitter/field interaction is simply a product of a Hermitian operator of the emitter, which we will denote here by V, and a Hermitian operator Λ of the field so that
H = H_0 ⊗ I + V ⊗Λ .
For each eigenstate of Λ, which we will denote by |λ⟩, the Hamiltonian becomes
H = H_0 + λ V ,
which is simply the usual single-system perturbation in which the value of the expansion parameter is the eigenvalue of Λ. In this case the states of the perturbed subspace originating from the emitter state |n_0⟩ are the product states |n(λ)⟩|λ⟩ where |n(λ)⟩ is the perturbed emitter state when the scalar expansion parameter is equal to λ (Eq.(<ref>)). We will use this fact in Sections <ref> and <ref>.
§.§ The effective nonlinear Hamiltonian for the field
For the purposes of using an emitter to generate effective nonlinearities for up to four field modes we couple each mode to a different emitter transition, and all the couplings are perturbative from the point of view of the emitter. The Hamiltonian thus has the form given in Eqs. (<ref>), (<ref>), and (<ref>). At zero temperature (which for optical fields is equivalent to room temperature) the joint system will be in the subspace defined by the perturbed ground state eigenvector, |0⟩. As explained above, the action of the Hamiltonian on this subspace is given by the field operator Ê^(0) so that the effective Hamiltonian for the field is
H_eff = Ê^(0) = ∑_j,k,l,q (λ^j ν^k η^l ξ^q) Ê^(0)_jklq .
Each term in the expansion on the RHS is a product of the field interaction operators. More specifically, a term Ê^(0)_jklq contributes a product of m = j+k+l+q mode operators, and thus a specific nonlinearity. The first-order terms (m=1) vanish as described above. The second-order terms (m=2) give either linear interactions between the modes or frequency shifts of individual modes. The terms of third and higher order (m ≥ 3) generate nonlinearities. Terms that are m^th-order in the perturbation expansion correspond to (m-1)^th-order nonlinearities. Thus the expansion terms we give in Table <ref> are sufficient to describe the generation of all nonlinearities up to third order with up to four fields. The reason that the order of the nonlinearity is one less than the order of the expansion is that the former is defined with reference to the differential equation for the electric field rather than the Hamiltonian (and we are now stuck with it).
Examining the expressions for the expansion terms E_jklq in Table <ref> we see that the value of each of the subscripts (j,k,l,q) tells us how many times the elements of the corresponding interaction operator (one of V, X, Y, Z) appears in the products that make up that term. Since each element of V, X, Y, Z contains, respectively, a mode operator for mode a, b, c, and d, the subscripts tell us how many mode operators for each of the modes appear in the nonlinearity generated by that term. They do not, however, tell us whether the mode operators are annihilation or creation operators or a mix of both. That depends on exactly which elements of the interaction operators contribute to the term and whether these are upper or lower diagonal (raising or lowering). In Table <ref> we give examples of the nonlinear terms that can be generated by each expansion term. We have written these for the ground-state subspace, but those for the other subspaces are obtained merely by replacing Ê^(0)_ijkl with Ê^(n)_ijkl.
§.§ Size of the coupling rates
As we discussed in Section <ref>, perturbation theory is only valid so long as the power series in the perturbation parameters converges. For j,k,l,q all greater than unity, each term in the power series expansion for the “eigenvalues" (e.g., the expansion for Ê^(0) in Eq.(<ref>) above) has the form
(λ^j ν^k η^l ξ^q) Ê^(n)_jklq∼( λ^j |V|^j/Δ^j-1) ( ν^k |X|^k/Δ^k-1) ( η^l |Y|^l/Δ^l-1) ( ξ^q |Z|^q/Δ^q-1)
∼( λ^j ⟨ a^† a⟩^j/2/Δ^j-1) ( ν^k ⟨ b^† b ⟩^k/2/Δ^k-1) ( η^l ⟨ c^† c⟩^l/2/Δ^l-1) ( ξ^q ⟨ d^† d⟩^q/2/Δ^q-1)
in which Δ is on the order of the detunings (or the transition frequencies if in the bare coupling regime) and |V|, |X|, etc., are on the order of the typical matrix elements of the respective interaction operators, V, X, etc. A sufficient condition for the convergence of the power series is that each of the products in the above expression is less than unity. This will be true if λ satisfies
λ≪Δ/√(⟨ a^† a⟩) ,
and similarly for the coupling rates ν, η, ξ. These conditions on the coupling rates are sufficient for the perturbation series to converge but constitute bounds on the coupling rates only if they prove also to be necessary for this convergence. For single emitters this appears always to be the case, but for ensembles as we will see later the situation is more complex.
§.§ Selection of nonlinearities by resonance conditions and the rotating-wave approximation
By choosing the right level structure and driving configuration one can tailor the coefficients of the power series for Ê^(0) to choose which nonlinearities will be generated. Below we apply our theory to the scheme of Schmidt and Imamoḡlu in which a 4-level emitter is driven so as to generate a cross Kerr nonlinearity without the associated self-Kerr nonlinearities for each of the modes.
Driving the emitter to tailor the interaction operators V, X, etc. is not the only mechanism that selects which nonlinearities will be active. Let us denote the frequency of mode a_l by ω_l. A nonlinear term of the form
𝒩 = a_1^j_1 a_1^† k_1 a_2^j_2 a_2^† k_2⋯ a_n^j_n a_n^† k_n
has time dependence given by e^-iΩ t in the interaction picture where
Ω = (j_1-k_1) ω_1 + (j_2-k_2) ω_2 + ⋯ + (j_n-k_n) ω_n .
If Ω is much larger than the rate of the evolution generated by the nonlinear term itself, then the oscillation at frequency Ω will cause this effect to average to zero. Since the frequencies of optical modes are typically much larger than the rates of emitter-generated nonlinearities, the latter only survive under the condition that Ω = 0. We note that for the terms a^† a (frequency shift), (a^† a)^2 (self-Kerr), and a^† a b^† b (cross-Kerr), the oscillation frequency Ω is always zero regardless of the frequencies of the modes. Conversely, the oscillation frequency of a^† a^2 can only be zero if the frequency of the field mode is zero, so this term is not typically active.
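As a concrete illustration of this selection rule, the short Python sketch below (our own illustrative example; the function name and the frequencies are not taken from the text) evaluates Ω for a few candidate terms and flags which ones survive the averaging:

def oscillation_frequency(term, mode_freqs):
    """Omega = sum_i (j_i - k_i) * omega_i for a term prod_i a_i^{j_i} a_i^{dag k_i}."""
    return sum((j - k) * w for (j, k), w in zip(term, mode_freqs))

omega = (2.4e15, 2.5e15)   # illustrative optical mode frequencies (rad/s)
candidates = {
    "a†a        (frequency shift)": ((1, 1), (0, 0)),
    "(a†a)^2    (self-Kerr)":       ((2, 2), (0, 0)),
    "a†a b†b    (cross-Kerr)":      ((1, 1), (1, 1)),
    "a† a^2     (second order)":    ((2, 1), (0, 0)),
}
for name, term in candidates.items():
    Om = oscillation_frequency(term, omega)
    print(f"{name}: Omega = {Om:+.2e} rad/s -> "
          f"{'survives' if Om == 0 else 'averages to zero'}")

As expected, the shift, self-Kerr, and cross-Kerr terms have Ω = 0 for any mode frequencies, while a† a^2 does not.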
§.§ Full nonlinear dynamics of the field: the master equation
We can now derive the effective Hamiltonian for the field, but we also need to include the effect of spontaneous decay of the emitter energy levels. For this we need to construct the effective master equation for the field. The spontaneous decay of the emitter is described by a Lindblad master equation for the emitter density matrix, ρ_e <cit.>:
ρ̇_e = -i/ħ[H̃,ρ_e ] + ∑_nγ_n/2( 2 σ̃_n0ρ_eσ̃^†_n0 - σ̃^†_n0σ̃_n0ρ_e - ρ_eσ̃^†_n0σ̃_n0)
in which the transition operators are σ̃_n0 = |0̃⟩⟨ñ| and γ_n is the decay rate from level |ñ⟩ to the ground state. Here we restrict our analysis to levels that decay directly to the ground state. Decay to other levels has a more complex effect on the effective dynamics which we will discuss in Section <ref>.
As discussed above, so long as the emitter and the field remain in the subspace defined by the perturbation expansion for emitter state |n⟩ (a subspace that contains essentially all field states) the effective dynamics for the field is given by the field operator generated by the expansion for the eigenvalue of |n⟩. From the point of view of practicality, since the field and emitter can be assumed to be in the ground perturbed subspace at zero temperature, it is best to take this subspace, represented by the perturbed emitter state |0⟩, as the one that the emitter and field will (approximately) remain in. To construct the master equation we need to determine the action of the transition operators σ_n0 on the subspace |0⟩. To do this we first transform these operators to the driven basis and then project them onto this subspace. The effective transition operators are thus
σ_n^eff = |0⟩⟨0| U |0̃⟩⟨ñ| U^†|0⟩⟨0| ,
where |0⟩⟨0| denotes the projector onto the subspace |0⟩ (further details are given in Appendix <ref>). Since the effective dynamics remains in the subspace |0⟩, we obtain the action of the transition operators in this subspace by discarding the outer projectors |0⟩ and ⟨0| in the expression above for σ_n^eff. This gives us the effective transition operators purely as operators that act only on the field. Denoting these effective transition operators by Σ_n we have
Σ_n = ⟨0| U |0̃⟩⟨ñ| U^†|0⟩ .
Under the assumption that the driving does not couple the bare ground state of the emitter, |0̃⟩, to the other levels we have U |0̃⟩ = |0_0⟩ (note that the latter is defined in Eq.(<ref>)), and to second order the inner product ⟨0| 0_0 ⟩ for one field is
⟨0| 0_0 ⟩ = 1 + λ^2 ⟨ 0_2 | 0_0 ⟩ + …
and for two fields it can be written as
⟨0| 0_0 ⟩ = 1 +
( [ λ; ν ])^t( [ ⟨ 0_02 | 0_0 ⟩ ⟨ 0_11 | 0_0 ⟩/2; ⟨ 0_11 | 0_0 ⟩/2 ⟨ 0_20 | 0_0 ⟩ ])
( [ λ; ν ]) + …
In this case the inner product ⟨ñ| U^†|0⟩ has no zeroth-order component, and so for two fields is
⟨ñ| U^†|0⟩ = ∑_j=1^∞∑_k=1^∞λ^j ν^k ⟨ñ| U^†|0_jk⟩ .
Since the leading-order term in the transition operators is first order in the field operators, and all terms in the master equation contain a product of two transition operators, to construct the master equation up to 4^th order we need “only" calculate the transition operators to 3^rd order, which in turn means that we only need to determine ⟨0| U |0⟩ to 2^nd order. Once we have obtained the effective transition operators, Σ_n, to the desired order, the effective master equation for the field density matrix, ρ, is
ρ̇ = -i/ħ[H_eff^(0),ρ] + ∑_nγ_n/2( 2 Σ_nρΣ^†_n - Σ^†_n Σ_n ρ - ρΣ^†_n Σ_n )
where H_eff is the effective Hamiltonian generated by the emitter, Eq.(<ref>).
§.§ Preparing the appropriate initial state
We saw in Section <ref> that the effective Hamiltonian for the field is given by Ê^(0) so long as the joint system is in a state that lies in the subspace denoted by the perturbed eigenvector |0⟩. If we assume that the emitter and the field begin in their joint ground state, then since this state is (the lowest energy state) in the subspace |0⟩, the effective Hamiltonian for the field is Ê^(0). However, this nonlinear evolution is hardly useful unless we can prepare the field in other initial states. Certainly the subspace |0⟩ contains (almost) all field states, so in theory we can prepare any initial field state while remaining in this space. One way to do this is to apply a time-dependent Hamiltonian that changes the state of the field slowly compared to the energy gap between the populated part of the subspace |0⟩ and the subspace |1⟩. In this case the joint system will remain in the subspace |0⟩ by the adiabatic theorem. We now show that the use of this adiabatic evolution, or some even more sophisticated procedure, is in fact necessary; if one applies a field Hamiltonian that transforms the initial ground state to some other state too quickly, then the resulting joint state has a component outside the subspace |0⟩.
The situation described by the Hamiltonian H_B in Section <ref>, in which the emitter/field interaction is a product of an operator of the emitter and a field mode, allows us to obtain considerable insight into the joint eigenstates. We can write H_B (Eq.(<ref>)) as
ℋ = H_0 + G Λ + H_f
where G and Λ are operators of the emitter and the field mode, respectively. We define the eigenvalues and eigenstates of the mode operator Λ by Λ|λ⟩ = λ|λ⟩. For each subspace defined by the mode state |λ⟩ the Hamiltonian of the emitter is ℋ_λ = H_0 + λ G and we define the eigenvectors of ℋ_λ as
ℋ_λ|n^(λ)⟩ = Ê^(n)(λ) |n^(λ)⟩ .
The eigenvectors of H_0 + G Λ are now given by |n^(λ)⟩|λ⟩. In particular, we have
(H_0 + G Λ) |n^(λ)⟩|λ⟩ = Ê^(n)(λ) |n^(λ)⟩|λ⟩
= ( ∑_j Ê^(n)_j λ^j ) |n^(λ)⟩|λ⟩
= Ê^(n)(Λ) |n^(λ)⟩|λ⟩ .
Here ∑_j Ê^(n)_j λ^j is the perturbation expansion for the emitter eigenvalues when the perturbation is λ G. Now consider the effective Hamiltonian for the field when, for each eigenstate of Λ, the emitter is in its corresponding eigenstate |n^(λ)⟩:
H_f^(n) = Ê^(n)(Λ) + H_f .
If we write the eigenstates of this Hamiltonian as
|f_m^(n)⟩ = ∑_λ c_mλ^(n)|λ⟩ , m = 0,1,2, …
then we see that the states
|J_m^(n)⟩ = ∑_λ c_mλ^(n)|n^(λ)⟩|λ⟩
are the eigenstates of the joint system. Having constructed these eigenstates, we can write the subspace |0⟩ more explicitly as the space spanned by the set of states
{|J_m^(0)⟩ : m = 0,1,2,…}.
Having established the above form for the joint eigenstates, we can examine what happens if we change the state of the field suddenly. A simple but instructive example is that of displacing the eigenvalues of Λ. If we start in the joint ground state and displace the eigenstates by an amount x the new joint state is
|J_0^(0)⟩_x = ∑_λ c^(0)_0λ|0^(λ)⟩|λ + x⟩ .
This new state no longer lies fully within the emitter ground-state subspace. To determine how much of the higher emitter states are mixed in we need the overlaps
⟨ n^(λ+x)|0^(λ)⟩ = (∑_j ⟨n_j| (λ + x)^j) (∑_k λ^k |0_k⟩)
= (λ + x) ⟨ n |0_1⟩ + λ⟨ n_1 |0⟩ + 𝒪(λ^2)
= (λ + x) V_n0/Δ_n0 - λV_n0/Δ_n0 + 𝒪(λ^2)
= x V_n0/Δ_n0 + 𝒪(λ^2)
where we have used the fact that V_0n = V_n0^* and Δ_0n = -Δ_n0. Thus when the field is changed rapidly there are components of the joint state that contain emitter excited states, where the size of these components is first-order in the perturbation and proportional to the change in the state of the field. The components that contain the emitter excited-state subspaces will induce nonlinear evolution generated by Ê^(n) for n>0, which may be different to that generated by the ground state.
§ QUANTUM TREATMENT OF THE SCHMIDT-IMAMOḠLU CROSS-KERR SCHEME
We give the steps involved in calculating the nonlinearities generated by a single emitter in Table <ref>. As an example of using this method we treat the Schmidt-Imamoḡlu EIT scheme that generates the cross-Kerr nonlinearity <cit.>. This nonlinearity could potentially be used to realize number-resolving QND measurements of photons. In this measurement scheme, first suggested by Imoto, Haus, and Yamamoto <cit.>, a mode containing a sufficiently large coherent state interacts via the cross-Kerr effect with a mode containing only a few photons. The cross-Kerr effect generates a photon-number-dependent phase shift of the coherent state, which can then be measured with homodyne detection. For the purpose of the measurement, so long as the coherent state is shot-noise limited, its amplitude can be increased to offset the weakness of the cross-Kerr nonlinearity. (In the Supplement we give the simplest example, calculating the nonlinearities generated by an undriven two-level system.)
The key problem with the use of bulk crystal nonlinearities for this QND measurement scheme is that the self-Kerr nonlinearity, which is always active in a material with 3^rd-order nonlinearities, swamps the phase-shift signal due to the cross-Kerr nonlinearity <cit.>. What is needed therefore is a way to create a cross-Kerr nonlinearity, given by a^† a b^† b, in which at least one of the modes has no self-Kerr. A particularly effective way to do this was devised by Schmidt and Imamoḡlu <cit.>. Their scheme, which we depict in Fig.<ref>, uses a four-level emitter driven in a way that takes advantage of EIT, in which the spontaneous decay from one of the levels is effectively eliminated through destructive interference.
In the Schmidt-Imamoḡlu scheme the emitter is coupled to two quantum-mechanical field modes so the Hamiltonian in the interaction picture, after applying the rotating-wave approximation, has the form
H = H̃_0 + λṼ + νX̃
with
Ṽ = ħσ_10^† a + ħσ_10 a^† ,
X̃ = ħσ_32^† b + ħσ_32 b^† ,
and where a and b are the mode operators for the respective field modes. Here λ and ν are the coupling rates between the respective cavity modes and the emitter, and so V and X have dimensions of angular momentum. The Hamiltonian of the driven emitter and the interaction operators, in the interaction picture, are
H̃_0/ħ = ( [ 0 0 0 0; 0 0 Ω 0; 0 Ω 0 0; 0 0 0 Δ ]) ,
Ṽ/ħ = ( [ 0 a^† 0 0; a 0 0 0; 0 0 0 0; 0 0 0 0 ]) ,
X̃/ħ = ( [ 0 0 0 0; 0 0 0 0; 0 0 0 b^†; 0 0 b 0 ])
Here Δ is the detuning of the field mode b from the upper transition. We first transform to the basis that diagonalizes H̃_0, for which the transformation is
U = ( [ 1 0 0 0; 0 -1/√(2) 1/√(2) 0; 0 1/√(2) 1/√(2) 0; 0 0 0 1 ])
The transformed Hamiltonian and interaction operators are
H_0 = ħ( [ 0 0 0 0; 0 -Ω 0 0; 0 0 Ω 0; 0 0 0 Δ ]) ,
with
V = ħ/√(2)( [ 0 -a^† a^† 0; -a 0 0 0; a 0 0 0; 0 0 0 0 ]) ,
X = ħ/√(2)( [ 0 0 0 0; 0 0 0 b^†; 0 0 0 b^†; 0 b b 0 ]) .
We can now calculate the sizes of the cross-Kerr and two self-Kerr nonlinearities using the expressions for Ê^(0)_22 and Ê^(0)_40≡Ê^(0)_4 in Table <ref>. (The expression for Ê^(0)_04 is merely Ê^(0)_40 with V replaced by X.) Examining the expression for Ê^(0)_22 we find that almost all the terms vanish because X has no elements that couple to state |0⟩. We have
Ê^(0)_22 = ∑_l,q,p ≠ 0
q ≠ l,p V_0l X_lq X_qp V_p0/Δ_lΔ_qΔ_p = V_01 X_13 X_31 V_10/Δ_1Δ_3Δ_1
+ V_01 X_13 X_32 V_20/Δ_1Δ_3Δ_2 + V_02 X_23 X_31 V_10/Δ_2Δ_3Δ_1 + V_02 X_23 X_32 V_20/Δ_2Δ_3Δ_2
= - ħ/ΔΩ^2 a^† a b^† b .
For Ê^(0)_40 the first term vanishes and we have
Ê^(0)_40 = - (∑_l≠ 0V_0lV_l0/Δ_l) (∑_l≠ 0V_0lV_l0/Δ_l^2)
= ħ/4( 1/-Ω + 1 /Ω) ( 1/Ω^2 + 1/Ω^2) ( a^† a)^2
= 0 .
The self-phase modulation for the mode b is automatically zero because X has no elements that couple |0⟩ to the other states.
The Schmidt-Imamoḡlu scheme generates no frequency shifts or second-order nonlinearities because the second- and third-order expansion coefficients are all zero. The effective Hamiltonian to fourth order in the perturbation parameters λ/Ω, λ/Δ, ν/Ω, and ν/Δ, is therefore
H_SI = ħω_a a^† a + ħω_b b^† b + λ^2 ν^2 Ê^(0)_22
= ħω_a a^† a + ħω_b b^† b - ħκ a^† a b^† b
where the rate of the cross-Kerr nonlinearity is
κ = λ(λ/Δ) (ν/Ω)^2 .
To complete the description of the effective dynamics, given by the master equation in Eq.(<ref>), we need to calculate the effective transition operators using Eq.(<ref>):
Σ_1 = ⟨0| 0_0⟩⟨1̃|0⟩ ,
Σ_3 = ⟨0| 0_0⟩⟨3̃|0⟩ .
These transition operators have the respective damping rates γ_1 and γ_3, since in this scheme the levels that decay to the ground state are |1̃⟩ and |3̃⟩. Calculating the exact eigenvectors of the driven emitter coupled to mode a (the matrix H_0 + λ V) shows that the ground state contains no component of the bare state |1̃⟩ — it is the EIT dark state — so ⟨1̃|0⟩ = 0. As a result Σ_1 = 0 and the effective dynamics is unaffected by the decay of level |1̃⟩. This is precisely the EIT effect, induced by the driving that couples levels |1̃⟩ and |2̃⟩, which eliminates absorption into |1̃⟩. On the other hand, to determine ⟨3̃|0⟩ = ⟨ 3_0|0⟩ we have to calculate the coefficients ⟨ 3_0|0_jk⟩ in the expansion
⟨ 3_0|0⟩ = ∑_j,k⟨ 3_0|0_jk⟩λ^j ν^k .
We find that up to 4^th order the only non-zero coefficient is ⟨ 3_0|0_11⟩ = a b/(ΔΩ) (level |3̃⟩ is reached from the ground state by absorbing one photon from each mode), and thus
⟨3̃|0⟩ = λν/ΔΩ a b .
Since we only wish to calculate the effective master equation to 4^th order, and ⟨3̃|0⟩ is already 2^nd order, we need ⟨0| 0_0⟩ only to zeroth order. The result is that
Σ_3 = λν/ΔΩ a b .
Substituting this into the master equation, Eq.(<ref>), and dropping any rapidly oscillating terms (those in which the numbers of creation and annihilation operators differ) we obtain
ρ̇ = -i/ħ[H_SI,ρ] + γ/2( 2 C ρ C^† - C^† C ρ - ρ C^† C )
in which C denotes Σ_3 with its prefactor λν/(ΔΩ) absorbed into the rate
γ = γ_3 ( λ/Δ)^2 ( ν/Ω)^2 .
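As an illustration of this effective dynamics, the following minimal sketch (ours; it assumes the QuTiP toolbox, uses arbitrary illustrative parameter values, and takes the residual damping channel to be the correlated loss of one photon from each mode, C = a b) integrates the master equation above for mode a prepared in a coherent state and mode b containing zero or one photon, and reads out the photon-number-dependent phase of ⟨a⟩ that underlies the QND measurement scheme discussed earlier:

import numpy as np
import qutip as qt

kappa, gamma = 1.0, 0.02        # illustrative cross-Kerr and damping rates
Na, Nb = 15, 3                  # Fock-space truncations for modes a and b

a = qt.tensor(qt.destroy(Na), qt.qeye(Nb))
b = qt.tensor(qt.qeye(Na), qt.destroy(Nb))

H = -kappa * a.dag() * a * b.dag() * b   # H_SI in the rotating frame, hbar = 1
c_ops = [np.sqrt(gamma) * a * b]         # effective collapse operator C = a b

times = np.linspace(0.0, 1.0, 101)
for n_b in (0, 1):
    psi0 = qt.tensor(qt.coherent(Na, 2.0), qt.basis(Nb, n_b))
    result = qt.mesolve(H, psi0, times, c_ops=c_ops, e_ops=[a])
    phase = np.angle(result.expect[0][-1])
    print(f"n_b = {n_b}: phase of <a> at t = 1 is {phase:+.3f} rad")
# The difference between the two phases approaches kappa * t, the
# photon-number-dependent phase shift, degraded slightly by the loss channel.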
§ GENERATION OF NONLINEARITIES BY ENSEMBLES
Up to this point all our analysis has been concerned with the generation of nonlinearities by a single emitter. We now consider applying the tools we have developed to an ensemble of N identical independent emitters. This more complex scenario exhibits new phenomena. In particular the behavior of the two kinds of emitter/field interactions described in Section <ref>, those given by H_B (Eq.(<ref>)) and H_R (Eq.(<ref>)), is very different for large N. The first of these interactions is able to generate nonlinearities that scale with the number of emitters at the few-photon level, whereas the second is not. The fact that H_B is able to generate single-photon giant nonlinearities also reveals how EIT schemes in the regime of H_R are able to generate nonlinearities that scale with N in the semiclassical limit.
To understand how nonlinearities are generated by an ensemble of identical systems all interacting in the same way with a field mode, insight is provided by the simplest example, an ensemble of two-level emitters. We begin by considering this example, for the case in which the emitter/field interaction has the rotating-wave form.
§.§ An ensemble of two-level emitters in the rotating-wave regime
In the rotating wave regime, for which it is natural to use the interaction picture, the Hamiltonian for N two-level systems (from now on, “qubits") coupled to a field mode is
H_Q = ħΔ/2∑_j=1^N σ_z^(j) + ħλ∑_j=1^N ( σ^(j)† a + σ^(j) a^†) ,
where as usual a is the mode annihilation operator. Given the fact that the field interacts with each emitter in an identical way, if we start with all the emitters in the ground state the field can only excite completely symmetric superpositions of the emitter states (joint states that are symmetric under any permutation of the emitters). The structure of this symmetric subspace has already been worked out as part of the theory of angular momentum: each qubit corresponds to a spin one-half system by virtue of having two levels; the symmetric space is the (N+1)-dimensional space with total angular momentum N/2. The states in this space can be labelled by the number of qubits in the excited state. Denoting this number by n the states are |n⟩ with n = 0, 1, …, N. The action of the operator
J_+ = ∑_j σ^(j)†
on this space is that of a (nonlinear) raising operator:
J_+ |n⟩ = √((N-n)(n+1))|n+1⟩ .
As N increases, the action of J_+ at low excitation becomes more and more like that of a harmonic-oscillator creation operator, since for n ≪ N its matrix elements approach √(N)√(n+1). This is a reflection of the fact that the dynamics of a large ensemble is approximately linear when the excitation of the atoms is low.
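The approach to linearity is easy to see numerically; the sketch below (purely illustrative) builds J_+ from the equation above and compares its low-excitation matrix elements with those of a harmonic-oscillator creation operator scaled by √N:

import numpy as np

def J_plus(N):
    """Matrix of J_+ in the symmetric subspace {|n>, n = 0..N}."""
    Jp = np.zeros((N + 1, N + 1))
    for n in range(N):
        Jp[n + 1, n] = np.sqrt((N - n) * (n + 1))
    return Jp

N = 50
Jp = J_plus(N)
for n in range(4):
    exact = Jp[n + 1, n]
    oscillator = np.sqrt(N) * np.sqrt(n + 1)   # sqrt(N) x oscillator matrix element
    print(f"n = {n}: <n+1|J_+|n> = {exact:.3f},  sqrt(N)*sqrt(n+1) = {oscillator:.3f}")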
Writing the Hamiltonian explicitly in the symmetric subspace we have H = H_0 + λ V with
H_0 = ħΔ∑_n=0^N n |n⟩⟨n| ,
V = ħ ( J_+ a + J_- a^†) ,
where J_- ≡ J_+^†. We can now apply the TIPT machinery we developed in Section <ref> directly to this Hamiltonian to determine the nonlinearities generated by the ensemble of qubits. To distinguish the expansion coefficients for the ensemble from those for a single emitter, we will denote the former using the blackboard bold font (𝔼). To fourth order the expansion coefficients for the ensemble are
𝔼^(0)_2 = ∑_l≠ 0V_0lV_l0/Δ_l = ħ⟨0| J_- |1⟩⟨1| J_+ |0⟩ a^† a/(-Δ) = - ħ(N/Δ) a^† a
𝔼^(0)_3 = 0
𝔼^(0)_4 = ∑_l,k,q ≠ 0
q ≠ k,l V_0kV_kqV_qlV_l0/Δ_q Δ_k Δ_l - ∑_k,l≠ 0V_0kV_k0 V_0lV_l0/Δ_k^2 Δ_l
= V_01V_12V_21V_10/(-2 Δ^3) - V_01V_10 V_01V_10/(-Δ^3)
= ħ N /Δ^3 (a^† a)^2 + ħ N(N-1) /Δ^3 a^† a
The essential result here is that even though the symmetric subspace becomes increasingly linear as N increases, it is nevertheless non-linear enough to generate a nonlinearity, 𝔼^(0)_4, that increases with N. While the dynamics of the ensemble may be essentially linear when interacting resonantly with other systems (for low excitations), for perturbative interactions it becomes effectively more nonlinear as N increases.
The effective Hamiltonian for the field mode, in the Schrödinger picture, is
H_eff = ħ[ ω - λ N ϵ_λ + λ N (N-1) ϵ_λ^3 ] a^† a + ħλ N ϵ_λ^3 (a^† a)^2 ,
with ϵ_λ≡λ/Δ. The self-Kerr nonlinearity is the coefficient of (a^† a)^2 (divided by ħ), and is thus
κ_s = λ N ϵ_λ^3 = N λ^4 /Δ^3 .
We see that the size of the Kerr nonlinearity increases with N, and while the frequency shift also increases with N, there is an additional contribution to this shift at next-to-leading order in ϵ_λ that scales as N^2.
Now recall that the interaction operator between the field and the emitters in the symmetric subspace is given by J^+, and that the resulting matrix element that couples the ground state to the first symmetric excited state contains a factor of √(N). This is the effect discovered by HBP <cit.>, and given the bound in Eq.(<ref>) it strongly suggests that to remain in the perturbative regime for the ensemble of emitters requires that
λ≪Δ/√(N ⟨ a^† a ⟩) .
This would effectively eliminate the linear increase in the nonlinearity in Eq.(<ref>) because λ must be reduced as N increases. We will show in Section <ref> that, remarkably, the √(N) factor in the ground-state coupling for ensembles does not necessarily result in the bound given by Eq.(<ref>).
First, however, we show how to calculate nonlinearities for ensembles of any multi-level emitter.
§.§ Applying TIPT to the symmetric subspace of an ensemble of arbitrary emitters
We now show how to perform the above analysis for an ensemble of emitters with an arbitrary number of levels. To do so, one must calculate the matrix elements of the perturbative interaction in the symmetric subspace of the ensemble. By inspection we see that n^th order terms in the perturbation expansion for a single emitter involve excited states that can be reached from the ground state by taking n/2 steps, in which each step is a matrix element coupling two states. Translating this to an ensemble, n^th-order terms in the perturbation expansion involve only matrix elements that connect the ground state to states in which at most n/2 emitters are in excited states. To calculate the perturbation expansion to 4^th order we therefore need only the matrix elements of the interaction Hamiltonian in the subspace spanned by the ground state and symmetric states in which there are only one or two emitters in an excited state.
Consider an ensemble of N emitters, each with M levels. As before we will denote the levels of a single emitter by |j⟩, j = 0,1, …, M-1, with |0⟩ being the ground state. We will denote the state of the whole ensemble in which every emitter is in its ground state by |⟩. Let us also denote the symmetric states in which n_j atoms are in excited state |j⟩ by |n_j⟩, and those in which n_j atoms are in state |j⟩ and n_k atoms are in state |k⟩ by |n_j,n_k⟩. The interaction Hamiltonian between emitter n and the field is V_n = V, so that the total interaction Hamiltonian is
𝒱 = ∑_n=0^N-1 V_n
where the matrix elements of each V_n are V_n^(jk) = V_jk. Because this interaction Hamiltonian is symmetric under any permutation of the emitters, starting from the ground state it can only generate symmetric states.
By applying an arbitrary 𝒱 to the ground state a few times one can readily calculate the following matrix elements of 𝒱:
⟨1_j|𝒱|⟩ = √(N) V_j0
⟨(n+1)_j|𝒱|n_j⟩ = √((N-n)(n+1)) V_j0
⟨1_k|𝒱|1_j⟩ = V_kj
⟨1_k, 1_j|𝒱|1_j⟩ = √(N-1) V_k0
⟨1_k,1_j|𝒱|2_j⟩ = √(2) V_kj
⟨1_l,1_j|𝒱|1_k,1_j⟩ = V_lk .
The above matrix elements include all those between symmetric states with no more than two atoms in an excited state. From Eq.(<ref>) we see that the coupling between the ground state and any level to which it is coupled by the single-emitter interaction, V, is increased by a factor of √(N). This is the effect noted by HBP <cit.>, and it impacts the scaling of the effective nonlinearities in the RWA regime as we will show below. Note, however, that the couplings between any two levels that are not the ground state remain independent of N, and are at most modified by a small factor (e.g. Eqs.(<ref>), (<ref>), and (<ref>)).
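These matrix elements can be checked directly for a small ensemble. The sketch below (our own check, with an arbitrary Hermitian matrix standing in for the single-emitter interaction and the field part treated as a scalar) verifies the √N enhancement of the ground-state coupling, Eq.(<ref>), for N = 3 emitters with M = 3 levels:

import numpy as np

M, N = 3, 3
rng = np.random.default_rng(1)
V1 = rng.normal(size=(M, M))
V1 = V1 + V1.T                                  # arbitrary Hermitian single-emitter V

def embed(op, site):
    """Single-emitter operator op acting on the given site of the N-emitter chain."""
    ops = [np.eye(M)] * N
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

Vtot = sum(embed(V1, n) for n in range(N))      # the ensemble interaction

def one_excited(j):
    """Normalized symmetric state |1_j>: one emitter in level j, the rest in level 0."""
    vec = np.zeros(M ** N)
    for site in range(N):
        levels = [0] * N
        levels[site] = j
        vec[int(np.ravel_multi_index(levels, (M,) * N))] = 1.0
    return vec / np.linalg.norm(vec)

ground = np.zeros(M ** N)
ground[0] = 1.0                                 # all emitters in their ground state
for j in (1, 2):
    lhs = one_excited(j) @ Vtot @ ground
    print(f"<1_{j}|V|g> = {lhs:+.4f},  sqrt(N)*V_{j}0 = {np.sqrt(N) * V1[j, 0]:+.4f}")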
The matrix elements given in Eqs.(<ref>) through (<ref>) are sufficient to calculate the nonlinearities generated by any ensemble up to 4^th order. As an example, consider the self-Kerr nonlinearity generated for a single mode by an ensemble of arbitrary emitters. Using the following notation for the matrix elements of 𝒱, 𝒱_xy≡𝒱_(x)(y)≡⟨x|𝒱|y⟩
we have
𝔼^(0)_4 = ∑_p,q,r
q ≠ p,r𝒱_0r𝒱_rq𝒱_qp𝒱_p0/Δ_rΔ_qΔ_p - ∑_p,q|𝒱_p0|^2 |𝒱_q0|^2 /Δ_p^2Δ_q
= ∑_j𝒱_0(1_j)𝒱_(1_j)(2_j)𝒱_(2_j)(1_j)𝒱_(1_j)0/Δ_2_jΔ_1_j^2 + ∑_j,k
j≠ k𝒱_0(1_j)𝒱_(1_j)(1_j,1_k)𝒱_(1_j,1_k)(1_j)𝒱_(1_j)0/Δ_1_j,1_kΔ_1_j^2
+ ∑_j,k
j≠ k𝒱_0(1_k)𝒱_(1_k)(1_j,1_k)𝒱_(1_j,1_k)(1_j)𝒱_(1_j)0/Δ_1_kΔ_1_j,1_kΔ_1_j + ∑_j,k,l
k ≠ j,l𝒱_0(1_l)𝒱_(1_l)(1_k)𝒱_(1_k)(1_j)𝒱_(1_j)0/Δ_1_lΔ_1_kΔ_1_j
- ∑_j,k | 𝒱_(1_j)0|^2 | 𝒱_(1_k)0|^2/Δ_1_j^2 Δ_1_k
= N(N-1) ∑_j[ V_0j^2 V_j0^2/Δ_j^3
+ ∑_k ≠ j V_0j V_0k V_k0 V_j0/Δ_j^2 (Δ_j + Δ_k) + V_0k V_0j V_k0 V_j0/Δ_jΔ_k (Δ_j + Δ_k)] + N ∑_j,k,l
k ≠ j,l V_0l V_lk V_kj V_j0/Δ_lΔ_kΔ_j - N^2 ∑_j,k | V_j0|^2 | V_k0|^2 /Δ_j^2 Δ_k
If we apply the condition that V_k0 and V_j0 commute, which is true for both the RWA and product (bare-coupling) interactions, then we can rearrange this as
𝔼^(0)_4 = N(N-1)∑_j,k[ V_0j V_0k V_k0 V_j0/Δ_j^2 Δ_k ]
+ N∑_j,k,l
k ≠ j,l V_0l V_lk V_kj V_j0/Δ_lΔ_kΔ_j - N^2 ∑_j,k | V_j0|^2 | V_k0|^2 /Δ_j^2 Δ_k
= N Ê^(0)_4
+ N(N-1) ∑_j,k V_0j( [ V_0k, V_j0] /Δ_j^2 Δ_k ) V_k0 .
We see from this expression that if [V_0k,V_j0] = 0 (in addition to [V_k0,V_j0] = 0) then the second term vanishes and the nonlinearity generated by the ensemble is exactly N times the nonlinearity generated by a single emitter:
𝔼^(0)_4 = N Ê^(0)_4 , [ V_0k, V_j0] = 0 .
This commutator, [ V_0k, V_j0], is zero for the bare emitter/field interaction but not for the RWA interaction. The procedure for calculating nonlinearities for an ensemble of emitters is summarized in Table <ref>.
§.§ Bound on the emitter/field coupling for the “bare-coupling" regime
As discussed above, well-defined nonlinearities can only be generated so long as the coupling between the emitter levels and the field(s) are in the perturbative regime, which places a bound on the size of this coupling, and in turn on the size of the nonlinearities. For an ensemble, as we have seen, the coupling between the ensemble ground and first symmetric excited states is √(N) times the coupling between the ground state of a single emitter and each of its excited states. The crucial question that we must answer is whether the perturbative bound applies to the single emitter coupling or to the resulting ensemble coupling. The latter would place a much more restrictive bound on the emitter coupling and prevent the size of the nonlinearities scaling with the size of the ensemble. Here we answer this question for the situation of “bare-coupling” between the ensemble and the field(s) in which the perturbation parameter is the coupling rate divided by the transition frequency, the interaction operator for each field mode is Hermitian, and the Hamiltonian is given by Eq.(<ref>).
As in Section <ref> we consider a single field mode coupled to the ensemble. Denoting the Hermitian operator of the mode that couples to the emitters by Λ, we examine the subspaces defined by each of the eigenvectors of Λ. To this end we can write H_B (Eq.(<ref>)) as
ℋ = H_0 + G Λ + H_f
where H_0 and G are operators of the emitters. We define the eigenvalues and eigenstates of the mode operator Λ by Λ|λ⟩ = λ|λ⟩. For each subspace defined by the eigenvector |λ⟩ the Hamiltonian of the emitter is
ℋ_λ = H_0 + λ G = ∑_j H_0^(j) + λ∑_j G_j = ∑_j H_λ^(j)
where
H_λ^(j) = H_0^(j) + λ G_j
is a Hamiltonian for emitter j, with H_0^(j) and G_j the Hamiltonian and interaction operator for that emitter.
From Eqs.(<ref>) and (<ref>) it is clear that in each of the eigenspaces of the mode interaction operator, Λ, the perturbation problem breaks up into N entirely separate perturbation problems, one for each emitter. If the eigenvectors of each emitter are denoted by |m_λ^(j)⟩, then the tensor product of these eigenvectors,
|m_λ⟩≡∏_j ⊗|m_λ^(j)⟩
is a perturbed eigenvector of the ensemble. Not all eigenstates of the ensemble are of the above form. However, the tensor product of the ground state for each emitter is the ground state of the ensemble. Further, the perturbation expansion for each emitter is valid under the condition (see Eq.(<ref>))
λ≪Δ/√(⟨ n ⟩)
where Δ is the frequency scale of the emitter transitions, ⟨ n ⟩ is the mean number of photons in the field that couples at rate λ, and we are assuming that the matrix elements of G are order unity. Consequently the perturbative expansion for the ground state of the ensemble is also valid under this condition. Recall that if we perform the perturbative expansion in the symmetric subspace of the ensemble, as detailed in Sections <ref> and <ref>, then the condition for validity is
λ≪Δ/√(N ⟨ n ⟩) .
It is natural to assume that this latter relation is both necessary and sufficient for the perturbative expansion to be valid. But strictly speaking it is only a sufficient condition. It is possible, although unlikely, that the perturbative expansion happens to give the correct states even when the perturbative parameter is not small. Our analysis above shows that this is precisely what happens for the bare emitter/field interaction. For the ground state for each value of λ the perturbation expansion is not confined to the regime of Eq.(<ref>); it is valid whenever the condition in Eq.(<ref>) is satisfied. As a result, nonlinearities generated by ensembles in the “bare coupling" regime scale as N and such ensembles can generate giant nonlinearities.
§.§ Bound on the emitter/field coupling for the RWA regime
In the section above we were able to show that for the “bare-coupling" regime the perturbative expansion remained valid so long as it was valid for each emitter independently, despite the fact that the coupling between the ensemble ground state and the first symmetric excited states was no longer perturbative. We now turn to the question of whether this is true for the RWA regime. Unfortunately the argument that we used for the bare-coupling regime does not hold for the RWA regime. Since we have not found a way to answer this question analytically we resort to a numerical calculation.
We use as our example the simplest ensemble, that of N two-level emitters that we analyzed for the case of the RWA regime in Section <ref>. For this ensemble we calculate the eigenvalues and eigenvectors for the ground-state subspace for both the RWA regime and the bare-coupling regime. We also calculate the eigenvalues of the effective Hamiltonians for the field that are generated in each of these regimes, but only to fourth order. As the coupling strength is increased, the difference between the 4^th-order Hamiltonian and the actual effective Hamiltonian that is generated, determined by the eigenvalues in the ground-state subspace, will increase. If it is the coupling between the ensemble ground-state and the symmetric states that determines the perturbative regime, then this error will increase with N, but if it is only the emitter/field coupling for single emitters that determines this regime, as we have shown is the case for bare-coupling, then this error will not change with N. We measure the error as the sum of the absolute value of the fractional difference between the eigenvalues of the exact effective Hamiltonian and the effective Hamiltonian calculated to fourth order.
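A minimal sketch of such a comparison, for the RWA case in the symmetric subspace (ħ = 1; the parameter values are ours and purely illustrative), exploits the conservation of the total excitation number: each block containing |0⟩|m⟩ can be diagonalized exactly and its lowest eigenvalue compared with the fourth-order expression derived above.

import numpy as np

def exact_energy(N, m, lam, Delta):
    """Lowest eigenvalue of the block of H0 + lam*V containing |0>|m photons>."""
    d = min(N, m) + 1                           # basis |n excitations>|m-n photons>
    H = np.zeros((d, d))
    for n in range(d):
        H[n, n] = Delta * n
        if n + 1 < d:
            H[n, n + 1] = H[n + 1, n] = lam * np.sqrt((N - n) * (n + 1)) * np.sqrt(m - n)
    return np.linalg.eigvalsh(H)[0]

def fourth_order_energy(N, m, lam, Delta):
    """<m|E^(0)|m> built from the expansion coefficients derived above."""
    return (-N * lam**2 / Delta) * m \
        + (N * lam**4 / Delta**3) * m**2 \
        + (N * (N - 1) * lam**4 / Delta**3) * m

Delta, lam, m_max = 1.0, 0.02, 5
for N in (1, 10):
    err = sum(abs((fourth_order_energy(N, m, lam, Delta) - exact_energy(N, m, lam, Delta))
                  / exact_energy(N, m, lam, Delta)) for m in range(1, m_max + 1))
    print(f"N = {N:2d}: summed fractional error = {err:.2e}")
# In the RWA regime the error grows with N (the expansion parameter is the
# collective coupling), in contrast to the bare-coupling case.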
In Fig. <ref>a we plot the above-defined error as a function of the coupling rate, λ, for the bare coupling regime, and for two values of N, N = 1 and N=10. We see that both plots are identical, confirming our proof in Section <ref>. In Fig. <ref>b we plot the same error quantities for the RWA regime. We see that the RWA regime behaves quite differently to the bare coupling regime, with the error increasing when we go from N=1 to N=10. We can estimate this increase by examining the next term in the expansion for the effective Hamiltonian after 4^th order. Since this term is 6^th order in λ and proportional to N, the increase in the error with N should be approximately that resulting from increasing λ by a factor of N^1/6. The error for the latter is plotted as the green curve in Fig. <ref>. We can conclude that for the RWA regime the bound on λ is determined by the collective coupling, and is given by Eq.(<ref>).
Finally, it is important to note that even when the perturbation expansion is governed by the collective interaction, not all the coupling rates between the field and the emitter transitions have a bound that depends on N. It is only the coupling rates for transitions that involve the ground state that are restricted in this way. This means, for example, that in the Schmidt-Imamoḡlu scheme, the coupling rate λ is bounded as (see Eq.(<ref>))
λ≪Ω/√(N ⟨ a^† a ⟩)
but ν is bounded only as
ν≪Δ/√(⟨ b^† b ⟩) .
This fact impacts the scaling of the effective nonlinearity, as will be seen in the next section.
§.§ Scaling of the Kerr nonlinearity in the RWA regime: two-level systems vs. the Schmidt-Imamoḡlu 4-level scheme
We have seen that in the RWA regime, the bounds on all coupling rates for transitions that involve the ground state scale as 1/√(N), where N is the size of the ensemble. The bounds on the coupling rates for all other transitions, however, are essentially unaffected by the size of the ensemble. We can now substitute the bounds for the coupling rates into the expressions for the sizes of the induced nonlinearities to see how the resulting bounds on the nonlinearities scale with the number of emitters. For the ensemble of two-level emitters the bound on the self-Kerr nonlinearity is
κ_s = N λ^4 /Δ^3 ≪Δ/N ⟨ a^† a ⟩^2 ,
where we have used Eqs. (<ref>) and (<ref>).
For an ensemble of 4-level emitters employing the Schmidt-Imamoḡlu scheme, we first calculate the term that gives the cross-Kerr nonlinear Hamiltonian for an ensemble, which is
λ^2 ν^2 𝔼^(0)_22 = ∑_p,q,r
q ≠ p,r𝒱_0r𝒳_rq𝒳_qp𝒱_p0/Δ_rΔ_qΔ_p
= ∑_j=1,2
k=1,2𝒱_0(1_k)𝒳_(1_k)(1_3)𝒳_(1_3)(1_j)𝒱_(1_j)0/Δ_1_3Δ_1_kΔ_1_j
= -N ħλ^2 ν^2 /ΔΩ^2 a^† a b^† b .
The bound on the rate of the cross-Kerr nonlinearity is then
κ = N λ^2 ν^2 /ΔΩ^2≪Δ/⟨ a^† a ⟩⟨ b^† b ⟩ ,
where we have used Eqs. (<ref>) and (<ref>). We see that the scaling of the maximal size of an effective Kerr nonlinearity depends on the nature of the ensemble. For the Schmidt-Imamoḡlu scheme the bound on the cross-Kerr nonlinearity is independent of N, meaning that ensembles can generate the maximal strength of nonlinearity that can be produced by a single emitter. The maximal self-Kerr nonlinearity that can be generated by an ensemble of two-level systems, however, decreases with the size of the ensemble. Thus two-level systems do become more linear as their number increases so long as we are in the deep quantum regime. Driven multi-level ensembles need not, as the Schmidt-Imamoḡlu scheme shows.
§.§ The RWA regime: emergence of giant nonlinearities for semiclassical fields
We saw in Section <ref> that in the bare coupling regime ensembles can generate nonlinearities that scale linearly with the number of emitters. The reason for this was that the operator by which each field mode couples to the system is Hermitian. This is not the case in general for the RWA regime. We now consider what happens when a field mode is in a coherent state with large photon number, and in particular the action of the creation operator on that state. Defining |α⟩ as a coherent state with amplitude α, we have
⟨α| (a - α) (a^† - α^*) |α⟩ = 1 .
This implies that
a^†|α⟩ = α^* |α⟩ + |α⟩_⊥ = α^* ( |α⟩ + |α⟩_⊥/α^*) .
where the state |α⟩_⊥ is normalized. (It is also orthogonal to |α⟩.) Thus
lim_|α| →∞ a^†|α⟩ = α^*|α⟩ .
The RWA interaction between a field mode with annihilation operator a and the emitter transition j↔ k is
H_jk = ħλ( a σ^† + a^†σ) .
For large α the action of this interaction on |α⟩ is
H_jk|α⟩≈ħλ |α| ( σ^† e^-iθ + σ e^iθ) |α⟩ .
where θ is the phase of α. This interaction thus acts like it is the product of a Hermitian operator for the mode (whose eigenstates are |α⟩ for |α| ≫ 1) and the Hermitian operator σ_θ = σ^† e^-iθ + σ e^iθ for the emitter. The interaction Hamiltonian for the ensemble now has the same structure as that of the bare coupling regime, and by the analysis in Section <ref> the nonlinearities generated by ensembles in the RWA regime, and thus EIT, scale with the size of the ensemble in the semiclassical regime.
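The rate at which this limit is approached is easy to quantify (an illustrative aside; it does not by itself determine how many photons a particular scheme needs to reach the semiclassical regime):

import numpy as np

# From the relations above, a†|alpha> = alpha*|alpha> + |alpha>_perp with the
# orthogonal piece of unit norm, so the fraction of a†|alpha> lying along
# |alpha> is |alpha| / sqrt(|alpha|^2 + 1).
for n_bar in (1, 5, 25, 100, 10000):
    alpha = np.sqrt(n_bar)
    print(f"<n> = {n_bar:6d}: overlap fraction = {alpha / np.sqrt(alpha**2 + 1):.6f}")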
§ CALCULATING NONLINEARITIES FOR TRAVELING-WAVE FIELDS
So far we have written the nonlinearities generated by emitters in terms of the coupling rates between the emitters and field modes, assumed to be discrete modes of a cavity. But we often wish to consider the generation of nonlinearities for traveling-wave fields such as laser beams or pulses. For this we need to be able to calculate the Rabi frequency from the laser power (for a continuous beam) or the number of photons in a laser pulse. Recall that the Rabi frequency is required to determine whether the emitter/field system is in the perturbative regime.
The coupling energy (matrix element) between two emitter energy levels j and k is given by μ_jk E where μ_jk is the dipole matrix element between the two levels and E is the electric field amplitude at the location of the emitter. The Rabi frequency (the coupling rate) for the transition is this coupling energy divided by ħ:
Ω_jk = μ_jk E/ħ .
The intensity of an electromagnetic plane wave with amplitude E is I = ε_0 c |E|^2/2. The Rabi frequency induced by a laser beam with power P and cross sectional area A is therefore
Ω_jk = μ_jk/ħ√(2P/ε_0 c A).
For a narrow-band pulse of length L containing n_p photons the average power is P = ħω n_p c/L and so the average Rabi frequency is
Ω_jk = μ_jk/ħ√(2ħω n_p/ε_0 L A) = μ_jk√(2 ω n_p/ε_0 ħ V)
where V = LA is the pulse volume.
The single-emitter coupling rate, g, is then
g_jk = Ω_jk/√(n_p) = μ_jk√(2 ω/ε_0 ħ V)
§.§ Nonlinear susceptibility
So far, we have characterized the nonlinearities generated by ensembles as terms in a Hamiltonian for one or more cavity modes. For situations in which the fields are traveling waves in free space, the nonlinearities are instead often quoted as nonlinear susceptibilities and are written in terms of the number density of emitters. It is simple to translate between the rate constant that multiplies a nonlinear term in a Hamiltonian for cavity modes and the resulting nonlinear susceptibility. To do so you first replace each of the emitter/field interaction rates that we use in our analysis (λ, ν, etc.) with the absolute value of the dipole moment of the corresponding emitter transition, divided by ħ. Thus if mode a interacts with transition j ↔ k with rate λ, then you replace
λ→|μ_jk| /ħ ,
where μ_jk is the dipole moment for the transition j ↔ k. You then multiply the whole expression for the nonlinear rate constant by ħ/(ε_0 V) where V is the mode volume. In fact, as we have shown above, the resulting nonlinear susceptibility will only be exactly proportional to n_d, the number of emitters per unit volume, for nonlinearities generated using the “bare-coupling” regime or in the semi-classical limit for the RWA regime, because it is only in those cases that the nonlinearities are exactly proportional to the number of emitters. In other cases, there are corrections at higher orders in the perturbation parameter(s).
As an example, converting the rate constant for the cross-Kerr nonlinear Hamiltonian generated by the Schmidt-Imamoḡlu scheme, given in Eq. (<ref>), gives the nonlinear susceptibility
χ^(3) = n_d |μ_12|^2 |μ_34|^2 /ε_0 ħ^3 ΔΩ^2 ,
in which n_d = N/V is the density of emitters.
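A minimal sketch of this conversion (SI units; every numerical value below is hypothetical, chosen only to illustrate the formula):

import numpy as np

hbar, eps0 = 1.0546e-34, 8.854e-12

def chi3_cross_kerr(mu12, mu34, Delta, Omega, n_d):
    """chi^(3) of the equation above: n_d |mu12|^2 |mu34|^2 / (eps0 hbar^3 Delta Omega^2)."""
    return n_d * abs(mu12)**2 * abs(mu34)**2 / (eps0 * hbar**3 * Delta * Omega**2)

# hypothetical values: dipole moments ~ 1e-29 C m, Delta/2pi = 1 GHz,
# Omega/2pi = 100 MHz, emitter density 1e19 m^-3
chi3 = chi3_cross_kerr(1e-29, 1e-29, 2 * np.pi * 1e9, 2 * np.pi * 1e8, 1e19)
print(f"chi^(3) ~ {chi3:.1e} m^2/V^2")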
Nonlinear susceptibilities are defined in terms of the equation of motion for the electric field. This equation is <cit.>
∇×∇× E + 1/c^2∂^2 E/∂ t^2
= -1/ε_0 c^2∂^2 P/∂ t^2
where the (nonlinear) polarization is
P = ε_0 ∑_m χ^(m) E^m .
Here the constant χ^(m) is the m^th-order nonlinear susceptibility. To determine χ^(m) from the nonlinear Hamiltonian one simply uses the Hamiltonian to derive Eq.(<ref>).
From Eqs.(<ref>) and (<ref>) a material whose only susceptibility is χ^(3) has a refractive index of
n_r = √(1 + χ^(3) |E|^2) .
For a self-Kerr nonlinearity E is the amplitude of the field experiencing the refractive index, whereas for a cross-Kerr nonlinearity E is the amplitude of a travelling wave with a different direction and/or frequency than that experiencing the refractive index.
§ POTENTIAL OF ENSEMBLES TO GENERATE GIANT NONLINEARITIES
We have seen that for ensembles in the regime of the rotating-wave approximation, and outside the semi-classical regime, the limitation on the coupling rate, λ, between the ground state and any excited states limits the strength of any nonlinearities to those that can be generated by a single emitter with the maximum allowed value for λ. Since the expressions for the size of the nonlinearities for a single emitter are always equal to one of the coupling rates multiplied by one or more perturbation parameters, these nonlinearities can never be larger than the coupling rate. One can only obtain strong coupling rates by using tightly confined cavities, and even then the nonlinear rate will be at least an order of magnitude smaller than the coupling rate. The only way to break this limit on the size of the nonlinearities is to take advantage of the scaling with the ensemble size, N, which requires either working in the semi-classical regime or the bare coupling regime.
Note that we have not determined the amplitude required by a field or fields to place the system in the semi-classical regime for the purposes of generating effective nonlinearities. Quite different behavior will be accessible for very weak fields if only 5 photons are required as opposed to 10^5 or 10^15. We will not explore this question further here but it is certainly an important topic for future work.
We now compare our theoretical results to an experiment by Venkataraman, Saha, and Gaeta (VSG) in which they realize a scheme for generating a cross-Kerr nonlinearity using a three-level emitter <cit.>. They obtain especially strong coupling between an ensemble of rubidium atoms and two narrow-band laser beams (one a pulse and the other continuous) by confining a rubidium vapor inside a hollow optical fiber. The beams propagate down the hollow core of the fiber, which has a diameter of 6 μm. The length of the pulse is L = 1.5 m. The density of rubidium atoms is n_d = 2× 10^19 m^-3, so that N = 8.5× 10^8 atoms interact with the pulse. The pulse, which couples the ground state to the first excited state, contains 20 photons, giving it an average power of 1 nW. The power of the continuous beam is P = 10 μW. The phase shift imparted to the continuous beam by the pulse via the cross-Kerr nonlinearity can be determined from the refractive index, which is calculated from the cross-Kerr nonlinear susceptibility, χ^(3), using Eq.(<ref>). In <cit.> the authors find good agreement between their measured phase shift and the cross-Kerr susceptibility calculated using the standard semi-classical analysis. Calculating the Rabi frequency induced by the 20-photon pulse for a single emitter (Eq.(<ref>)) we obtain Ω = 8.9 MHz. The Rabi frequency generated by the collective coupling, on the other hand, is √(N)Ω = 260 GHz. Since the detuning between the transitions and the fields is 700 MHz, the single-emitter coupling is in the perturbative regime, but the collective coupling is not. Thus the agreement between the experimental results and the semi-classical theory, achieving a nonlinearity enhanced by a factor of N, implies that as few as 20 photons is sufficient, at least on a timescale of L/c = 5 ns, for the system to reach the semi-classical limit.
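The quoted numbers follow from Eq.(<ref>) with a short calculation; in the sketch below the wavelength (780 nm) and the dipole moment (3.58 × 10^-29 C·m, a typical rubidium D-line value) are our assumptions, since they are not stated above:

import numpy as np

hbar, eps0, c = 1.0546e-34, 8.854e-12, 2.998e8
wavelength, mu = 780e-9, 3.58e-29          # assumed Rb values
omega = 2 * np.pi * c / wavelength

radius, L = 3e-6, 1.5                      # 6 um core diameter, 1.5 m pulse length
V = np.pi * radius**2 * L                  # pulse volume
n_d, n_p = 2e19, 20                        # atom density (m^-3) and photons in the pulse

N = n_d * V
Omega = mu * np.sqrt(2 * omega * n_p / (eps0 * hbar * V))   # single-emitter Rabi frequency
print(f"N                 = {N:.2e}")                                    # ~ 8.5e8 atoms
print(f"Omega/2pi         = {Omega / (2 * np.pi) / 1e6:.1f} MHz")        # ~ 8.9 MHz
print(f"sqrt(N)*Omega/2pi = {np.sqrt(N) * Omega / (2 * np.pi) / 1e9:.0f} GHz")  # ~ 260 GHz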
§ CONCLUSION
Optical nonlinearities are essential for many applications, including quantum information processing using photonic qubits. Understanding the scaling of nonlinearities with the number of emitters, N, is essential for constructing schemes that harness these nonlinearities. To this end, we introduced a method for calculating the optical nonlinearities generated by driven multi-level emitters and ensembles of independent emitters. We have shown that the scaling of these nonlinearities with N is remarkably subtle: different regimes and different level structures lead to different scaling behavior, which in turn determines the size of the nonlinearities that can be generated.
In particular, in the rotating-wave approximation (RWA) regime, the size of the nonlinearities scales with N when the fields are in the semi-classical regime, but is limited for single-photon fields due to the effect of collective coupling. This result implies that the ability of ensembles to generate "giant" nonlinearities for very weak fields will depend on the field strength at which the field/ensemble system makes the transition to the semi-classical regime. This question is an interesting topic for future work.
Outside the RWA regime, ensembles can generate nonlinearities for single-photon fields that scale as N. We suggest that this prediction be tested using, for example, Rydberg atoms <cit.>, solid-state cavity-QED systems <cit.> or superconducting circuits <cit.>.
The theory that we have developed here should allow a much more detailed understanding than previously possible of the size of the nonlinearities that can be generated by small and large numbers of emitters. This understanding can be expected to facilitate the manipulation of nonlinearities for quantum and classical technologies in a wide range of settings.
§ OPERATORS DEFINED BY INNER PRODUCTS OF SUBSYSTEM “STATES"
When considering a system consisting of two subsystems it is common to employ the state of one of the systems as a projector onto a subspace. Specifically, let us denote a set of basis states for subsystem A by |k⟩_A, k = 0,1,…, K, and a set for subsystem B by |l⟩_B, l = 0,1,…,L. If subsystem A is in state |n⟩_A this defines a subspace consisting of all states of the form |n⟩_A ⊗|l⟩_B for l = 0,1,…,L. The projector onto this subspace is
P_n = |n⟩_A ⟨n|_A ⊗ I_B
where I_B is the identity operator for system B.
Consider an operator C of the joint system,
C = ∑_kk'∑_ll' c_kl,k'l'|k⟩_A ⟨k'|_A ⊗|l⟩_B ⟨l'|_B
If we apply the projector P_n to C we get
P_n C P_n = |n⟩_A ⟨n|_A ⊗⟨n|_A C |n⟩_A
where
⟨n|_A C |n⟩_A = ∑_ll' c_nl,nl'|l⟩_B ⟨l'|_B
is an operator that acts only in the space of B. While in terms of matrix operations the expression ⟨n|_A C |n⟩_A is not well-defined, its meaning is clear in selecting out a sub-matrix of C. If we are only interested in the action of that subblock on B (rather than the state of A), then we can discard the projector |n⟩_A ⟨n|_A on the right hand side of Eq.(<ref>) above to leave us with ⟨n|_A C |n⟩_A. This is what we do to arrive at Eq.(<ref>). Note that the state |0⟩ in Eq.(<ref>) represents a subspace although it is not purely a state of the emitter. It is given by the perturbation series in terms of the states of the emitter and operators on the mode(s). Using the perturbation series for |0⟩ we can write the projector |0⟩⟨0| explicitly in terms of emitter-state projectors and field operators.
§ SOME ELEMENTS OF THE TIPT EIGENVECTOR EXPANSION
Up to third order for two fields we have
⟨ l_0 | n_1⟩ = V_ln/Δ_nl , l ≠ n
⟨ l_0 | n_2⟩ = ∑_q≠l,nV_lqV_qn/Δ_nlΔ_nq , l ≠ n
⟨ l_0 | n_3⟩ = ∑_q≠l,n∑_k≠q,nV_lq V_qkV_kn/Δ_nlΔ_nqΔ_nk - 1/2∑_q≠nV_lnV_nq V_qn/Δ_nlΔ_nq^2
- ∑_q≠ n V_ln V_nq V_qn/Δ_nqΔ_nl^2 , l ≠ n
⟨ l_0 | n_11⟩ = ∑_q≠l,nV_lq X_qn + X_lq V_qn/Δ_nlΔ_nq , l ≠ n
⟨ l_0 | n_21⟩ = ∑_q≠l,n∑_p≠q,n𝖠𝖫𝖫𝖯 V_lqV_qp X_pn/Δ_nlΔ_nqΔ_np
- ∑_q≠n𝖠𝖫𝖫𝖯 V_ln V_nq X_qn/Δ_nlΔ_nq( 1/Δ_nl + 1/ 2Δ_nq) , l ≠ n
and for the special case of l=n these are
⟨ n_0 | n_2⟩ = - 1/2∑_l≠ nV_nlV_ln/Δ_nl^2
⟨ n_0 | n_3 ⟩ = - 1/2∑_k,q≠n V_nkV_kqV_qn/Δ_nkΔ_nq( 1/Δ_nk + 1/Δ_nq)
⟨ n_0 | n_11⟩ = - 1/2∑_k≠ n V_nkX_kn + X_nkV_kn/Δ_nk^2
⟨ n_0 | n_21⟩ = - 1/2∑_l,q≠n^N
𝖠𝖫𝖫𝖯 V_nl V_lq X_qn/Δ_nlΔ_nq( 1/Δ_nl + 1/Δ_nq)
[Langford et al.(2011)] N. K. Langford, S. Ramelow, R. Prevedel, W. J. Munro, G. J. Milburn, and A. Zeilinger, Efficient quantum computing using coherent photon conversion, Nature 478, 360 (2011).
[Brod and Combes(2016)] D. J. Brod and J. Combes, Passive CPHASE gate via cross-Kerr nonlinearities, Phys. Rev. Lett. 117, 080502 (2016).
[Niu et al.(2018a)] M. Y. Niu, I. L. Chuang, and J. H. Shapiro, Qudit-basis universal quantum computation using χ^(2) interactions, Phys. Rev. Lett. 120, 160502 (2018).
[Niu et al.(2018b)] M. Y. Niu, I. L. Chuang, and J. H. Shapiro, Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators, Phys. Rev. A 97, 032323 (2018).
[Heuck et al.(2020)] M. Heuck, K. Jacobs, and D. R. Englund, Controlled-phase gate using dynamically coupled cavities and optical nonlinearities, Phys. Rev. Lett. 124, 160501 (2020).
[Li et al.(2020)] M. Li, Y.-L. Zhang, H. X. Tang, C.-H. Dong, G.-C. Guo, and C.-L. Zou, Photon-photon quantum phase gate in a photonic molecule with χ^(2) nonlinearity, Phys. Rev. Applied 13, 044013 (2020).
[Krastanov et al.(2021)] S. Krastanov, M. Heuck, J. H. Shapiro, P. Narang, D. R. Englund, and K. Jacobs, Room-temperature photonic logical qubits via second-order nonlinearities, Nature Commun. 12, 191 (2021).
[Wang and Terhal(2021)] Y. Wang and B. M. Terhal, Preparing Dicke states in a spin ensemble using phase estimation, Phys. Rev. A 104, 032407 (2021).
[Johnsson et al.(2020)] M. T. Johnsson, N. R. Mukty, D. Burgarth, T. Volz, and G. K. Brennen, Geometric pathway to scalable quantum sensing, Phys. Rev. Lett. 125, 190403 (2020).
[Zhou et al.(2018)] S. Zhou, M. Zhang, J. Preskill, and L. Jiang, Achieving the Heisenberg limit in quantum metrology using quantum error correction, Nature Commun. 9, 78 (2018).
[Zhang and Duan(2014)] Z. Zhang and L. M. Duan, Quantum metrology with Dicke squeezed states, New J. Phys. 16, 103037 (2014).
[Bloembergen and Shen(1964)] N. Bloembergen and Y. R. Shen, Quantum-theoretical comparison of nonlinear susceptibilities in parametric media, lasers, and Raman lasers, Phys. Rev. 133, A37 (1964).
[Oudar and Shen(1980)] J.-L. Oudar and Y. R. Shen, Nonlinear spectroscopy by multiresonant four-wave mixing, Phys. Rev. A 22, 1141 (1980).
[Boyd et al.(1981)] R. W. Boyd, M. G. Raymer, P. Narum, and D. J. Harter, Four-wave parametric interactions in a strongly driven two-level system, Phys. Rev. A 24, 411 (1981).
[Szymanowski and Keitel(1994)] C. Szymanowski and C. H. Keitel, Enhancing the index of refraction under convenient conditions, J. Phys. B: At. Mol. Opt. Phys. 27, 5795 (1994).
[Harris et al.(1990)] S. E. Harris, J. E. Field, and A. Imamoğlu, Nonlinear optical processes using electromagnetically induced transparency, Phys. Rev. Lett. 64, 1107 (1990).
[Fleischhauer et al.(2005)] M. Fleischhauer, A. Imamoglu, and J. P. Marangos, Electromagnetically induced transparency: Optics in coherent media, Rev. Mod. Phys. 77, 633 (2005).
[Imamoğlu et al.(1997)] A. Imamoğlu, H. Schmidt, G. Woods, and M. Deutsch, Strongly interacting photons in a nonlinear cavity, Phys. Rev. Lett. 79, 1467 (1997).
[Grangier et al.(1998)] P. Grangier, D. F. Walls, and K. M. Gheri, Comment on "Strongly interacting photons in a nonlinear cavity", Phys. Rev. Lett. 81, 2833 (1998).
[Gheri et al.(1999)] K. M. Gheri, W. Alge, and P. Grangier, Quantum analysis of the photonic blockade mechanism, Phys. Rev. A 60, R2673 (1999).
[Werner and Imamoğlu(1999)] M. J. Werner and A. Imamoğlu, Photon-photon interactions in cavity electromagnetically induced transparency, Phys. Rev. A 61, 011801 (1999).
[Greentree et al.(2000)] A. D. Greentree, J. A. Vaccaro, S. R. de Echaniz, A. V. Durrant, and J. P. Marangos, Prospects for photon blockade in four-level systems in the N configuration with more than one atom, J. Opt. B: Quantum Semiclass. Opt. 2, 252 (2000).
[Hartmann et al.(2006)] M. J. Hartmann, F. G. Brandao, and M. B. Plenio, Strongly interacting polaritons in coupled arrays of cavities, Nature Phys. 2, 849 (2006).
[Abadie et al.(2011)] J. Abadie et al. (LIGO Scientific Collaboration), A gravitational-wave observatory operating beyond the quantum shot-noise limit, Nature Phys. 7, 962 (2011).
[Venkataraman et al.(2013)] V. Venkataraman, K. Saha, and A. L. Gaeta, Phase modulation at the few-photon level for weak-nonlinearity-based quantum computing, Nature Photonics 7, 138 (2013).
[Trivedi et al.(2019)] R. Trivedi, M. Radulaski, K. A. Fischer, S. Fan, and J. Vučković, Photon blockade in weakly driven cavity quantum electrodynamics systems with many emitters, Phys. Rev. Lett. 122, 243602 (2019).
[Cohen-Tannoudji et al.(1989)] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Photons and Atoms: Introduction to Quantum Electrodynamics (Wiley-VCH, New York, 1989).
[Jacobs(2014)] K. Jacobs, Quantum Measurement Theory and its Applications (Cambridge University Press, Cambridge, 2014).
[Parusiński and Rond(2020)] A. Parusiński and G. Rond, Multiparameter perturbation theory of matrices and linear operators, Trans. Amer. Math. Soc. 373, 2933 (2020).
[McCauley et al.(2020)] G. McCauley, B. Cruikshank, D. I. Bondar, and K. Jacobs, Accurate Lindblad-form master equation for weakly damped quantum systems across all regimes, npj Quantum Inf. 6, 74 (2020).
[Schmidt and Imamoglu(1996)] H. Schmidt and A. Imamoglu, Giant Kerr nonlinearities obtained by electromagnetically induced transparency, Opt. Lett. 21, 1936 (1996).
[Imoto et al.(1985)] N. Imoto, H. A. Haus, and Y. Yamamoto, Quantum nondemolition measurement of the photon number via the optical Kerr effect, Phys. Rev. A 32, 2287 (1985).
[Balybin et al.(2022)] S. N. Balybin, A. B. Matsko, F. Y. Khalili, D. V. Strekalov, V. S. Ilchenko, A. A. Savchenkov, N. M. Lebedev, and I. A. Bilenko, Quantum nondemolition measurements of photon number in monolithic microcavities, Phys. Rev. A 106, 013720 (2022).
[Hartmann and Plenio(2007)] M. J. Hartmann and M. B. Plenio, Strong photon non-linearities and photonic Mott insulators, Phys. Rev. Lett. 99, 103601 (2007).
2007)NoStop
[Boyd(2008)]Boyd08
author author R. Boyd, @noop title Nonlinear Optics, edition 3rd ed. (publisher Academic Press, address New York, year 2008)NoStop
[Graham et al.(2019)Graham,
Kwon, Grinkemeyer, Marra,
Jiang, Lichtman, Sun,
Ebert, and Saffman]Graham19
author author T. M. Graham, author M. Kwon,
author B. Grinkemeyer, author Z. Marra, author
X. Jiang, author M. T. Lichtman, author Y. Sun, author M. Ebert, and author M. Saffman, title title Rydberg-mediated
entanglement in a two-dimensional neutral atom qubit array, https://doi.org/10.1103/PhysRevLett.123.230501 journal
journal Phys. Rev. Lett. volume 123, pages 230501 (year 2019)NoStop
[Kim et al.(2017)Kim,
Aghaeimeibodi, Richardson, Leavitt, Englund, and Waks]Waks17
author author J.-H. Kim, author S. Aghaeimeibodi,
author C. J. K. Richardson,
author R. P. Leavitt, author D. Englund, and author
E. Waks, title title Hybrid integration of solid-state quantum emitters on a silicon
photonic chip, https://doi.org/10.1021/acs.nanolett.7b03220
journal journal Nano Letters volume 17, pages 7394 (year
2017)NoStop
[Fahey et al.(2023)Fahey,
Jacobs, Turner, Choi,
Hoffman, Englund, and Trusheim]Fahey23
author author D. P. Fahey, author K. Jacobs,
author M. J. Turner, author H. Choi, author
J. E. Hoffman, author
D. Englund, and author
M. E. Trusheim, title
title Steady-state microwave mode cooling with a diamond nv
ensemble, @noop journal journal Phys.
Rev. Applied (in press)
(year 2023)NoStop
[Zhong and Goldner(2019)]Zhong19
author author T. Zhong and author P. Goldner, title title Emerging rare-earth doped
material platforms for quantum nanophotonics, https://doi.org/doi:10.1515/nanoph-2019-0185 journal
journal Nanophotonics volume 8, pages 2003 (year 2019)NoStop
[Blais et al.(2020)Blais,
Girvin, and Oliver]Blais20
author author A. Blais, author S. M. Girvin, and author W. D. Oliver, title title Quantum information processing and
quantum optics with circuit quantum electrodynamics, @noop
journal journal Nature Physics volume 16, pages 247 (year
2020)NoStop
|
http://arxiv.org/abs/2307.00951v2
|
20230703114953
|
A Cross-Chain Query Language for Application-Level Interoperability Between Open and Permissionless Blockchains
|
[
"Felix Härer"
] |
cs.DC
|
[
"cs.DC",
"cs.DB",
"cs.PL",
"C.2.4; D.3.2; E.2; H.2.3"
] |
Cross-Chain Query Language for Open and Permissionless Blockchains
F. Härer
Digitalization and Information Systems Group, University of Fribourg, Switzerland
[email protected]
<https://www.unifr.ch/inf/digits/>
A Cross-Chain Query Language for Application-Level Interoperability Between Open and Permissionless Blockchains
Felix Härer (ORCID: 0000-0002-2768-2342)
August 1, 2023
===============================================================================================================
Open and permissionless blockchains are distributed systems with thousands to tens of thousands of nodes, establishing novel platforms for decentralized applications. When realizing such an application, data might be stored and retrieved from one or more blockchains by distributed network nodes without relying on centralized coordination and trusted third parties. Data access could be provided through a query language such as SQL at the application level, establishing a unified view on application-level data that is verifiably stored. However, when accessing multiple blockchains through their node software and APIs, interoperability cannot be assumed today, resulting in challenges of inhomogeneous data access. In addition, different feature sets and trade-offs exist, e.g., regarding smart contract functionality, availability, distribution, scalability, and security. For increasing interoperability, the paper at hand suggests pursuing the development of a cross-chain query language at the application level. The language abstracts from implementation by providing a standardized syntax, an integrated data model, and a processing architecture for data queries. This research is an extended and updated paper demonstrating the language syntax, data model, and architecture with an evaluation of compatibility against the largest open and permissionless blockchains today.
§ INTRODUCTION
As of June 2023, a variety of openly accessible blockchains exist with a significant number of active participants. When considering blockchains with at least 1000 daily active addresses, an estimation counts 18 blockchains operating as open platforms for smart contracts or cryptocurrency[<https://www.tradingview.com/markets/cryptocurrencies/prices-most-addresses-active/>, 2023-06-30]. In principle, these platforms can be used for data storage by any business or personal application without centralized coordination and trusted third parties <cit.>. Permissionless and verifiable storage are based on algorithmic consensus in contrast to databases and related technologies. In particular, the systems consist of distributed network nodes joining and operating the network at will while any node is able to verify transactions in the blockchain data structure. Decentralized applications are enabled in this way, primarily for programmable money and contracts.
These systems with the components blockchain data, network, and consensus protocol can be considered open and permissionless blockchains (OPB), enabling novel decentralized applications such as programmable money or contracts. Contrary to the distributed systems prevalent in previous decades, well-known OPB now involve the coordinated efforts of thousands to tens of thousands of nodes, forming open and permissionless infrastructures.
Based on the connectivity of active participants, it is estimated that approximately 16,600 nodes are operating Bitcoin[<https://bitnodes.io/>, 2023-06-30], 7,600 are operating Ethereum[<https://ethernodes.org/>, 2023-06-30], and 3,000 are operating Cardano[<https://adastat.net/pools/>, 2023-06-30]. These estimations might not take into account potentially uncounted nodes hidden due to specific configurations, e.g. located behind routers and firewalls. As adoption grows, alongside the increasing number of open and permissionless blockchains, as well as the vast quantities of readily available data, this paper posits the future significance of these platforms for verifiable data storage and execution. Applications interfacing with these platforms encompass various uses, including payments and currency, e-commerce, timestamping, and the attestation of data and web links <cit.>.
This research offers an extended and updated study of existing work <cit.> with the following research problem, objective, and contribution.
Research Problem. Software accessing data across open and permissionless blockchains (OPB) today face challenges due to interoperability:
* Inhomogeneous access to data due to various OPB implementations.
* Different OPB data models and features exist.
* Different OPB trade-offs exist, notably regarding scalability, security, and decentralization.
Research Objective and Contribution. The objective of this research is to study the three challenges hindering enhanced interoperability among OPB. The paper contributes a cross-chain query language, established by defining an integrated data model, a grammar and concrete syntax, and a processing architecture. In response to query statements submitted by software applications, data from various blockchain nodes is gathered, integrated into the data model, and processed in accordance with the statements. Considering previously suggested conceptual models and query languages, e.g. <cit.> and <cit.>, the language design abstracts from implementation of today's largest OPB. The proof-of-concept implementation demonstrates feasibility and compatibility, but also indicates potential for software to incorporate OPB as integral components of their architecture.
Application Example.
Consider a scenario where numerous e-commerce websites participate in shared loyalty programs, issuing reward points for customer purchases. This model is not uncommon among collaboratively operating airlines[See e.g. <https://www.miles-and-more.com/ch/en.html>, 2023-06-30], among other industries. Given a cross-chain query language, business-level applications across different airlines could access data in a standardized way, re-use queries in their software components, view data on multiple blockchains, integrate and migrate among blockchains, or exchange the underlying blockchains. This is especially advantageous for decentralized scenarios where centralized coordination is limited, e.g. in business networks of different companies relying on separate infrastructure and technology stacks, or generally in decentralized applications.
The paper is organized as follows. Section <ref> lays out background and related studies. Section <ref> discusses OPB, focusing on their properties essential for the derivation of an integrated data model. The data model, a grammar with a derived concrete language syntax, and a processing architecture follow. A demonstration of feasibility is provided in <ref> with a prototype implementation utilizing multiple OPB. The final section, Section <ref>, draws conclusions and provides an outlook.
§ BACKGROUND AND RELATED WORK
The section at hand introduces blockchain fundamentals, discusses open and permissionless blockchains, and existing interoperability approaches.
§.§ From Bitcoin to Blockchains
Following the posting of the Bitcoin whitepaper and corresponding software in 2008 and 2009, respectively <cit.>, the term 'blockchain' emerged as a general term encapsulating its technical architecture. The primary components: (1) a data structure of blocks, arranged in a backward-linked list or any graph, (2) a peer-to-peer network for data distribution, and (3) a consensus protocol, give rise to innovative properties. These notably include the ability to coordinate and validate all operations without trusted third parties or centralized control, open access to all data and operations, and permissionless access, whereby data and operations are not restricted to specific participants <cit.>. Ethereum and subsequent blockchains have enhanced these capabilities by incorporating smart contracts, acting as quasi Turing-complete programs <cit.>. Beyond payments and currency, smart contracts facilitate e-commerce, sales contracts, timestamping, and attestations, among other applications <cit.>.
§.§ Open and Permissionless Blockchains
The progressive development and adoption springing from Bitcoin and Ethereum have yielded OPB with diverse characteristics. Table <ref> catalogs five renowned OPB in the order of their public node count, highlighting the properties of their data structures, networks, and consensus protocols, as well as features pertinent to smart contracts.
Data Structure.
The original design of backward-linked blocks in Bitcoin is coupled with additional trees or graphs in most other OPB. Beyond transactional data from blocks, supplementary queries must be performed for non-transactional data or older data that has undergone pruning. For instance, separate tree structures are incorporated for state storage in Ethereum, where balances and smart contract variables can be accessed <cit.>.
Network.
Well-known OPB networks comprise approximately 1300 to 16000 nodes. With algorithmic operation and validation, an increased node count augments security, such as in mitigating the risks associated with 51% attacks and selfish mining <cit.>, which are frequently observed in smaller Proof-of-Work systems, e.g. 'Bitcoin Gold' <cit.>.
Consensus Protocol. Between 2008 and 2022, a shift in the initially created protocols can be observed, veering away from Proof-of-Work towards Proof-of-Stake, which introduces several trade-offs. While established blockchains such as Bitcoin and Ethereum have prioritized security and decentralization over the years, Cardano <cit.>, Avalanche <cit.>, and Solana <cit.> demonstrated enhancements in efficiency and scalability. This trend is mirrored in the development of novel consensus protocols based on Proof-of-Stake <cit.> with higher efficiency, advantages to environmental impact, enhanced security, and potentially higher distribution and scalability. For example, Ethereum realizes Proof-of-Stake through GASPER, a combination of the consensus algorithm Casper-FFG ("Casper the Friendly Finality Gadget") and LMD-GHOST (Latest Message Driven Greedy Heaviest Observed Sub-Tree)[<https://ethereum.org/en/developers/docs/>, 2023-06-30]. Based on Casper-FFG, blocks are proposed in slots of 12 seconds, part of 6.4-minute epochs of 32 slots, by the staking network nodes. The node proposing a block is randomly chosen while other nodes are organized in randomly formed subnets to carry out validations that are aggregated in attestations. Typically, a block is finalized within two epochs with improvements toward single-slot finality[<https://ethereum.org/de/roadmap/single-slot-finality/>, 2023-06-30]. In the case of chain splits, this design together with the fork choice rule has proven itself in practice, demonstrating improved efficiency, decentralization, and security[<https://offchain.medium.com/post-mortem-report-ethereum-mainnet-finality-05-11-2023-95e271dfd8b2>, 2023-06-30]. Other blockchains focus especially on scalability, e.g. Solana. However, temporary protocol failures can be observed frequently, resulting in non-availability <cit.>.
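As a rough, back-of-envelope illustration of these timing figures (a sketch based only on the numbers quoted above, not on the protocol specification), the epoch length and the typical time to finality can be computed as follows:

SECONDS_PER_SLOT = 12   # slot duration quoted above
SLOTS_PER_EPOCH = 32    # slots per epoch quoted above

epoch_seconds = SECONDS_PER_SLOT * SLOTS_PER_EPOCH
print(f"epoch length: {epoch_seconds / 60:.1f} min")                      # 6.4 min
print(f"typical finality (~2 epochs): {2 * epoch_seconds / 60:.1f} min")  # 12.8 min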
Smart Contract Support. Smart contract features are essential for data queries and software applications. Bitcoin offers a limited scripting language employed for programmable monetary transactions and the scalable Lightning overlay network. The advent of general-purpose programming in Ethereum and similar platforms introduces a broader range of capabilities and complexity. Currently, most implementations are written and compiled for the Ethereum Virtual Machine, which is present in Ethereum and Avalanche. In contrast, Cardano and Solana embrace markedly different paradigms. For instance, Cardano supports functional programming, preventing side effects and implementation errors, thus possibly enhancing security and safety properties[<https://docs.cardano.org/plutus/learn-about-plutus/>, 2023-06-30].
§.§ Interoperability Between Blockchains
Interoperability is widely acknowledged for transactions spanning multiple blockchains, established in cross-chain swaps and similar concepts practically implemented in so-called 'bridges'. Furthermore, efforts towards standardizing inhomogeneous data have commenced not only for query languages.
Cross-Chain Swaps. Swaps are typically initiated via a protocol on an originating blockchain, where tokens or arbitrary data are locked to prevent further transfer at the onset. A reciprocal transaction is then issued on a secondary blockchain to the initiator of the cross-chain swap, meaning another party often compensates for the tokens with a different asset on the second chain. This transaction includes a cryptographic proof with a secret that releases the tokens on the initial chain. Finally, the counterparty retrieves tokens from the originating chain. A wide array of protocols and variants have been developed on this foundational principle <cit.>. For atomic cross-chain swaps <cit.>, atomicity is assured for all transfers involved in a cross-chain swap. Practical implementations in bridges, however, may exhibit different properties and assurances, not necessarily providing atomicity or other guarantees for the completion of the exchange. Bridges are primarily utilized for cryptocurrency exchanges; for example, Multichain[<https://multichain.org/>, 2023-06-30], Portal[<https://www.portalbridge.com/>, 2023-06-30], and others[<https://l2beat.com/bridges/tvl#active>, 2023-06-30] facilitate cross-chain swaps between Ethereum, Avalanche, among others. However, cross-chain swaps and bridges lack standardization and do not provide uniform access or queries.
Inhomogeneous Data. Standardization efforts are underway to tackle the issue of inhomogeneous data, with sparse prior work addressing non-uniform access. For Ethereum, one study <cit.> explores a conceptual schema derived from the primary data structures of the blockchain. In <cit.>, a query language is proposed for the content of blocks and transactions. This language design leans on SQL syntax and supports concepts such as projection and selection within Ethereum. For data analysis, a framework and its implementation based on Scala have been suggested <cit.>, employing SQL or NoSQL alongside aggregation functions and similar analysis methods.
Another approach <cit.> details a data warehouse and ETL process for analyzing Ethereum data using standard SQL with a multi-dimensional data model for attribute dimension queries and data aggregation support. Although this and similar studies might connect to multiple blockchains, they fail to provide homogeneous data access, queries, or simultaneous access to data across multiple blockchains.
Additional work based on SQL includes <cit.>, a study that uses multiple blockchains to populate a standard MySQL database with the third-party service Google BigQuery. However, the reliance on third-party services as data sources presents another commonly observed issue in previous research, where the validation of blockchain data is either impossible or severely restricted. Other methods comprise public connectors between blockchains, blockchains integrating with others, and hybrid approaches <cit.>.
Interoperability Limitations in Prior Research. Present solutions face limitations in terms of (L1.) data access not being homogeneous, (L2.) incompatibility of node software functions and APIs not providing standardized queries, (L3.) software not being able to view and access data on one or more blockchains in parallel, and (L4.) missing verifiability of the blockchain data. The current emphasis is placed on cross-chain swaps and isolated data analysis as opposed to data integration. The query language proposed herein seeks to mitigate these restrictions by suggesting an integrated data model for uniform access (L1.), a grammar and concrete syntax for standardized access (L2.), a processing architecture supporting multiple blockchains in individual queries (L3.) as well as operating multiple nodes locally for verifying transactions (L4.).
§ CROSS-CHAIN QUERY LANGUAGE
The following two subsections will detail (A.) the data model and (B.) the grammar with a concrete syntax and a corresponding processing architecture. Query statements are processed as per the architecture delineated in subsection (B.), yielding instances of data model classes using data sourced from the APIs of local blockchain nodes.
§.§ Integrated Data Model
The design of the language is predicated on a data model that integrates the principal data structures and attributes of the OPB discussed in Section <ref>. Building on prior work and existing tools addressed in Section <ref>, classes and attributes of the five OPB have been identified, generalized, and incorporated into a unified data model. Figure <ref> presents the comprehensive data model as a UML class diagram. Table <ref> enumerates the main model classes, categorized into four packages to represent the chain, block, account, and transaction concepts of the OPB. The concrete syntax for formulating queries is introduced in subsection <ref>. Statements are articulated in terms of the classes and attributes, specifying the source data using class and attribute names of the data model.
The concepts of the OPB are shown in the table and data model, encapsulated by the classes of the following packages and classes. Classes of the chain package embody one main network and blockchain for Bitcoin, Ethereum, Cardano, and Solana, as represented by the classes Chain, Network, and ChainDescriptor of the data model. Additional test networks with their distinct blockchains, such as Ropsten and Görli in Ethereum, are represented by the Network and ChainDescriptor classes. In Avalanche, the Network class encompasses one primary network, the first of potentially numerous 'subnets', with separate ChainDescriptor instances for the three P/X/C blockchains.
The Block and BlockDescriptor classes represent blocks, with discrete classes for the block's status, its validation via the consensus protocol, and the involved validators. Conceptually, blocks across all blockchains are identified by a hash value, supplemented with metadata like timestamps and a height value denoting the block number, assuming no changes to non-final blocks. For instance, in Bitcoin, multiple blocks might be discovered as successors to a given block; however, only one block gets included in the chain, while others are dismissed with an 'orphan' status. In contrast, Ethereum handles similar cases by retaining one block in the main chain while preserving other blocks at the same level with an 'ommer' status. Blocks in Proof-of-Work chains are not explicitly finalized, permitting the assignment of 'orphan' or 'ommer' status to blocks found in parallel to preceding blocks of the chain. Nonetheless, the likelihood of existing blocks being superseded in this manner diminishes over time, as multiple consecutive parallel blocks with greater cumulative work are required. Explicit block finalization, forestalling the emergence of multiple successors, can be observed in more recent Proof-of-Stake blockchains such as Solana.
Concerning data structure, blocks are connected to one or more existing blocks via the linkDescriptor attribute of the Block class. This connection can establish either a series of backward-linked blocks or a graph structure, such as a Directed Acyclic Graph (DAG) in the Avalanche C chain. Blocks either house transactions directly or are grouped into time-based slots and epochs for Proof-of-Stake validation purposes. Upon appending a block, each block or slot undergoes validation, necessitating validators' involvement. As per the ValidationDescriptor class, the creator of a Bitcoin or Ethereum block validates a linked block using the hashValue attribute. Conversely, for other Proof-of-Stake blockchains, proposers are recorded in the corresponding attributes with attestations, which refer to the ValidatorDescriptor class. Each instance refers to any number of assigned validators who attest to the block's accuracy through their vote and signature, thus representing the concept of multiple groups of validators performing attestations. When recording the transaction of a DAG, one or more transactions in a block can link to one or more transactions from a preceding block. This link is indicated by the linkedBlockDescriptor attribute in the Block class and the dagSupport attribute in the BlockDescriptor class, which are set to 'true' in this scenario.
Accounts, a concept prevalent in Ethereum, Solana, and Avalanche, are embedded in blocks to store assets, tokens, or data that are used for smart contracts. It's important to note that data might directly represent assets or tokens, as seen in Solana. Each account is defined by an ID, with the concept of an address being common to all blockchains. Account storage of assets or tokens can refer to custom assets, as seen in Cardano, or tokens represented by data in general. For tokens, token standards such as Ethereum's ERC-20 or ERC-1155 are represented by the Token class's attributes. Data storage utilizes binary large objects or key-value stores, which are employed in hash-based mapping data structures.
The concepts of transactions in Bitcoin and Cardano are distinctive due to these blockchains' lack of account structures. Consequently, transactions hold references to unspent transaction outputs (UTXOs) from previous transactions. In this model, a UTXO is included alongside the transferred value and a script that outlines locking conditions or holds data. While data inclusion is implied in Bitcoin, Cardano explicitly accommodates data in transactions and its storage associated with an address for smart contract functionality.
On the other hand, in the case of Ethereum, Solana, and the Avalanche C chain, transactions are stored for the transfer of values, data, assets, or tokens between accounts. In the Avalanche X chain, the transfer of native assets is facilitated through the UTXO model. In the data model, the attributes of Transaction and TransactionDescriptor accommodate transfers between addresses by employing the attributes corresponding to the aforementioned concepts.
§.§ Grammar and Query Processing Architecture
The language syntax is rooted in well-established concepts of data query languages, specifically the Structured Query Language (SQL). SQL and similar languages on the one hand permit formalized representation of queries through relational algebra, and on the other hand, allow queries and their execution to be comprehensible to domain experts without deep knowledge of the underlying concepts.
The syntax of SQL is structured around the 'SELECT-FROM-WHERE' block (SFW block). Based on English-language commands, the 'SELECT' clause conducts a projection in the underlying relational model, semantically equivalent to columns. This is followed by the source of the relations in the 'FROM' clause and the selection of tuples utilizing conditions in the 'WHERE' clause. In the relational model, set operations, and notably the Cartesian product, form the foundation for all queries.
In the context of a cross-chain data language, these concepts are applied in the following manner.
Requirements for Queries. Query statements consist of query (Q), source (S), and filter (F) clauses as follows:
Q Query attributes can be any attributes of the data model classes. Each attribute needs to be specified alongside its class, which establishes one column of the query result for each source. This practice prevents ambiguity for conflicting attribute names and allows users to select data based on the required attributes.
S Sources specify where data is extracted from in terms of blockchain and network classes. This can be paired with additional parameters including specific blocks, transactions, and accounts along with associated assets, tokens, and data.
To specify each source, attribute values of the identifying attributes from the Chain, Network, and ChainDescriptor classes must be given. This forms the base of the data source from where extraction will begin. Further specificity can be achieved by providing additional classes, attributes, and attribute values of identifying attributes from other classes such as Block, Transaction, Account, Asset, Token, or Data. This level of granularity allows for data queries targeted at one or more blockchains.
F Filters optionally refine the results of a query based on conditions. By using filters, specific subsets of data can be removed from the query result based on their attributes and attribute values. A filter is specified by a filter function containing a comparison operation that takes two query attributes as inputs. At run-time, filter functions compare the related attributes and their values. Filter functions are applied sequentially to the previously obtained results. Due to sequential filtering, the query result only contains data meeting all specified filter conditions.
Grammar and Syntax. In the provided EBNF (Extended Backus-Naur Form) syntax in Listing <ref>, the structure of a query is divided into a series of clauses. These clauses are used to define the aforementioned aspects of each query and are further detailed and fully specified in the complete grammar[Available at <https://github.com/fhaer/CCQL/tree/main/grammar>]. Query clauses specify projections on the data returned from source clauses, where each source clause relates to the extraction of data as described in the requirements. Finally, filter clauses enable selection by attributes and attribute values through comparison functions. When specifying multiple values within any clause, multiple result sets are produced. In the case of SourceSpec, this would trigger the collection of data from multiple sources, optionally with a block, transaction, or account, as per the requirements and data model. Accounts with assets, tokens, or data are likewise specified according to the data model. The source and filter clauses are further detailed in the full EBNF grammar specification. For an implementation with a domain-specific language, the concrete syntax might be adapted according to its design guidelines with further usability considerations.
[caption=Excerpt of the grammar in EBNF. Attr: Attribute, Spec: Specification, Val: Value, Desc: Descriptor, I: Instance, Net: Network, Tx: Transaction, Acc: Account.,label=lst:grammar1]
QueryStatement ::=
QueryAttrClause
SourceClause
FilterClause? ";"
QueryAttrClause ::=
'Q ' AttrSpec ( ', ' AttrSpec )*
SourceClause ::=
'S ' SourceSpec ( ', ' SourceSpec )*
FilterClause ::=
'F ' FilterSpec ( ', ' FilterSpec )*
AttrSpec ::=
CCQLClass '.' AttrName
SourceSpec ::=
BlockchainI ':' NetI ':' ChainDescI
(':' ( BlockI | TxI | AccI ) )?
FilterSpec ::=
CCQLClass '.' AttrName ComparisonFunction IValue
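To make the grammar concrete, the following sketch shows a hypothetical query statement conforming to the clauses above, together with a naive decomposition into its Q, S, and F clauses in Python. The instance identifiers ("eth", "main", "1", the block height) and the equality filter are illustrative assumptions rather than literals prescribed by the grammar or the prototype; the real parser is generated from the Xtext grammar rather than by string splitting.

# Hypothetical CCQL statement following the grammar above; identifiers and
# attribute values are illustrative assumptions.
statement = (
    "Q Block.id, Block.height, Block.timestamp "
    "S eth:main:1:14505661 "
    "F Block.timestamp = 1648830000;"
)

# Naive decomposition into query (Q), source (S) and filter (F) clauses,
# for illustration only (requires Python 3.9+ for str.removeprefix).
body = statement.rstrip(";")
q_clause, rest = body.removeprefix("Q ").split(" S ", 1)
s_clause, f_clause = rest.split(" F ", 1)

print("Q:", q_clause)  # Block.id, Block.height, Block.timestamp
print("S:", s_clause)  # eth:main:1:14505661
print("F:", f_clause)  # Block.timestamp = 1648830000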
Query Processing Architecture.
Figure <ref> shows the steps involved in query processing as part of an application architecture. An application initiates the process by issuing query statements to the parser component, where clauses are constructed for further query processing in conjunction with a number of connected local nodes. In the query processing component, the source clause is processed for each specific source, i.e., each SourceSpec, with its network and chain data and their respective attributes, leads to the collection of data from the connected nodes. The results are stored as instances of the data model classes. In the next stage of the process, the query attribute clause is processed. Each data model class instance is read to establish a newly appended column in the result table of the query. For the final process stage, the filter clause is applied with each of the specified filter functions, filtering the existing result table.
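A minimal sketch of these three stages is shown below, using plain dictionaries in place of the data model classes; the function and variable names are illustrative assumptions and do not correspond to the prototype's implementation.

# Sketch of the three processing stages: source collection, projection by
# query attributes, and sequential filtering. Illustrative only.
def process_query(query_attrs, sources, filters, fetch):
    # Stage 1: source clause -- collect one data model instance per source.
    instances = [fetch(source) for source in sources]
    # Stage 2: query attribute clause -- project instances onto the
    # requested attributes, forming the rows of the result table.
    rows = [{attr: inst.get(attr) for attr in query_attrs} for inst in instances]
    # Stage 3: filter clause -- apply filter functions sequentially.
    for condition in filters:
        rows = [row for row in rows if condition(row)]
    return rows

# Toy usage; 'fetch' would normally call the API of a local blockchain node.
toy_block = {"Block.height": 14505661, "Block.timestamp": 1648830000}
result = process_query(
    query_attrs=["Block.height", "Block.timestamp"],
    sources=["eth:main:1:14505661"],
    filters=[lambda row: row["Block.timestamp"] > 0],
    fetch=lambda source: toy_block,
)
print(result)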
§ EVALUATION OF IMPLEMENTATION FEASIBILITY
The aim of this section is to illustrate the feasibility of implementing the proposed query language with a data model and processing architecture. An implementation compatible with OPB previously introduced has been developed for this purpose[Available at <https://github.com/fhaer/CCQL/tree/main>]. It is composed of two main components: (1.) a language grammar that is formalized using the Eclipse Modeling Framework with a concrete syntax specified on the basis of Xtext[<https://www.eclipse.org/Xtext>, 2023-06-30]. In this way, the syntax is used to derive an external Domain-Specific Language (DSL) with corresponding development and editor environments based on Eclipse. Furthermore, the grammar is implementation-independent and can be re-used in future applications. (2.) a prototype command-line application implementing the language and the data model. The application operates according to the proposed architecture and interacts with nodes of the selected OPB to execute queries. It was developed using Python 3.9 and utilizes the web3.py library to access the OPB[<https://web3py.readthedocs.io/en/stable/>, 2023-06-30].
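As a brief illustration of the node access underlying the prototype (a sketch, not code taken from the CCQL repository), a block can be retrieved from a locally running Ethereum execution client with web3.py roughly as follows; the JSON-RPC endpoint is an assumption that depends on the node configuration, and the method names follow recent web3.py releases.

from web3 import Web3

# Connect to a locally running execution client (e.g. Nethermind) over its
# JSON-RPC endpoint; URL and port depend on the node configuration.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

block = w3.eth.get_block(14505661)
# Fields such as number, timestamp and hash map onto Block/BlockDescriptor
# attributes of the integrated data model.
print(block["number"], block["timestamp"], block["hash"].hex())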
§.§ Software and Hardware Configuration
Setting up the application involved the following blockchain nodes with a configuration that fully validates all blocks:
* Bitcoin node: Bitcoin Core, version 25.0[<https://github.com/bitcoin/bitcoin/>, 2023-06-30]. The initial data synchronization completed after 1 day including the indexing of all transactions.
* Ethereum node: Nethermind execution client, version 1.19.0[<https://downloads.nethermind.io/>, 2023-06-30], together with Nimbus consensus client, version 23.5.1[<https://github.com/status-im/nimbus-eth2/releases>, 2023-06-30] with full validation and execution of transactions. Initial data synchronization completed after approximately 4 weeks.
* Cardano node: Cardano node, version 8.1.1[<https://github.com/input-output-hk/cardano-node>, 2023-06-30]. Initial data synchronization completed after approximately 2 days.
* Avalanche node: AvalancheGo, version 1.10.3[<https://github.com/ava-labs/avalanchego/releases>, 2023-06-30]. Initial data synchronization completed after approximately 4 days.
Accounting for typical application scenarios related to businesses or individuals, the data synchronization was carried out on a consumer-grade laptop in all cases. The laptop was equipped with AMD Ryzen 7 5700U CPU, 16 GB RAM, and SK Hynix BC711 NVMe SSD running Ubuntu 22.04. For the synchronization, the laptop was continuously connected to a 1 Gbit/s fiber internet connection.
To establish feasibility, query statements were evaluated using the prototype, which will be elaborated in the subsequent section. Each statement was executed on the laptop and the locally running blockchain node software without the involvement of further web services or APIs, realizing the processing architecture. As the node software was fully validating and storing data locally, it enabled the generation of query results without network access. It follows that query performance is independent of network latency and solely constrained by the local CPU and IO performance of the device at hand.
§.§ Query Processing
The prototype application, realizing the architecture illustrated in Figure <ref> as described in Section <ref>, was used to evaluate typical queries as follows.
The first query example, illustrated in Figure <ref>, shows the task of identifying transactions within a block. Here, the query attributes define the Block and BlockDescriptor (BlockDesc) classes along with the properties of the block ID, Height, Timestamp, and transactions. The following source clause specifies Ethereum, the main network, chain 1, and a block number. The query terminates by applying a filter to the timestamp attribute. The output of the query manifests as attributes prefixed by the source number, each displaying instance-level data from the data model with corresponding values. For instance, Block.id and Block.height are denoted as '0xfb2e[...]' and '14505661', respectively.
Given a specific block, query attributes could be added for continuing the investigation throughout the data model, e.g. identifying accounts in blocks using Block.accounts followed by Asset.balance in order to retrieve their balances. In a cross-chain scenario, a corresponding transaction might be located on another blockchain, e.g. transferring assets or data. By specifying a block and timestamp, transactions occurring with the same or similar timestamp might be queried. In the second query of Figure <ref>, this example can be seen with the aforementioned classes and attributes. Investigating this scenario further based on the Ethereum and Avalanche transactions, transactions occurring at the same timestamp in both blockchains were located and queried in the third query displayed in Figure <ref>. For obtaining asset transfers and data, the source clause specifies Transaction (T) and TransactionDescriptor (TDesc) classes with attributes for value and data, respectively. The source is addressed by hexadecimal transaction IDs on the two blockchains. From the query results, it can be observed that both transactions are data transactions, transferring assets of value 0.0 and data represented in hexadecimal format. For investigating assets, accounts might be queried in addition, for example as demonstrated in Query 4 of Figure <ref>. The transactions with matching timestamps, together with the involved assets, indicate the exchange of tokens in a cross-chain swap scenario.
§.§ Discussion
The prototype provides uniform data access to OPB, such as the retrieval of asset and data transfers across multiple blockchains. As per the defined grammar, data access is standardized, facilitating statements that involve one or more blockchains. For meaningful utilization of blockchain properties, it is essential to operate blockchain nodes locally, which can involve significant time and cost for initial synchronizations.
The architecture of the data model follows a data integration approach, where data conforming to well-known OPB can be stored by populating pertinent classes. In contrast, merely relying on multiple individual data models would fail to address the issue at hand.
However, the prototype in its current form has limited support for advanced concepts specific to individual OPB. For instance, calculating transaction fees involving additional utility tokens falls outside the scope of this model. Functionality-wise, the queries with filters are constrained in the prototype, permitting only the sequential application of filters with equality comparisons to multiple sources.
§ CONCLUSION AND OUTLOOK
This paper presents a cross-chain query language grammar, data model, and processing architecture aimed at facilitating uniform data access across multiple blockchains. The approach enables homogeneous data access, query standardization, addressing multiple blockchains within individual queries, and local validation of blockchain data. These facets were only partially covered in previous research.
The feasibility of implementing the language with its processing architecture has been positively evaluated using a prototype, despite certain functional limitations inherent in the implementation. Using the proposed approach of application-level interoperability, software can leverage multiple blockchains to establish a unified view on data while relying on verifiable transactions that are part of an open and permissionless infrastructure.
In future research, these concepts can serve as a basis for addressing further integration aspects among blockchains, e.g. in terms of augmenting data storage distributed on multiple blockchains, and provide advanced integration methods towards enabling blockchains as decentralized application platforms.
§ ACKNOWLEDGMENT
This work is supported by the Swiss National Science Foundation project Domain-Specific Conceptual Modeling for Distributed Ledger Technologies [196889].
|
http://arxiv.org/abs/2307.03275v1
|
20230706202339
|
To pretrain or not to pretrain? A case study of domain-specific pretraining for semantic segmentation in histopathology
|
[
"Tushar Kataria",
"Beatrice Knudsen",
"Shireen Elhabian"
] |
cs.CV
|
[
"cs.CV"
] |
To pretrain or Not to pretrain?
T. Kataria et al.
^{1} Kahlert School of Computing, University of Utah; ^{2} Scientific Computing and Imaging Institute, University of Utah; ^{3} Department of Pathology, University of Utah
{tushar.kataria,shireen}@sci.utah.edu, [email protected]
To pretrain or not to pretrain? A case study of domain-specific pretraining for semantic segmentation in histopathology
Tushar Kataria^{1,2}, Beatrice Knudsen^{3}, Shireen Elhabian^{1,2}
August 1, 2023
=======================================================================================================================
Annotating medical imaging datasets is costly, so fine-tuning (or transfer learning) is the most effective method for digital pathology vision applications such as disease classification and semantic segmentation. However, due to texture bias in models trained on real-world images, transfer learning for histopathology applications might result in underperforming models, which motivates the use of unlabeled histopathology data and self-supervised methods to discover domain-specific characteristics. Here, we tested the premise that histopathology-specific pretrained models provide better initializations for pathology vision tasks, i.e., gland and cell segmentation. In this study, we compare the performance of gland and cell segmentation tasks with domain-specific and non-domain-specific pretrained weights. Moreover, we investigate the data size at which domain-specific pretraining produces a statistically significant difference in performance. In addition, we investigate whether domain-specific initialization improves the effectiveness of out-of-domain testing on distinct datasets for the same task. The results indicate that the performance gain from domain-specific pretraining depends on both the task and the size of the training dataset. For limited dataset sizes, a significant improvement in gland segmentation performance was observed, whereas models trained on cell segmentation datasets exhibit no improvement.
§ INTRODUCTION
Deep learning models typically require a substantial amount of data to effectively learn generalized latent space representations <cit.>. However, acquiring large medical image datasets is more challenging compared to real-world image datasets for three primary reasons. Firstly, the annotation process for medical images involves domain-specific knowledge from pathologists <cit.> and radiologists to manually outline anatomical structures. This is challenging given the global scarcity of pathology and radiology experts. Secondly, the image annotation interfaces are inefficient, generating labor-intensive workflows. Thirdly, inter-observer disagreement among medical professionals necessitates the involvement of multiple experts to repeat each annotation task <cit.>.
Lastly, in addition to the annotation challenges, there are biases in medical data. Biases in histopathology images arise from variations in tissue quality, staining protocols leading to differences in color and texture <cit.>, and scanning protocols and slide scanners <cit.>.
These biases are often site-specific and can cause major domain shifts between different data sets, which in turn reduces the generalization of deep learning models <cit.>.
Other forms of domain shifts in cancer cohorts include discrepancies between cancer and normal tissue histology, the proportion of histologic cancer subtypes, grades and stages, and variations in clinical, demographic, and race-related variables. These variables generate data imbalances that can degrade the performance of deep learning models during testing.
In medical image vision tasks, fine-tuning pretrained models (also known as transfer learning) has become a common approach <cit.>. These tasks are important for automated diagnosis, cancer grading, and prediction of patient outcomes across all cancer types.
Using supervised or self-supervised methods, deep learning models exhibit strong capabilities to learn effective latent representations <cit.>. However, they may suffer from domain-specific texture bias <cit.>, which can impede their performance <cit.>.
Previous research indicates that if sufficient data is available for training, a model trained de-novo (i.e., from scratch) may outperform a fine-tuned model <cit.>.
This suggests a potential benefit of domain-specific pretraining <cit.> over transfer learning from ImageNet <cit.>.
Because large, annotated data sets are difficult to obtain for pretraining on histopathology images, self-supervised and annotation-free methods (SSL) provide an alternative strategy for pretraining models to learn valid representations in the latent space <cit.>. Models can then be further fine-tuned with a few annotations to produce acceptable results on test datasets. However, no studies have systematically evaluated the impact of domain-specific pretraining for histopathology models that are tasked to learn cell and gland segmentation. The closest matching work to this study is an investigation of pretraining on classification and instance segmentation tasks <cit.>.
Because gland and cell segmentation differ from instance segmentation and classification, the effect of pretraining on the analysis of out-of-distribution (OOD) datasets also remains unknown. The contributions of this paper are as follows:-
* Comparison of de-novo trained models with pretrained models on the ImageNet dataset using class supervision <cit.> and self-supervision <cit.> for semantic segmentation tasks in histopathology.
* Finetuning pretrained domain-specific models <cit.> for gland and cell segmentation. These comparisons will indicate whether domain-specific pretraining aids cell and gland segmentation in out-of-distribution data sets after fine-tuning of models.
* Determining the effect of compute resources and data quantity on model performance improvements.
* Investigating whether domain-specific training leads to a better generalization of models.
§ DIFFERENT PRETRAINING STRATEGIES
To investigate whether domain-specific pretraining leads to generalization in gland and cell segmentation tasks, the study aims to address the following research questions:
- Is domain pretraining, which involves initializing the weights with domain-specific images, more effective for transfer learning compared to pretrained weights from ImageNet?
- Do self-supervised outperform supervised weight initializations?
- Does domain-specific pretraining enhance the quality of features and improve the model's performance on datasets with domain shifts?
All initializations are compared against random initialization (i.e., training from scratch), which serves as the baseline to identify initializations (mentioned below) that outperform random. The flow diagram of the study is shown in Figure <ref>.
Models are trained with 3 different types of initializations: (1) pretrained weights using class supervision on ImageNet data: default weights are provided in Pytorch for ImageNetV1 and ImageNetV2. The top-1 accuracies in the initialization amount to 76.13 and 80.85, respectively. These weights are obtained by training a ResNet50 <cit.> model with class supervision.
For the two other initializations, weights are obtained using a self-supervised technique called Barlow Twins <cit.>. (2) Pretrained weights with ImageNet data using SSL (SSLImage): Self-supervised weights were obtained after training on data from ImageNet without using labels.
(3) Domain-Specific pretraining using SSL (SSLPathology): This model is released as part of the study in <cit.> for domain-specific pretraining on histopathology data. The model was pretrained using more than three million histopathology image patches sampled from various cancers at different magnifications. More details about the pretraining method and the dataset can be found in <cit.>.
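For reference, the supervised ImageNet initializations above correspond to the pretrained ResNet50 weights shipped with torchvision (version 0.13 or later); the SSL initializations would instead be loaded from externally provided checkpoints, whose file name below is a hypothetical placeholder.

import torch
from torchvision.models import resnet50, ResNet50_Weights

# Supervised ImageNet initializations (ImageNetV1 / ImageNetV2 above).
backbone_v1 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
backbone_v2 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)

# Random initialization baseline (training from scratch).
backbone_scratch = resnet50(weights=None)

# SSL initializations (Barlow Twins on ImageNet or histopathology patches)
# would load an external checkpoint; the path is a hypothetical placeholder.
# backbone_ssl = resnet50(weights=None)
# state = torch.load("barlow_twins_pathology.pth", map_location="cpu")
# backbone_ssl.load_state_dict(state, strict=False)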
§.§ Dataset Details
We have experimented with gland and cell segmentation tasks on these five histopathology datasets:
Gland Segmentation Datasets: Colon cancer datasets, GlaS and CRAG <cit.>, possess ground truth gland segmentation annotations for normal and colon cancer glands. The GlaS dataset has 88 training & 80 testing images of size less than 700x600 pixels, whereas the CRAG dataset has 160 training & 40 testing images of size 1512x1512 pixels.
Cell Segmentation Datasets:
Three cell segmentation datasets are used for experimentation: KUMAR <cit.>, CPM17 <cit.>, and TNBC <cit.>, which possess ground truth annotations of nuclear outlines.
§.§ Implementation Details
A U-Net <cit.> model with a ResNet50 <cit.> backbone is used for both semantic segmentation applications (gland and cell). The decoder is always the same for all models. Models are trained using PyTorch and a data split of 80-20 for training and validation. The best model, i.e., the one with minimum loss on the validation data, is further evaluated on the test dataset. Testing data is only used for inference.
During training, the patch size is 256x256, sampled randomly in the whole image. At inference, predictions are averaged over a window size of 128 pixels. The learning rate is fixed to 0.0001 and the number of epochs for all experiments is set to 4000 for gland segmentation and 2000 for cell segmentation. The models are trained five times and average metrics are reported; this ensures that variations due to stochasticity caused by the dataset loader are factored out. Data augmentation includes horizontal and vertical flips, random rotation, and translation. All models are trained on NVIDIA V100 GPUs.
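A minimal sketch of such a model and optimizer setup is shown below; the use of the segmentation_models_pytorch package is an assumption for illustration, as the paper does not state which U-Net implementation was used.

import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet50 encoder; encoder_weights selects the initialization
# (e.g. "imagenet" for supervised pretraining, None for training from scratch).
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,  # binary mask: gland/cell vs. background
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # fixed learning rate as above

x = torch.randn(4, 3, 256, 256)  # a batch of random 256x256 patches
with torch.no_grad():
    mask_logits = model(x)       # shape: (4, 1, 256, 256)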
Evaluation Metrics:
Dice and Jaccard scores (also known as the intersection over union) serve as metrics for segmentation tasks <cit.>.
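A minimal reference implementation of these two metrics for binary masks is given below; this follows the standard definitions and is not the authors' evaluation code.

import numpy as np

def dice_and_jaccard(pred, target, eps=1e-7):
    """Dice and Jaccard (intersection over union) scores for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    jaccard = intersection / (union + eps)
    return dice, jaccard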
§ RESULTS
Gland Segmentation Results: The line plots (variation marked as shading) of performance measures for different initializations are shown in Figure <ref>, <ref>-A [All images are best viewed on a digital device following magnification.]. We trained models with different backbone initializations on an increasing amount of data. The following observations emerged from these experiments:- (a) Increasing the quantity of data improves performance for all initializations and decreases variation. (b) At all levels of target domain training data, models with pretrained weight initializations outperform those with random initializations, but the performance gap between random initialization and pretraining decreases as the quantity of data increases.
(c) For small datasets, domain-specific pretraining has a significant performance advantage over other initializations. However, as the size of the dataset grows, the effect of domain-specific pretraining diminishes.
Variation in performance due to different numbers of training epochs for all datasets is shown in Figure <ref>, <ref>-B. For very small datasets (10% and 30% graph), domain-specific pretraining outperforms all other initializations at all epochs. However, for larger datasets (100% data), ImageNet-supervised weights also outperform at lower epochs. This shows that the benefit of domain-specific pretraining depends on dataset diversity rather than computational power. If a dataset is small or not diverse, then domain-specific pretraining is beneficial, but other initializations can be better for higher diversity and higher epochs. Qualitative results are shown in supplementary Figure <ref>: domain-specific fine-tuned models have more accurate gland outlines and fewer false positive pixels than other models.
Cell Segmentation Results: The performance of various initializations is depicted in Figure <ref>. Even though some of the observations are similar to those of previous experiments, novel observations emerge from cell segmentation results:- (a) Model performances with KUMAR <cit.> data are an exception, where random initialization outperforms or is competitive with other initializations. (b) Domain-specific pretraining performs similarly to or worse than ImageNet initialization in most cases. Altogether, our results demonstrate that domain-specific pretraining does not improve the performance of the U-Net/ResNet model for cell segmentation tasks. Qualitative results are shown in supplementary Figure <ref>.
UMAP Results: We sampled 300 random patches from the test sets of GlaS and CRAG to generate projections for encoders and decoders shown in Figure <ref>. Feature values were extracted from the first encoder layer in U-Net, the deepest encoder layer, and the last decoder layer.
In the network's first layer, the projections of features from various initializations form clouds that overlap. We interpret this observation to conclude that the initial layers of deep neural networks capture low-level statistics and that all initializations capture comparable attributes. As encoding depth increases, the representations become more distinct and the overlap decreases, indicating that networks pretrained in different ways may be learning different representations of the same data. This is counterintuitive, as we would expect that each of the pretrained models generates similar high-level representations when performing identical tasks and using the same dataset. However, the distribution of features in the UMAP projection of latent layer representations appears to have topological similarity across initializations, which indicates that features for different initializations may be related via a rigid transformation in latent space. A similar conclusion is valid for the decoder UMAP. Together, these results suggest that distinct initializations, despite being clustered at different locations in the UMAP, might learn similar relational feature characteristics between samples in the dataset.
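A sketch of how such projections can be produced is shown below; the use of the umap-learn package and the placeholder feature dimensions are assumptions, since the paper does not specify the projection tooling.

import numpy as np
import umap  # umap-learn package

# Placeholder for flattened activations of one layer (e.g. the deepest
# encoder stage) over the 300 sampled test patches.
features = np.random.rand(300, 2048)

embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)
print(embedding.shape)  # (300, 2)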
§.§ Out Of Domain Testing Results
For OOD testing, we apply the trained models out of the box (without further finetuning) to other datasets. This analysis reveals the bias towards the training domain learned with the various initializations.
Gland Segmentation Results: The results of OOD testing for the gland segmentation task are shown in <ref>. At low amounts of data, the domain-specific, finetuned models perform best, and using random initialization results in the greatest relative performance drop compared to all other initializations.
Cell Segmentation Results: The results of OOD testing for different datasets are shown in supplementary Figure <ref> and lead to the following observations: (a) pretrained models are better than models with random initialization at the same task on unseen datasets from KUMAR <cit.> and CPM17 <cit.>. In contrast, models with random initialization trained on TNBC <cit.> outperform or perform the same as the models with pretrained initialization. (b) A drop in performance exists on TNBC data for models trained on KUMAR <cit.> and CPM17 <cit.>, but not for models trained on TNBC <cit.> or KUMAR <cit.> and applied to CPM17. (c) Domain-specific pretrained models, when tested on OOD data, demonstrate a smaller drop in performance compared to other pretraining approaches.
§ CONCLUSION AND FUTURE WORK
In this study, we demonstrate that a domain-specific pretraining backbone can be beneficial for gland and cell segmentation when data are limited or of low diversity for the task at hand. However, the need for domain-specific pretraining decreases for gland and cell segmentation as the amount of training data increases. The results of cell segmentation indicate that domain-specific pretraining may not be advantageous for all types of tasks.
The results of UMAP projections indicate that the initial layers of domain-specific and non-domain-specific models learn similar features, but that the deeper encoders are distinct. Although the different initializations occupy distinct regions of the latent space, the topology of their feature representations is similar, suggesting that the models may be learning similar high-level characteristics within the latent feature spaces. Lastly, during out-of-distribution testing, domain-specific pretraining suffers the same performance degradation as other initializations, i.e. domain-specific pretrained models may not be effective at learning site-independent features.
Our final conclusion from this study is that domain-specific pretraining may be beneficial for specific tasks and datasets, but the benefits are not universal. Domain-specific pretraining suffers from the same issues as pretraining on ImageNet. Lastly, we would like to make the reader aware that this study did not cover medical vision tasks such as multi-class semantic segmentation and cell detection. We also did not utilize models pretrained using vision-language models. Both comparisons are left for future work.
§ SUPPLEMENTARY
|
http://arxiv.org/abs/2307.01184v1
|
20230703175007
|
Finding dense minors using average degree
|
[
"Kevin Hendrey",
"Sergey Norin",
"Raphael Steiner",
"Jérémie Turcotte"
] |
math.CO
|
[
"math.CO",
"05C07, 05C35, 05C83"
] |
The first author is supported by the Institute for Basic Science (IBS-R029-C1). The second and fourth authors are supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Les deuxième et quatrième auteurs sont supportés par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG). The third author is funded by an ETH Zürich Postdoctoral Fellowship.
[2020]05C07, 05C35, 05C83
Discrete Mathematics Group, Institute for Basic Science (IBS), Daejeon, South Korea
[email protected]
https://sites.google.com/view/kevinhendrey/
Department of Mathematics and Statistics, McGill University, Montréal, Canada
[email protected]
www.math.mcgill.ca/snorin/
Institute of Theoretical Computer Science, Department of Computer Science, ETH Zürich, Switzerland
[email protected]
https://sites.google.com/view/raphael-mario-steiner/
Department of Mathematics and Statistics, McGill University, Montréal, Canada
[email protected]
www.jeremieturcotte.com
Motivated by Hadwiger's conjecture, we study the problem of finding the densest possible t-vertex minor in graphs of average degree at least t-1. We show that if G has average degree at least t-1, it contains a minor on t vertices with at least (√(2)-1-o(1))t2 edges. We show that this cannot be improved beyond (3/4+o(1))t2. Finally, for t≤ 6 we exactly determine the number of edges we are guaranteed to find in the densest t-vertex minor in graphs of average degree at least t-1.
Finding dense minors using average degree
Jérémie Turcotte
August 1, 2023
=========================================
§ INTRODUCTION
In this paper all graphs are simple and finite. We say a graph H is a minor of a graph G if a graph isomorphic to H can be obtained from a subgraph of G by contracting edges (and removing any loops and parallel edges). A k-colouring of a graph G is an assignment of colours from {1,…, k} to the vertices of G such that adjacent vertices are assigned distinct colours. The chromatic number χ(G) is the smallest integer k such that G admits a k-colouring.
Hadwiger <cit.> conjectured that if χ(G) ≥ t, then G contains K_t, the complete graph on t vertices, as a minor. Hadwiger's conjecture is one of the most famous open problems in graph theory. The study of Hadwiger's conjecture has spawned a large number of variants, strengthenings and relaxations; see <cit.> for a survey of this area.
For instance, there has been much progress in recent years in attempting to find the smallest function f(t) for which χ(G)≥ f(t) implies that G contains a K_t minor.
The current best bound f(t) = Ω(tloglog t) is due to Delcourt and Postle <cit.>.
Another strategy is to approach Hadwiger's conjecture by relaxing the condition that the minor be complete. Seymour <cit.> asks for which graphs H on t vertices does χ(G)≥ t guarantee H as a minor. Kostochka <cit.> as well as the second and fourth authors of this paper <cit.> have proved that this holds for large classes of bipartite graphs.
The second author and Seymour <cit.> study the related question of maximizing the edge density of t-vertex minors of G if χ(G)≥ t, that is, finding a minor of G on t vertices with as many edges as possible. Unlike in the previous relaxation, we allow the minor to change depending on G. It is shown in <cit.> that if G has n vertices and independence number 2 (and so χ(G)≥⌈n/2⌉), then G contains a minor on ⌈n/2⌉ vertices and at least (0.98688-o(1))⌈n/2⌉2 edges.
Letting both the number of vertices and edges of the minor vary, Nguyen <cit.> showed that there exists C>0 such that if ε∈ (0,1/256) and χ(G)≥ C tloglog(1/ε), then G contains a minor on t vertices with at least (1-ε)t2 edges. Setting ε=1/t^2 recovers the result of Delcourt and Postle mentioned above.
In this paper, we study the following strengthening of the question considered by the second author and Seymour: What is the densest t-vertex minor of G if the average degree is at least t-1 ? This is motivated by the well-known fact that χ(G)≥ t implies that G contains a subgraph G' with minimum degree at least t-1.
It is a direct consequence of a result of Mader <cit.> (see <ref> below) that every graph of average degree at least t-1 contains a minor on t vertices with at least 1/4t2 edges.
Our main result is the following improvement, which we prove in <ref>. Let (G) denote the average degree of a graph G.
thmmainthm
If t∈ and G is a graph with average degree (G)≥ t, then G contains a minor on t vertices with at least (√(2)-1-24/t)t2 edges.
Note that in <ref> we assume for convenience that (G)≥ t and not (G)≥ t-1 as in the problem above. However, this only affects the lower order terms.
In <ref>, we show that the constant in the previous result cannot be improved past 3/4.
thmupperthm
For t∈, there exists a graph G with average degree (G)≥ t such that G does not contain a minor on t vertices with more than (3/4+o(1))t2 edges.
In fact, we will describe a large class of graphs which satisfy the condition of <ref>.
Finally, in <ref>, proved in <ref> we determine for small values of t the exact number of edges we are guaranteed to find in the densest t-vertex minor.
thmsmallthm
If 2≤ t≤ 6 is an integer and G is a graph with average degree (G)≥ t-1, then G contains a minor on t vertices with at least
* 1 edge if t=2,
* 3 edges if t=3,
* 5 edges if t=4,
* 8 edges if t=5, and
* 11 edges if t=6.
Furthermore, none of these values can be improved.
§.§ Notation
Let G be a graph. We denote by V(G) the set of vertices of G and E(G)⊆V(G)2 the set of edges of G. We will write (̌G)=|V(G)| for the number of vertices of G and (G)=|E(G)| for the number of edges of G. If u∈ V(G), we write N_G(u) for the (open) neighbourhood of u, N_G[u]=N_G(u)∪{u} for the closed neighbourhood of u, and _G(u)=|N_G(u)| for the degree of u in G. We denote the minimum degree of G by δ(G)=min_u∈ V(G)_G(u) and the average degree of G by (G)=∑_u∈ V(G)_G(u)/(̌G). If X⊆ V(G), we write G[X] for the subgraph of G induced by X, and G-X=G[V(G)∖ X] for the subgraph obtained by removing the vertices in X; if X={u} we will write G-u for G-X. If e∈ E(G), we write G-e for the graph obtained by removing e, and G/e for the graph obtained from G by contracting the edge e (and removing any resulting loops or duplicate edges); in particular G/e is a minor of G. If Z is a real-valued random variable, we write 𝔼[Z] for the expected value of Z.
§ LOWER BOUND
In this section, we prove <ref>.
The following result of Mader <cit.> will allow us to get a lower bound on the minimum degree in the neighbourhood of each vertex, by taking a minor of our graph. We include a short proof for the sake of completeness, since the paper <cit.> is only available in German.
If G is a graph, then G contains a minor H such that (H)≥(G) and δ(H[N[u]])> (G)/2 for every u∈ V(H).
Let H be a minor of G such that (H)≥(G) which minimizes (̌H). Suppose for a contradiction that H does not satisfy the statement, i.e. there is some u∈ V(H) such that δ(H[N[u]])≤(G)/2.
If _H(u)=0, then it is direct that (H-u)≥(H), which contradicts the minimality of H. Hence, we may suppose that u has at least 1 neighbour.
Given that the degree of u in H[N[u]] is at least as large as the degree in H[N[u]] of every other vertex in N[u], there exists v∈ N(u) such that _H[N[u]](v)≤(G)/2≤(H)/2. In other words, |N[u]∩ N(v)|≤(H)/2. Then,
(H/uv)=2(H/uv)/(̌H/uv)=2((H)-|N[u]∩ N(v)|)/(̌H)-1≥2(H)-(H)/(̌H)-1=(̌H)·(H)-(H)/(̌H)-1=(H)
which is a contradiction to the minimality of H.
The next easy lemma tells us when removing vertices does not decrease average degree.
If G is a graph and X⊊ V(G) is such that exactly M edges of G have at least one end in X and M≤(G)·|X|/2, then (G-X)≥(G).
We may compute directly that
(G-X)=2(G-X)/(̌G-X)=2((G)-M)/(̌G)-|X|≥2(G)-(G)·|X|/(̌G)-|X|=(̌G)·(G)-(G)·|X|/(̌G)-|X|=(G).
The next lemma allows us to extract a dense subgraph on t vertices; this is a standard application of the first moment method.
If t∈ and G is a graph with (̌G)≥ t, then G contains a subgraph on t vertices with at least (G)/(̌G)t2 edges.
The statement is trivial when t=1, so we may assume that t≥ 2. Let Z be a uniformly random subset of V(G) of size t. Given uv∈ E(G), the probability that uv is an edge of G[Z]
is (̌G)-2t-2/(̌G)t=t(t-1)/(̌G)((̌G)-1). As (G)=(̌G)·(G)/2, we have
𝔼[(G[Z])]
=(̌G)·(G)/2·t(t-1)/(̌G)((̌G)-1)
=(G)/(̌G)-1t2≥(G)/(̌G)t2.
Hence, there exists at least one choice of Z such that (G[Z])≥(G)/(̌G)t2. Hence the statement holds for G[Z].
In the next lemma, we find a dense subgraph on t vertices by extending an already dense, but not large enough, set of vertices X to a set of size t by sampling the remaining vertices in another set Y. We will apply this lemma when X is a union of closed neighbourhoods and Y a closed neighbourhood (or vice versa), the conditions on the minimum degrees of the induced subgraphs on these sets will come from <ref>.
If t∈, G is a graph and X,Y⊆ V(G) are such that |X|≤ t, |X∪ Y|≥ t and δ(G[X]),δ(G[Y])≥t/2, then there is a subgraph of G on t vertices with at least (1/2(x+(1-x)^2/y)-1/t)t2 edges, where x=|X|/t and y=|Y|/t.
If |X|=t, the statement follows directly by considering the t-vertex subgraph G[X], which contains at least 1/2·|X|·δ(G[X]) ≥1/2· t·t/2≥1/2t2 edges. Hence, we may suppose that |X|≤ t-1, and as a consequence that |Y∖ X|≥ 1.
Let Y' be a uniformly random subset of Y∖ X of size t-|X| and set Z=X∪ Y', which is possible given that |X∪ Y|≥ t implies t-|X|≤ |Y∖ X|.
The number of edges with both ends in X is at least 1/2·|X|·δ(G[X])≥t|X|/4. The number of edges with both ends in Y∖ X is 1/2∑_v∈ Y∖ X |N(v)∩ (Y∖ X)| and the number of edges between X and Y∖ X is ∑_v∈ Y∖ X |N(v)∩ X|, so the number of edges with both ends in X∪ Y and at least one end in Y∖ X is
1/2∑_v∈ Y∖ X |N(v)∩ (Y∖ X)|+∑_v∈ Y∖ X |N(v)∩ X|
≥1/2∑_v∈ Y∖ X|N(v)∩ (X ∪ Y)|
≥1/2 |Y∖ X|·δ(G[X∪ Y])≥t|Y∖ X|/4.
Suppose uv is an edge of G[X∪ Y] with at least one end in Y∖ X, say u∈ Y∖ X. Then, uv is an edge in G[Z] if and only if {u,v}⊆ Z. If v∈ Y∖ X, then the probability that uv is an edge is the probability that {u,v}⊆ Y', which is |Y∖ X|-2t-|X|-2/|Y∖ X|t-|X|=(t-|X|)(t-|X|-1)/|Y∖ X|(|Y∖ X|-1)≥(t-|X|)(t-|X|-1)/|Y∖ X|^2. If v∈ X, then the probability that uv is an edge is the probability that u∈ Y', which is t-|X|/|Y∖ X|≥(t-|X|)(t-|X|-1)/|Y∖ X|^2. Here we use the fact that t-|X|≤ |Y∖ X|
Hence, we have
𝔼[(G[Z])]
≥t|X|/4+t|Y∖ X|/4·(t-|X|)(t-|X|-1)/|Y∖ X|^2
≥t|X|/4+t/4·(t-|X|)(t-|X|-1)/|Y|
≥(t-1)· tx/4+t-1/4·(t-tx)(t-tx-1)/ty
=1/2(x+(1-x)(1-x-1/t)/y)t2
≥(1/2(x+(1-x)^2/y)-1/t)t2,
where in the last step we used that 1-x ≤ y. Hence, there is at least one choice of Y' such that G[Z]=G[X ∪ Y'] has t vertices and at least (1/2(x+(1-x)^2/y)-1/t)t2 edges, as desired.
The next lemma finds a dense subgraph on t vertices given a dense set X if the vertices outside of X all have sufficiently large degree. Contrary to the previous lemma, here the set X is not extended to a set of size t, instead its properties will allow us to show that G is itself not too large, and so a good candidate to apply <ref>.
Let t,c∈, λ>1+3/t and let G be a graph with (G)≥ t and such that (H)≤ t for all non-null proper subgraphs H of G. If ∅≠ X⊊ V(G) is such that δ(G[X])≥t/2 and _G(u)>λ t-1 for all u∈ V(G)∖ X, then G contains a subgraph on t vertices with at least (λ-1-3/t)t/(λ-3/4)|X|t2 edges.
First note that (G-X)<t≤(G). Let M be the number of edges with at least one end in X. <ref> implies that M>(G)·|X|/2≥t|X|/2. In particular, we then have that the sum of degrees of vertices in X is
∑_v∈ X_G(v)
=∑_v∈ X |N(v)∩ X|+∑_v∈ X |N(v)∖ X|
=(1/2∑_v∈ X |N(v)∩ X|+∑_v∈ X |N(v)∖ X|)+1/2∑_v∈ X |N(v)∩ X|
= M+1/2∑_v∈ X |N(v)∩ X|
>t|X|/2+|X|δ(G[X])/2
≥3t|X|/4.
Furthermore, we have
∑_v∈ V(G)∖ X_G(v)≥ ((̌G)-|X|)(λ t-1).
Let u be any vertex of G. As (G-u)<t,
(G)=2(G)/(̌G)=2((G-u)+_G(u))/(̌G)=(̌G-u)(G-u)+2_G(u)/(̌G)<(G-u)+2≤ t+2.
Hence,
(̌G)(t+2)> (̌G)·(G)=∑_v∈ V(G)_G(v)
≥3t|X|/4+((̌G)-|X|)(λ t-1).
Rearranging yields that
(̌G)< (λ-3/4-1/t)|X|/λ-1-3/t<(λ-3/4)|X|/λ-1-3/t.
Since (G)≥ t, we in particular have that (̌G)≥ t. Finally, to get the desired subgraph we apply <ref> to G to get a subgraph on t vertices with at least
(G)/(̌G)t2≥(λ-1-3/t)t/(λ-3/4)|X|t2
edges.
The next lemma is a core element of our proof, which we summarize here. We want to find a set X of vertices such that G[X] has minimum degree at least t/2 which has order as close as possible to t. We will be taking X to be a union of closed neighbourhoods in our graph (as, by <ref>, we will be able to assume that closed neighbourhoods have this minimum degree); we can construct this set by sequentially adding neighbourhoods of vertices. The closer |X| is to t, the denser a subgraph of G of order t we will be able to find; when |X|>t we can use <ref> to sample a dense subset of size exactly t and when |X|<t we can use either <ref> or <ref> (depending on the degrees of vertices outside of X). In practice, we will introduce some tolerance and attempt to find X such that α t≤ |X|≤β t (this is Case <ref> in the following lemma). The first way of failing is when constructing X, at some point the size is smaller than α t, but the size jumps over β t when adding any other neighbourhood of size smaller than α t; this is Case <ref>, to which we will apply <ref>. The other way of failing is to run out of vertices of small degree, in which case we are in Case <ref>. In this case, the fact that all remaining vertices have large degree will allow us to apply <ref>. The parameters α,β will later be chosen to optimize the trade-offs between these cases.
If t∈, G is a graph such that δ(G[N[u]])≥t/2 for every u∈ V(G), and 1/2≤α<β∈, then either
* there exists X⊆ V(G) such that α t≤ |X|≤β t and δ(G[X])≥t/2,
* there exists X⊆ V(G) such that t/2≤ |X|<α t, δ(G[X])≥t/2, and _G(u)> β t-1 for every u∈ V(G)∖ X, or
* there exist X,Y⊆ V(G) such that |X|,|Y|<α t, |X∪ Y|>β t and δ(G[X]),δ(G[Y])≥t/2.
If there exists u∈ V(G) such that α t-1≤_G(u)≤β t-1, then setting X=N[u] we are in Case <ref>. Hence, we may assume that for every u∈ V(G), _G(u)<α t-1 or _G(u)>β t-1.
Let A=⋃{N[u] : u∈ V(G), (u)<α t-1}. Note that since δ(G[N[u]])≥t/2 for every u ∈ V(G), we also have δ(G[A])≥t/2 and thus |A|>t/2.
If A is such that |A|<α t, then we are in Case <ref> for X=A, since every vertex not in A has degree at least α t-1, and hence greater than β t-1 by the assumption above.
Otherwise, we can find B=⋃_i=1^k N[x_i], for x_1,…,x_k∈ V(G) all of degree smaller than α t-1, such that |B|≥α t. Pick such a set which minimizes k. Again, note that δ(G[B])≥t/2. If |B|≤β t, we are in Case <ref> with X=B. Hence suppose that |B|>β t.
Note that necessarily k≥ 2, as if B=N[x_1], we have |B|=_G(x_1)+1<α t. By minimality of k, |⋃_i=1^k-1 N[x_i]|<α t. Hence, Case <ref> holds for X=⋃_i=1^k-1 N[x_i] and Y=N[x_k].
Finally, we are ready to derive
<ref>, which we restate for convenience, from the above lemmas.
*
The statement is trivial for t≤ 24, so assume t≥ 25. We may suppose that G has no proper minor H such that (H)≥ t; in particular, G contains no proper minor H such that (H)≥(G). By <ref> we have that δ(G[N[u]])≥t/2 for every u∈ V(G).
Let α=4/5, β=ν=6/5 and let γ= √(2)-1-24/t. We wish to prove that G contains a subgraph on t vertices with at least γt2 edges. As 1/2 < α < β we can apply <ref> to G and consider three cases depending on the outcome of <ref> which holds.
First suppose we are in Case <ref>, i.e. there exists X⊆ V(G) such that α t≤ |X|≤β t and δ(G[X])≥t/2. There are two subcases here. The first is t≤ |X|≤β t. <ref> applied to G[X] then ensures that G contains a subgraph on t vertices with at least
(G[X])/(̌G[X])t2≥t/2/β tt2=1/2βt2 = 5/12t2 >γt2
edges.
The other subcase is α t≤ |X|≤ t. In the following, we assume without loss of generality that X is chosen of maximum size subject to satisfying the conditions of Case (1) in Lemma <ref> as well as |X|≤ t. In particular, this implies that for every vertex u ∈ V(G)∖ X, we have |X ∪ N[u]|>t, for otherwise we could replace X with X ∪ N[u], contradicting the maximality assumption.
Let us consider two possibilities. The first is that there exists u∈ V(G)∖ X such that _G(u)≤ν t-1. Let Y=N[u] and write |X|=xt and |Y|=yt. By <ref> there exists a subgraph of G on t vertices with at least
(1/2(x+(1-x)^2/y)-1/t)t2≥(1/2(x+(1-x)^2/ν)-1/t)t2
edges. Hence, the theorem holds in this case as
γ≤min_α≤ x≤ 11/2(x+(1-x)^2/ν)-1/t = 1/2(α+(1-α)^2/ν)-1/t = 5/12-1/t.
Otherwise, _G(u)> ν t-1 for every u∈ V(G)∖ X. By <ref> G contains a subgraph on t vertices with at least
(ν-1-3/t)t/(ν-3/4)|X|t2≥(ν-1-3/t)t/(ν-3/4) tt2>(ν-1/ν-3/4-12/t)t2 = (4/9-12/t)t2 > γt2
edges, where the second inequality holds as ν>1. This finishes the proof in the case that outcome <ref> of <ref> holds.
Now suppose that outcome <ref> of <ref> holds, i.e. there exists X⊆ V(G) such that t/2≤ |X|<α t, δ(G[X])≥t/2, and _G(u)> β t-1 for every u∈ V(G)∖ X. By <ref>, G contains a subgraph on t vertices with at least
(β-1-3/t)t/(β-3/4)|X|t2>(β-1-3/t)t/(β-3/4) tt2>(β-1/(β-3/4)-24/t)t2 = (4/9-24/t)t2 > γt2
edges, where the second inequality uses β>1.
Finally, suppose that outcome <ref> of <ref> holds, i.e. there exist X,Y⊆ V(G) such that |X|,|Y|<α t, |X∪ Y|>β t and δ(G[X]),δ(G[Y])≥t/2. Without loss of generality suppose |X|≥ |Y| and let |X|=xt and |Y|=yt. By <ref>
there exists a subgraph of G on t vertices with at least
(1/2(x+(1-x)^2/y)-1/t)t2≥(1/2(x+(1-x)^2/x)-1/t)t2
edges. Hence, it suffices to show that
1/2(x+(1-x)^2/x) ≥√(2)-1.
This last inequality simplifies to (√(2) x -1)^2 ≥ 0 for x > 0 and hence holds for all such x, as desired.
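For completeness, the algebra behind this last simplification can be written out as follows (a routine verification, not needed elsewhere in the argument):
1/2(x+(1-x)^2/x)-(√(2)-1)=(x^2+(1-x)^2-2(√(2)-1)x)/2x=(2x^2-2√(2)x+1)/2x=(√(2)x-1)^2/2x,
which is indeed nonnegative for every x>0.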
§ UPPER BOUND
In this section, we prove <ref>. We first need the following definitions.
For k∈, k-trees are the graph family defined in a recursive manner as follows:
* The complete graph K_k+1 is a k-tree.
* If G is a k-tree and C ⊆ V(G) is a clique in G with |C|=k, then the graph obtained from G by adding a new vertex with neighbourhood C is also a k-tree.
It follows readily from this definition that for any k-tree G, (G)=k2+k((̌G)-k). Furthermore, every minor of a k-tree with at least k+1 vertices is also a spanning subgraph of a k-tree. Indeed, the treewidth of a graph G with at least k+1 vertices is at most k if and only if G is a spanning subgraph of a k-tree <cit.>, and it is well-known that treewidth is minor-monotone (that is, the treewidth of any minor G' of a graph G is at most the treewidth of G). Sufficiently large k-trees, with appropriately chosen parameter k, will be our first candidates for <ref>.
Let S_r=K_1,r be the star graph with r leaves. We define the graph S_k,r,s as the graph obtained from S_r by replacing every leaf by cliques A_1,…,A_r on s vertices and replacing the central vertex by a clique C on k vertices. In particular, every vertex of C is adjacent to every other vertex in the graph. Such vertices are said to be universal.
These graphs, with appropriately chosen parameters, will also be candidates for <ref>. First note that
(S_k,r,s)=2(k2+rs2+k r s)/k+rs.
Given graphs G and H, we say the collection (B_u)_u∈ V(H) of pairwise disjoint non-empty subsets of V(G) is a model of H in G if G[B_u] is connected for every u∈ V(H) and G contains at least one edge between B_u and B_v if uv∈ E(H). It is easy to see that there exists a model of H in G if and only if H is a minor of G. It is also direct that if |B_u|=1 for every u∈ V(H), then G contains a subgraph isomorphic to H (precisely, on vertex set ⋃_u∈ V(H)B_u).
We now show that with these graphs, we may restrict ourselves to finding a dense subgraph on t vertices, which is simpler than finding minors.
If k,r,s∈ and H is a minor of S_k,r,s, then S_k,r,s has a subgraph isomorphic to H.
Let ℬ=(B_u)_u∈ V(H) be a model of H in G which minimizes ∑_u∈ V(H)|B_u|. If |B_u|=1 for every u∈ V(H), then we are done by the above remark. Hence, assume |B_v|≥ 2 for some v∈ V(H).
If B_v⊆ A_i for some 1≤ i≤ r, let x∈ B_v. If B_v∩ C≠∅, let x∈ C∩ B_v. Given the structure of S_k,r,s and that the subgraph induced by B_v is connected, these are the only two possible cases. Let B_v'={x} and B_u'=B_u for u∈ V(H)∖{v}.
It is easy to verify that in both of these cases, if w∈ V(G)∖ B_v is adjacent to at least one vertex of B_v, then w is adjacent to x. Hence, (B_u')_u∈ V(H) is a model of H in G which contradicts the minimality of ℬ.
We now compute an upper bound on the density of t-vertex subgraphs of S_k,r,s.
If k,r,s,t∈ are such that k+rs≥ t≥ k, then S_k,r,s does not contain a subgraph on t vertices with more than
f(k,s,t)=k2+k(t-k)+⌊t-k/s⌋s2+t-k-⌊t-k/s⌋ s2
edges.
Let X⊆ V(S_k,r,s) such that |X|=t which maximizes the number of edges in G[X] with first priority and then, subject to (G[X]) being maximum, maximizes |X∩ C| with second priority. This is possible given that (̌S_k,r,s)=k+rs≥ t. Our goal is thus to upper bound (G[X]).
We first claim that C⊆ X. Suppose to contrary that there exists c∈ C∖ X. Given that |X|=t≥ k=|C|>|C∖{c}|, this implies there exists x∈ X∖ C. Let X_0=X∖{x} and X'=X_0∪{c}.
Then,
(G[X])
=(G[X_0])+|N(x)∩ X_0|
≤(G[X_0])+|N(c)∩ X_0|
= (G[X']),
where the inequality follows from the fact that c is a universal vertex of G. Since |X' ∩ C|>|X ∩ C|, this contradicts our choice of X. Hence, we have C⊆ X.
The number of edges in G[C] is k2. Given that |X∖ C|=t-k and every vertex in X∖ C is connected to all k vertices in C, the number of edges between C and X∖ C is k(t-k).
For 1≤ i≤ r, let a_i=|X ∩ A_i|. In particular, ∑_i=1^ra_i=t-k and for every i we have 0≤ a_i≤ s. Then (G[X∖ C])=∑_i=1^ra_i2. Suppose there exist distinct 1≤ i,j≤ r such that 0<a_i,a_j<s. Without loss of generality, suppose a_i≥ a_j. Under this assumption, it is easy to verify that a_i2+a_j2< a_i+12+a_j-12, and so by choosing one more vertex in A_i and one fewer vertex in A_j we could obtain a subgraph with more edges, contradicting the maximality of (G[X]). Hence a_i∈{0,s} for all except at most one index i ∈{1,…,r}. It then follows from ∑_i=1^ra_i=t-k that a_i=s for exactly ⌊t-k/s⌋ choices of i, and the possible remaining non-empty set of the form X ∩ A_i contains exactly (t-k)-⌊t-k/s⌋ s vertices. Hence, G[X∖ C] contains exactly ⌊t-k/s⌋s2+t-k-⌊t-k/s⌋ s2 edges. This concludes the proof of the lemma.
We may now prove <ref>, which we restate for convenience.
*
We prove the theorem in 2 ways.
Using k-trees
In order to prove the theorem, consider any k(t)-tree G, where k(t)=(1/2+o(1))t>t/2, and for which (̌G) is sufficiently large (as a function of k(t)) such that
(G)=2k(t)2+k(t)((̌G)-k(t))/(̌G)=2k(t)-(k(t)+1)k(t)/(̌G)> t.
Let G' be any minor of G on t vertices. As noted earlier, G' must be a spanning subgraph of some k(t)-tree. Hence,
(G')≤k(t)2+k(t)(t-k(t))=(1/2+o(1))^2t^2/2+(1/2+o(1))^2t^2=(3/4+o(1))t2,
as desired.
See <ref> for an example of such a graph.
Using S_k,r,s
In order to prove the theorem, we consider the graphs S_k(s(t),t),r,s(t) with k(s(t),t)=⌈t-s(t)/2⌉+1 and some choice of s(t)≥ 1, to be specified later.
Given that lim_r→∞(S_k(s(t),t),r,s(t))=s(t)-1+2k(s(t),t)=s(t)-1+2(⌈t-s(t)/2⌉+1)≥ t+1, we have that (S_k(s(t),t),r,s(t))≥ t for sufficiently large r.
Applying <ref> and <ref>, we only need show that f(k(s(t),t),s(t),t)=(3/4+o(1))t2. We show that this holds for various choices of s(t).
One possible choice for s(t) is s(t)= (1/2i+o(1))t for fixed i∈ (see <ref> for the case with i=1.) Then k(s(t),t)=(1/2-1/4i+o(1))t. We may then compute that
k(s(t),t)2=(1/2-1/4i+o(1))^2t^2/2=(1/4-1/4i+1/16i^2+o(1))t2
and
k(s(t),t)(t-k(s(t),t))=(1/2-1/4i+o(1))(1-(1/2-1/4i+o(1)))t^2=(1/2-1/8i^2+o(1))t2.
Given that
⌊t-k(s(t),t)/s(t)⌋=⌊t-(1/2-1/4i+o(1))t/(1/2i+o(1))t⌋=⌊1/2+1/4i+o(1)/1/2i+o(1)⌋=i
for sufficiently large t, we have
⌊t-k(s(t),t)/s(t)⌋s(t)2=(i+o(1))(1/2i+o(1))^2t^2/2=(1/4i+o(1))t2
and
t-k(s(t),t)-⌊t-k(s(t),t)/s(t)⌋ s(t)2 =(1-(1/2-1/4i+o(1))-(i+o(1)) (1/2i+o(1)))^2t^2/2
=(1/16i^2+o(1))t2.
Thus we obtain for the value of f(k(s(t),t),s(t),t):
((1/4-1/4i+1/16i^2+o(1))+(1/2-1/8i^2+o(1))+(1/4i+o(1))+(1/16i^2+o(1)))t2
=(3/4+o(1))t2,
as desired.
Another possible case is s(t)=o(t) (see <ref> for the case s(t)=1). An analogous computation to above yields the result in this case (this can informally be seen by letting i tend to infinity). In fact, in this case the result also follows from the approach for k-trees discussed above.
§ SMALL GRAPHS
In this section, we prove <ref>. We first need the following definitions.
A (proper) separation of a graph G is a pair (A,B) such that A,B⊆ V(G), A∪ B=V(G), A∖ B,B∖ A≠∅ and there are no edges between vertices in A∖ B and B∖ A. The order of (A,B) is |A∩ B|. We say a graph G is k-connected if (̌G)≥ k+1 and G does not have a separation of order strictly smaller than k. Note that complete graphs are the only graphs to not have any separation.
Given a graph H and k∈, we say a graph G is a (H,k)-cockade if G is isomorphic to H or if G can be obtained from smaller (H,k)-cockades G' and G” by identifying a clique of size k of G' with a clique of size k of G”. A simple inductive argument can be used to show that if G is an (H,k)-cockade then (G)=(̌G)-k/(̌H)-k(H)-(̌G)-(̌H)/(̌H)-kk2.
We first prove the following upper bound on the extremal function for minors in 𝒦_6^-4, the class of graphs on 6 vertices and 11 edges. See, for instance, the introduction of <cit.>, and references therein, for a summary of similar results on the extremal functions of small graphs. By K_5^- we denote the graph obtained from K_5 by removing one edge.
If G is a graph such that (G)≥5/2(̌G)-7/2, then G contains a minor with 6 vertices and 11 edges, unless G is isomorphic to K_1, K_5^- or K_5.
First note that ⌈5/2n-7/2⌉=-1, 2, 4, 7, 9, 12 when, respectively, n=1,2,3,4,5,6. It is then immediate that the only graphs G with (̌G)≤ 5 and at least 5/2(̌G)-7/2 edges are K_1, K_5^- and K_5. If (̌G)=6, then G contains at least 12 edges, and thus the statement also holds in this case. This shows that the theorem holds if G has at most 6 vertices, and therefore we may assume (̌G)≥ 7.
Towards a contradiction, suppose then that the statement is false, and let G be a counterexample that minimizes (̌G) and then, subject to (̌G) being minimum, minimizes (G). The latter condition in particular implies that (G)=⌈5/2(̌G)-7/2⌉<5/2(̌G).
First suppose G is 3-connected. If some e∈ E(G) is in fewer than 2 triangles, then
(G/e)≥(G)-2≥5/2(̌G)-11/2=5/2((̌G/e)+1)-11/2=5/2(̌G/e)-3.
By minimality of G, G/e, contains a minor on 6 vertices and 11 edges (using that (̌G/e)≥ 6 to exclude the small exceptions). As this contradicts our assumptions on G, every edge in G lies on at least two triangles.
Next suppose there exists A⊆ V(G) of size 5 such that (G[A])=8. Let u∈ V(G)∖ A. As G is 3-connected, Menger's theorem <cit.> implies there exist internally vertex-disjoint u-A paths P_1,P_2,P_3 in G (with no internal vertices in A). Then, G[A∪ V(P_1)∪ V(P_2)∪ V(P_3)] contains a minor on 6 vertices and at least 11 edges, which can be obtained by contracting all but one edge in each of P_1,P_2,P_3. Hence such a set A does not exist.
Given that (G)=2(G)/(̌G)<5, there exists u∈ V(G) of degree at most 4 (and at least 3, by 3-connectivity).
Consider the case _G(u)=4. As every edge of G is in at least 2 triangles, δ(G[N_G[u]])≥ 3. In particular, (G[N_G[u]])=1/2(G[N_G[u]])(̌G[N_G[u]])≥15/2, and as this is an integer (G[N_G[u]])≥ 8. However, we have excluded such a choice A=N_G[u] earlier.
Hence, we may suppose that _G(u)=3, say N_G(u)={x,y,z}. Again as every edge of G is in at least 2 triangles, G[N_G[u]] is necessarily isomorphic to K_4. Let v∈ N_G(x)∖ N[u] (such a vertex necessarily exists as otherwise ({u,x,y,z},V(G)∖{u,x}) would form a separation of order 2 in G, contradicting 3-connectivity). As proved above, vx is in at least one triangle. If v is adjacent to y or z, then G[{u,x,y,z,v}] contains at least 8 edges, which we have excluded earlier. Hence, there exists w∈ V(G)∖ N_G[u] which is adjacent to both x and v. Again, by Menger's theorem there exist at least three internally vertex-disjoint paths between {v,w} and {u,y,z}. At most one of these can contain x. Let P_1,P_2 be two of these paths which don't contain x, we may also suppose they do not contain any of {u,x,y,z,v,w} as internal vertices. Then, G[{u,x,y,z,v,w}∪ V(P_1)∪ V(P_2)] contains a minor on 6 vertices and 11 edges, which can be obtained by contracting all but one edge in each of P_1,P_2.
Hence, we may now suppose that G is not 3-connected. As (̌G)≥ 7, we may then suppose G is not a complete graph, so let (A,B) be a separation of G of minimal order. In particular, |A∩ B|≤ 2.
We divide the rest of the proof into cases depending on size of A ∩ B.
First suppose |A∩ B|=0. By minimality of G, if (G[A])≥5/2|A|-2 (in particular, G[A] cannot be isomorphic to K_1, K_5^- or K_5), then G[A] contains a minor on 6 vertices and 11 edges, which is a contradiction. Hence, we may assume that (G[A])< 5/2|A|-2, and similarly that (G[B])< 5/2|A|-2. Then,
(G)=(G[A])+(G[B])<5/2(|A|+|B|)-4=5/2(̌G)-4<5/2(̌G)-7/2,
which is a contradiction to our hypothesis, so this case is not possible.
Now suppose |A∩ B|=1, say A∩ B={x}. Then G is connected, and in particular G[A] and G[B] are connected and |A|,|B|≥ 2. If G[A] is isomorphic to K_5, then G[A∪ y] is a 6-vertex graph with at least 11 edges, where y is a neighbour of x in B. Hence, G[A] is not isomorphic to K_5, and similarly for G[B]. Then, by minimality of G, if (G[A])≥5/2|A|-3 (note that this implies that A cannot isomorphic to K_5^-), G[A] contains a minor on 6 vertices and 11 edges, which is a contradiction. Hence, we may assume that (G[A])< 5/2|A|-3, and similarly that (G[B])< 5/2|B|-3. Then,
(G)=(G[A])+(G[B])<5/2(|A|+|B|)-6=5/2((̌G)+1)-6=5/2(̌G)-7/2,
which contradicts our hypothesis, so this case is not possible.
Finally, suppose |A∩ B|=2; say A∩ B={x,y}. If x,y are in different components of G[A]-xy, at least one of x or y would be a cut vertex, which would contradict our choice of separation. Hence, there exists an x-y path P_1 with at least 2 edges and with internal vertices in A∖ B. Similarly, there exists an x-y path P_2 with at least 2 edges and with internal vertices in B∖ A.
If G[A] is isomorphic to K_5^- or K_5, then G[A∪ V(P_2)] contains a minor on 6 vertices and at least 11 edges, which we can obtain by contracting all but 2 of the edges of P_2. Hence, G[A], and similarly G[B], are not isomorphic to K_5^- or K_5.
By minimality of G, if (G[A])≥5/2|A|-7/2 or (G[B])≥5/2|B|-7/2, then G contains a minor on 6 vertices and 11 edges, which is a contradiction. Given that the number of edges must be an integer, we have that (G[A])≤5/2|A|-4 and (G[B])≤5/2|B|-4.
If xy∈ E(G), then
(G)=(G[A])+(G[B])-1≤5/2(|A|+|B|)-9=5/2((̌G)+2)-9<5/2(̌G)-7/2,
which is a contradiction to our hypothesis.
Hence, xy∉ E(G). Suppose first that (G[A])≤5/2|A|-9/2 and (G[B])≤5/2|B|-9/2. Then,
(G)=(G[A])+(G[B])≤5/2(|A|+|B|)-9=5/2((̌G)+2)-9<5/2(̌G)-7/2,
which would be a contradiction to our assumptions on G. Thus, at least one of the two above inequalities is invalid. Without loss of generality, we may assume that (G[A])> 5/2|A|-9/2 and thus (G[A])≥5/2|A|-4. Now this implies (G[A]+xy)=(G[A])+1≥5/2|A|-3>5/2|A|-7/2. Contracting P_2 into an edge, we obtain that G[A]+xy is a minor of G. Using the minimality of G, we then find that G[A]+xy is isomorphic to K_5^- or K_5. The case G[A]+xy≃ K_5^- is impossible, as it would mean that (G[A])=8<5/2|A|-4. Thus, we have G[A]+xy ≃ K_5. But then by removing superfluous vertices and edges from B and contracting all but two of the edges in P_2, we obtain a minor of G isomorphic to the graph obtained from G[A]≃ K_5^- by adding a new vertex adjacent to x and y. This is a graph on 6 vertices and 11 edges, as desired.
As we have found a contradiction to our initial assumption that G is a smallest counterexample in every possible case, this completes the proof of the theorem.
Although <ref> is sufficiently strong for our purposes, it might be possible to improve it, as we are not aware of any family of graphs G with no 𝒦_6^-4 minor for which (G)≈5/2(̌G). However, (K_5^-,1)-cockades do not contain any 𝒦_6^-4 minor and contain ≈9/4(̌G) edges.
We are now ready to prove <ref>, which we restate for convenience.
*
Upper bounds
We show that the values in the statement cannot be improved. For t=2,3, these values cannot be improved as no graph on t vertices can have more than t2 edges.
For t=4,5, consider the graphs S_2,r,t-3 as defined in <ref>. One easily verifies that for such t and r≥ 4,
(S_2,r,t-3)=2(22+rt-32+2 r (t-3))/2+r(t-3)≥ t-1.
By <ref> and <ref>, S_2,r,t-3 does not contain any minor on t vertices with more than f(2,t-3,t) edges, which we can directly compute to be 5 and 8 for, respectively, t=4 and t=5.
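For the reader's convenience, this arithmetic can be spelled out by direct substitution into the definition of f (the interpretation of the four terms follows the proof of <ref>):
f(2,1,4)=1+2· 2+2· 0+0=5 and f(2,2,5)=1+2· 3+1· 1+0=8,
where the terms count, respectively, the edges inside the clique C, the edges between C and the remaining t-2 selected vertices, the edges inside fully selected cliques A_i, and the edges inside the partially selected clique.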
For t=6, consider any (K_5^-,2)-cockade G with (̌G)≥ 26. First note that (G)=(̌G)-2/3· 9-(̌G)-5/3=8/3(̌G)-13/3≥5/2(̌G), and so (G)≥ 5. However, we claim that G cannot contain any minor on 6 vertices and 12 edges.
It is easy to see that every graph on 6 vertices with at least 12 edges is either 3-connected or contains a K_5 subgraph.
However, as G is constructed in a tree-like fashion by identifying edges between copies of K_5^-, the only 3-connected minors of G are in fact minors of K_5^- (more generally, for k=0,1,2, it is well-known that if G is a k-sum of G_1 and G_2, and H is 3-connected, then H is a minor of G if and only if H is a minor of G_1 or of G_2). Hence, G can contain neither a 3-connected 6-vertex graph nor K_5 as a minor, proving our claim.
Lower bounds
t=2 : Any graph with average degree at least one contains an edge, and thus contains a minor on 2 vertices with one edge.
t=3 : It is well-known that if G is a forest, (G) ≤(̌G)-1, and in particular that (G)=2(G)/(̌G)<2 (if G is non-null). Given that (G)≥ 3-1=2, G is not a forest; it therefore contains a cycle and hence contains K_3 as a minor.
t=4 : It is well-known, and easy to see, that K^-_4-minor-free graphs are the graphs in which every component is a cactus graph, that is, a connected graph in which every block is either an edge or a cycle. It is also well-known (for instance, <cit.>) that if G is a cactus graph, then G has at most ⌊3((̌G)-1)/2⌋ edges (the proof proceeds by induction on the number of blocks). Given that G has average degree at least 3, (G)≥3/2(̌G), and so G contains a K^-_4 minor, i.e. a minor on four vertices with five edges, as claimed.
t=5 : Dirac <cit.> proved that for any graph G such that (G)≥ 2(̌G)-2, either G contains a minor on 5 vertices and 8 edges or G is a (K_4,1)-cockade. Note that in the latter case, (G)=(̌G)-1/4-1· 6-(̌G)-4/4-112=2((̌G)-1)<2(̌G). Hence, if (G)≥ 4, we have that (G)≥ 2(̌G) and so G contains a minor on 5 vertices and 8 edges.
t=6 : Given that G has average degree at least 5, and hence (G)≥5/2(̌G), <ref> implies this result directly (noting that none of the small exceptions in <ref> have average degree at least 5).
§ CONCLUDING REMARKS
We have considered the problem of finding the best possible α such that every graph with average degree at least t contains a minor on t vertices with at least (α -o(1))t2 edges; we have shown that √(2)-1≤α≤3/4. It would be interesting to further improve these bounds.
We note that in our proof of <ref>, we have only used contractions to be able to consider the smallest minor of G such that (G)≥ t and apply <ref>. Once we have obtained that all closed neighbourhoods have minimum degree greater than t/2, we only consider subgraphs. In this setup, we cannot improve our lower bound on α beyond 1/2. Indeed, consider the line graph of the complete graph K_n. It is t:=2(n-2)-regular, has closed neighbourhoods of minimum degree (n-1)>t/2 and it is not hard to verify that this graph contains no subgraph on t vertices with more than (1+o(1))n^2=(1/2+o(1))t2 edges.
§ ACKNOWLEDGMENTS
This research was partially completed at the Second 2022 Barbados Graph Theory Workshop held at the Bellairs Research Institute in December 2022.
|
http://arxiv.org/abs/2307.00381v1
|
20230701164239
|
Effective Matching of Patients to Clinical Trials using Entity Extraction and Neural Re-ranking
|
[
"Wojciech Kusa",
"Óscar E. Mendoza",
"Petr Knoth",
"Gabriella Pasi",
"Allan Hanbury"
] |
cs.IR
|
[
"cs.IR",
"cs.CL"
] |
Effective Matching of Patients to Clinical Trials using Entity Extraction and Neural Re-ranking
    August 1, 2023
================================================================================================
Introduction
Clinical trials (CTs) often fail due to inadequate patient recruitment.
Finding eligible patients involves comparing the patient's information with the CT eligibility criteria.
Automated patient matching offers the promise of improving the process, yet the main difficulties of CT retrieval lie in the semantic complexity of matching unstructured patient descriptions with semi-structured, multi-field CT documents and in capturing the meaning of negation coming from the eligibility criteria.
Objectives
This paper tackles the challenges of CT retrieval by presenting an approach that addresses the patient-to-trials paradigm.
Our approach involves two key components in a pipeline-based model: (i) a data enrichment technique for enhancing both queries and documents during the first retrieval stage, and (ii) a novel re-ranking schema that uses a Transformer network in a setup adapted to this task by leveraging the structure of the CT documents.
Methods
We use named entity recognition and negation detection in both patient description and the eligibility section of CTs.
We further classify patient descriptions and CT eligibility criteria into current, past, and family medical conditions.
This extracted information is used to boost the importance of disease and drug mentions in both query and index for lexical retrieval.
Furthermore, we propose a two-step training schema for the Transformer network used to re-rank the results from the lexical retrieval.
The first step focuses on matching patient information with the descriptive sections of trials, while the second step aims to determine eligibility by matching patient information with the criteria section.
Results
Our findings indicate that the inclusion criteria section of the CT has a great influence on the relevance score in lexical models, and that the enrichment techniques for queries and documents improve the retrieval of relevant trials.
The re-ranking strategy, based on our training schema, consistently enhances CT retrieval and improves precision at retrieving eligible trials by 15%.
Conclusion
The results of our experiments suggest the benefit of making use of extracted entities.
Moreover, our proposed re-ranking schema shows promising effectiveness compared to larger neural models, even with limited training data.
These findings offer valuable insights for improving methods for retrieval of clinical documents.
§ INTRODUCTION
Clinical trials (CTs) are crucial to the progress of medical science, specifically in developing new treatments, drugs, or medical devices <cit.>.
Awareness and access to these studies are still challenging both for patients and physicians, making the recruitment of patients a significant obstacle to the success of trials <cit.>.
Even if a sufficient number of patients is found, the recruitment process requires screening the patients for eligibility, which is a labour-intensive task <cit.>.
Automated identification of eligible participants not only promises great benefits for translational science <cit.> but also aids patients by allowing them to be included in specific trials <cit.>.
In recent years, several initiatives have been proposed to build automatic systems for matching patients to CTs <cit.>.
The task has been defined as an information retrieval problem under the patient-to-trials evaluation paradigm <cit.>.
Here, the query is constituted by patient-related information, either in the form of electronic health records (EHRs) or ad-hoc queries, and the documents are the CTs <cit.>.
This retrieval task involves the semantic complexity of matching the patients' information with heterogenous, multi-fielded CT documents <cit.>.
In addition to this, the eligibility criteria often use complex language structures (e.g. concepts can be negated twice) and medical jargon given in either semi-structured or unstructured ways <cit.>.
To date, the existing approaches have revealed a significant lack of balance between efficiency and effectiveness. While pipeline-based models showcase promising performance, the substantial model sizes required to achieve competitive results raise concerns regarding costly deployment and limitations on reproducibility.
This work presents a system for CT matching that uses data enrichment techniques to support CT search with a probabilistic lexical model as first-stage retriever, and a re-ranking setup based on a moderately sized Transformer network.
On the one hand, we develop a data enrichment process for both queries and documents.
It consists of entity recognition and negation detection modules, applied to both the patient's description and the eligibility section of CTs.
The data enrichment process also provides the classification of both patient's descriptions and CT eligibility criteria into current, past and family-medical conditions.
The extracted information boosts the importance of affirmative and negative mentions of diseases and drugs for both the documents and queries in the traditional retrieval scenario.
On the other hand, we define a training strategy for re-ranking trials using a pre-trained language model in a two-step schema that leverages the structure of CT by considering not only the traditional topical relevance objective but also the eligibility criteria.
Taking the result from our first stage retrieval process, we then match patient's information with descriptive sections of the trials for re-ranking based on topical relevance.
Later, we further train this model by matching patient data with trial eligibility criteria in an attempt to discriminate documents as eligible or excluded.
We evaluate our work on the TREC Clinical Trials track 2021 and 2022 collections, showing that our methods improve finding relevant trials.
More specifically, our contributions are as follows:
* We evaluate the utility of individual sections of CT text on the performance of the lexical retrieval system showing that the inclusion criteria section alone contributes the most to the effectiveness of the search system.
* We introduce a new query and document enrichment model that uses information extraction modules to handle challenges posed by unstructured EHR descriptions and eligibility criteria sections of CTs.
The additional data explicitly highlight sections of patients' medical history and establish a novel way of handling a negation from the eligibility criteria.
Rather than relying on dictionaries to find these entities, we use neural network-based information extraction models.
* We propose and develop an effective re-ranking setup adapted to CT retrieval considering different learning objectives for training.
We evaluate its quality both on the general, pre-trained BERT model, as well as biomedical domain-focused versions.
§ BACKGROUND
This section describes previous work on CT matching with various paradigms, approaches to extract information from clinical data and from patient-related information, and neural re-ranking for CT retrieval.
§.§ Clinical trials matching
The TREC Clinical Trials track concerns the task of matching single patients to clinical trials.
Other tasks concerning CT matching mentioned in the literature are cohort-based retrieval <cit.> and trial-to-trial retrieval <cit.>.
In the context of the TREC CT track, patient-related information is written as free text, whereas the document collection consists of a snapshot of the ClinicalTrials.gov database[<https://clinicaltrials.gov>].
Each clinical trial contains multiple fields, including two titles (brief and official one), condition, summary, detailed description, and eligibility criteria.
The content of these fields can range from structured (e.g. gender and age of eligible patients) through semi-structured (e.g. eligibility criteria section) to unstructured (e.g. description and summary).
The eligibility criteria field contains inclusion and exclusion criteria, a core aspect of the CT matching task.
Trials were judged using a graded relevance scale of three points: 0 if the patient is not relevant to the CT, 1 if the patient is topically relevant but excluded based on the eligibility criteria, and 2 when the patient fulfils the eligibility criteria.
The TREC CTs track differs from previous medical TREC tracks in several aspects.
TREC Precision Medicine 2017–2020 <cit.> is concerned with matching CTs to a patient summary consisting of only the patient's disease, relevant genetic variants, and basic demographic information.
On the other hand, TREC CT topics consist of an unstructured patient summary.
TREC Clinical Decision Support 2014–2016 <cit.> used topics written similarly (free-text patient descriptions), but the task was to match patients to PubMed publications, instead of CT documents.
Moreover, none of the previous tracks used a graded relevance scale focused on eligibility.
Figure <ref> provides an example of a patient's EHR description and of the sections from a relevant CT.
Using a bag-of-words approach to tackle the patient-to-trial matching problem may pose difficulties as both the patient's description and the CTs contain many irrelevant terms, thereby introducing noise.
Moreover, both can contain negated key terms (for instance, the exclusion criteria), the handling of which is essential for deciding eligibility but may not be trivial even when using n-grams or neural network-based models <cit.>.
Additionally, the sections of queries and documents may have different importance because of their time dependency (i.e., past or present conditions) and because they can refer to either patients or patients' family medical history.
One can try to overcome these issues by structuring both the query and documents, and extracting relevant items first.
Previous work attempted to solve a CT matching task using various lexical and neural models.
<cit.> annotated a corpus with terms from medical dictionaries and with negations for retrieving trials for the TREC Precision Medicine track.
A large number of systems reported in the TREC CT used variants of the Okapi BM25 model <cit.> or the Divergence from Randomness (DFR) model <cit.> that has demonstrated potential in the biomedical information retrieval field.
§.§ Information extraction from clinical data
Information extraction from clinical data has been an active area of research in recent years.
Previous work has focused on automatic extraction of eligibility criteria from clinical trial protocols.
For instance, <cit.> presented a method for identifying and segmenting eligibility criteria into five semantic categories, including demographic information, health status, treatment history, laboratory test reports, and lifestyle.
The EliIE system <cit.> was proposed for converting free-text eligibility criteria for clinical research into a structured and formalised format using a 4-step process including entity and attribute recognition, negation detection, relation extraction, normalization of concepts and output structuring.
Other studies aimed to extract information from patients' health records.
The development and uptake of NLP methods for processing free-text clinical notes has been growing in recent years.
A systematic review of the literature by <cit.> showed that there is a significant increase in the use of machine learning methods for NLP in clinical notes related to chronic diseases, and that deep learning is an emerging methodology.
The ConText algorithm aims to determine whether conditions mentioned in clinical reports are negated, hypothetical, historical, or experienced by someone other than the patient <cit.>.
The n2c2/OHNLP 2019 shared task <cit.> focused on extracting family history information from clinical notes.
<cit.> utilised heuristics to detect medical history and negated terms in patients' records.
Despite these efforts, there has been a lack of approaches that integrate information extraction techniques to enhance both query and document representation.
Specifically, there is a lack of methods that effectively combine the extracted terms to determine eligibility ranking.
This presents an opportunity for further exploration in the field.
§.§ Neural approaches for CT
Several approaches using Transformer-based architectures and pre-trained models, such as BERT <cit.>, have achieved state-of-the-art effectiveness in some of the biomedical information processing applications.
In CT retrieval, there have been multiple attempts to use BERT embeddings in both dual-encoder and cross-encoder retrieval setups with different pre-trained models such as BioBERT or ClinicalBERT <cit.>.
These results correspond to implementations of methods applied to traditional ad-hoc retrieval tasks and have not outperformed multiple experiments under traditional retrieval models <cit.>.
On the other hand, <cit.> proposed a multi-stage neural ranking system for the CTs matching problem, relying on T5-based models, currently with state-of-the-art results in multiple retrieval tasks, including CT.
According to the findings presented in TREC CT 2021 <cit.>, T5-based models currently outperform smaller Transformer models in CT retrieval. In this paper, we propose an effective training strategy that takes into account various aspects of clinical trial retrieval, including topical relevance and eligibility criteria, as separate learning objectives. We evaluate its quality both on the general pre-trained BERT model and on biomedical domain-focused versions. Our approach results in a strong competitor to T5-based models with a much simpler architecture, as demonstrated by the official results reported in TREC CT 2022 <cit.>. Specifically, our model performs second-best overall, outperformed only by the model proposed by <cit.> in the best-performing category. These findings suggest that BERT-based models with our proposed training strategy can provide a viable alternative to T5-based models in clinical trial retrieval.
§ METHODOLOGY
This section describes the steps for processing CTs' and patients' descriptions used as input to probabilistic lexical models.
We then define our two-stage neural re-ranking pipeline.
§.§ Clinical trial processing and ranking
We parse the content of a clinical trial document to split it into specific sections.
The eligibility criteria section contains a crucial component of the trial used to distinguish if a patient is eligible for a given trial.
Our CT processing is focused on making the eligibility criteria as fine-grained as possible so we can easily discriminate aspects referring to medical history and drugs.
We start by using a heuristic parser to split the eligibility criteria section of clinical trials into two lists containing inclusion and exclusion criteria, respectively.
We further classify each sentence from inclusion and exclusion as concerning one of the three sections: `current medical condition', `past medical condition' and `family medical history'.
Our motivation is that admission notes (which the topics simulate) consist of several sections that do not have equal impact on the patient's relevance to the trial and may even be irrelevant to their eligibility.
Similarly, clinical trials can have different types of information stored in their eligibility section.
We then use a pre-trained entity extraction model together with an algorithm for determining negation to detect affirmative and negative drug and disease entities in both inclusion and exclusion sections.
In the next step, we remove double negations coming from negated exclusion criteria.
For every entity in the exclusion criteria, we swap its modifier (from affirmative to negative and vice versa).
For instance, the exclusion criterion `Patients who are not smoking' becomes the inclusion criterion `Patients who are smoking'.
This step may not always be correct; nevertheless, we found it to be a good approximation, allowing us to collapse these two sections into one.
The final result is a single list of extracted entities, classified by their section and modifier.
All extracted keywords from a document D_i can be described by the set K_D_i = {A_i^cmc, A_i^pmc, A_i^fmh, N_i^cmc, N_i^pmc, N_i^fmh}, where A stands for a list with affirmative entities, N for negative entities, and cmc, pmc and fmh for current medical conditions, past medical conditions, and family medical history, respectively.
We can then enrich the CT documents representation by expanding them with the extracted keywords.
However, in order to preserve the semantic information about each extracted entity (section and modifier information), we use prefixing with special tokens.
Furthermore, as many of these entities are multi-word expressions, we concatenate the tokens using the underscore character `_' to create a single token.
Specifically, we create new tokens by prepending the prefixes `cmc', `pmc' and `fmh' for each respective section, and additionally `no' when an entity is negated (e.g. N_i^pmc = [`myasthenia gravis',`shortness of breath'] becomes [`pmc_no_myasthenia_gravis',
`pmc_no_shortness_of_breath']).
We append the new tokens to the pre-processed document and use the enriched document to create an index for the lexical retrieval models.
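The following minimal sketch illustrates this enrichment step; the tuple representation of entities and the function names are our own assumptions for illustration (in the actual pipeline the entity lists come from the pretrained NER and negation-detection models).

def swap_exclusion_modifiers(entities):
    """Collapse exclusion criteria into the inclusion list by flipping each
    entity's modifier (affirmative <-> negative), as an approximation."""
    return [(section, not negated, text) for section, negated, text in entities]


def to_tokens(entities):
    """Turn (section, negated, text) triples into single prefixed index tokens,
    e.g. ('pmc', True, 'myasthenia gravis') -> 'pmc_no_myasthenia_gravis'."""
    tokens = []
    for section, negated, text in entities:
        parts = [section] + (["no"] if negated else []) + text.lower().split()
        tokens.append("_".join(parts))
    return tokens


def enrich_document(preprocessed_text, inclusion_entities, exclusion_entities):
    """Append enrichment tokens to the pre-processed trial text before indexing;
    sections are encoded as 'cmc', 'pmc' and 'fmh'."""
    entities = inclusion_entities + swap_exclusion_modifiers(exclusion_entities)
    return preprocessed_text + " " + " ".join(to_tokens(entities))

The same to_tokens step can be reused on the patient side, so that query and document enrichment tokens match exactly during lexical retrieval.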
§.§ Patient description processing
As we are essentially aiming to match the patient to the CT criteria, we follow a similar approach to enrich the input query.
A patient's description is split into the sections of current medical conditions, past medical conditions, and family medical history.
As for the trials, we run an entity and negation detection algorithm for each section.
Extracted keywords for query Q_j can be represented as K_Q_j = {A_j^cmc, A_j^pmc, A_j^fmh, N_j^cmc, N_j^pmc, N_j^fmh}, where each element contains a list of extracted entities.
Finally, after tokenisation, the query for lexical models containing the original patient description is enriched by appending the extracted entities.
§.§ Filtering
Following approaches from previous work <cit.>, we employ filtering on the age and gender to eliminate trials for which patients would be excluded based on the demographic criteria.
We parse the age and gender information from patient descriptions for all patients.
In clinical trials, this corresponds to `minimum_age', `maximum_age' and `gender' fields.
In this step, we remove the trials for which the patient is ineligible based on these two criteria.
Furthermore, we try rule-based parsing to extract information about smoking and alcohol consumption from both patients and clinical trials.
Similarly to the demographic filters, we use this information to filter out trials for which the patient is ineligible.
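A sketch of the demographic filter is given below; the field names and the assumption that ages are already normalised to years are ours (ClinicalTrials.gov expresses age limits in several units), so this is illustrative rather than the exact implementation.

def passes_demographic_filter(patient_age, patient_gender, trial):
    """Return False if the patient is ineligible for `trial` based on age or gender.
    `trial` is assumed to be a dict with 'minimum_age'/'maximum_age' in years
    (None when unbounded) and 'gender' in {'All', 'Male', 'Female'}."""
    min_age = trial.get("minimum_age")
    max_age = trial.get("maximum_age")
    if min_age is not None and patient_age < min_age:
        return False
    if max_age is not None and patient_age > max_age:
        return False
    trial_gender = trial.get("gender", "All")
    if trial_gender != "All" and trial_gender.lower() != patient_gender.lower():
        return False
    return True


# Hypothetical usage: drop ineligible trials from the first-stage candidate list.
# candidates = [t for t in candidates if passes_demographic_filter(45, "Female", t)]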
§.§ Re-ranking
Taking advantage of the structure of the documents and the topic processing discussed in Sections <ref> and <ref>, respectively, we define a training schema with two objectives. Here, inspired by the notion of curriculum learning, an approach aimed at decomposing complex knowledge and designing a curriculum for learning concepts from simple to hard <cit.>, we follow the heuristic that the CT retrieval task can be decomposed into two sub-tasks. First, we set the retrieval objective, which simply relies on discriminating topical relevance (both eligible and excluded documents are relevant). Second, we set the objective of eligibility classification (only eligible documents are relevant).
We use the pre-trained language model BERT <cit.> in the standard cross-encoder neural ranking setup. For fine-tuning, a linear combination layer is stacked atop the Transformer network, whose parameters are tuned with a ranking loss function. We use a pairwise loss function and train the model to re-rank the outputs of the process described in the previous sections.
Thus, the model is trained for these two objectives consecutively, such that there are two instances of the same model that we optimise with the following loss:
ℒ(q,d^+,d^-;ϕ)=max(0,1-s(q,d^+;ϕ)+s(q,d^-;ϕ)),
where d^+ and d^- denote documents that are relevant and non-relevant, respectively, to the topic representation q, ϕ represents the model's parameters together with the final linear layer, and s is the predicted score.
As shown in Figure <ref>, we match patient information with descriptive sections of the trials for re-ranking based on topical relevance (d^+ corresponds to sections of relevant trials).
We consider the eligibility classification a harder task.
Moreover, we hypothesise that for this task, the model could benefit from the knowledge that it already has from the previous training.
We further train this model by matching patients' information with criteria in an attempt to discriminate documents as eligible or excluded (d^+ corresponds to trials categorised as eligible, and d^- corresponds to trials categorised as relevant but discarded).
This process results in two different models. During inference time, we follow a similar schema: we take the BM25 rank and re-rank the top-k ranked trials twice, using the two resulting models, respectively. We refer to this re-ranking procedure as TCRR: Topical and Criteria Re-Ranking.
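The sketch below condenses the pairwise fine-tuning step into a few lines using a Hugging Face cross-encoder; the hyperparameters, sampling, batching and early stopping are omitted, and the helper names are illustrative rather than our exact code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
loss_fn = torch.nn.MarginRankingLoss(margin=1.0)   # max(0, 1 - s(q,d+) + s(q,d-))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def score(queries, docs):
    """Cross-encoder score s(q, d): one scalar logit per (query, document) pair."""
    batch = tokenizer(queries, docs, padding=True, truncation=True, return_tensors="pt")
    return model(**batch).logits.squeeze(-1)

def training_step(queries, pos_docs, neg_docs):
    """One pairwise update pushing s(q, d+) above s(q, d-) by the margin."""
    s_pos, s_neg = score(queries, pos_docs), score(queries, neg_docs)
    loss = loss_fn(s_pos, s_neg, torch.ones_like(s_pos))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Stage 1 (TopicalRR): d+ = descriptive sections of topically relevant trials.
# Stage 2 (CriteriaRR): continue from the stage-1 weights; d+ = eligible trials,
# d- = trials judged relevant but excluded by the eligibility criteria.
```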
§ EXPERIMENT SETUP
This section details the datasets and baselines we have employed as well as the evaluation procedure.
Additionally, we discuss the implementation of the methods described in the paper.
§.§ Dataset
The corpus released by TREC contains 375,580 clinical trials. In 2021, 75 topics (patient notes) were used, and 50 more were created in 2022.
There are 35,832 relevance judgements in 2021 and 35,394 in 2022.
More details of the datasets can be found in Table <ref> of Appendix <ref>.
Clinical trial documents released by TREC are in xml format and consist of several sections.
In our experiments we consider the following sections: brief title, official title, description, summary, conditions and criteria.
For our experiments, we use the sets of topics as they were provided.
For neural re-ranking, we present experiments using the topics from 2021 for training and from 2022 for testing and vice-versa.
Additional splitting for training and development for neural models is described in Section <ref>.
§.§ Evaluation
We follow the evaluation procedure from the TREC Clinical Trials track, which is the standard evaluation procedure for ad-hoc retrieval tasks.
As the relevance assessment is given in a graded relevance scale (eligible, excluded, or not relevant), the performance of the models is measured using normalised discounted cumulative gain (nDCG).
We present results as reported by TREC, using nDCG@5 and nDCG@10, Precision (P@10), and Reciprocal Rank (RR).
We treat unjudged documents as non-relevant, ensuring that our results are not biased towards models that retrieve a large number of unjudged documents.
Furthermore, we focus on Precision as the primary metric for optimising retrieval models.
Our goal is to identify eligible trials, and Precision provides strict feedback to achieve this aim.
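For reference, the sketch below shows how nDCG@k can be computed per topic from the graded judgements, assuming the standard TREC CT grading (2 = eligible, 1 = excluded, 0 = not relevant) is used directly as gain; the official scores are those reported by TREC, not this snippet.

```python
import math

def dcg(gains):
    """Discounted cumulative gain of a list of graded relevance labels."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg_at_k(ranked_doc_ids, qrels, k=10):
    """nDCG@k for one topic; `qrels` maps doc_id -> grade (2 eligible, 1 excluded).

    Unjudged documents are treated as non-relevant (grade 0), as in our evaluation.
    """
    gains = [qrels.get(d, 0) for d in ranked_doc_ids[:k]]
    ideal = sorted(qrels.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if dcg(ideal) > 0 else 0.0
```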
§.§ Baselines
As discussed in Section <ref>, for our custom re-ranking we train two different models: TopicalRR and CriteriaRR.
When used independently, we consider them as baselines:
TopicalRR The model trained for re-ranking based on the topical objective is initialised with the weights of bert-base-uncased[<https://huggingface.co/bert-base-uncased>] (as well as two other domain-specific pre-trained models: BioBERT[<https://huggingface.co/seiya/oubiobert-base-uncased>] and Clinical-BERT[<https://huggingface.co/Tsubasaz/clinical-pubmed-bert-base-512>]).
CriteriaRR The model trained for re-ranking based on the eligibility criteria classification objective is initialised with the weights of the TopicalRR. We further train this model.
Additionally, we consider the following two neural models as baselines:
TraditionalRR A cross-encoder used to compare our proposed training procedure with traditional training; it is trained from the same bert-base-uncased checkpoint.
MonoBERT A comparable model from the TREC CT track. It is based on the cross-encoder architecture and trained on data drawn from the corpus in a weakly supervised fashion <cit.>.
§.§ Implementation details
We use ScispaCy <cit.> and medspaCy <cit.> to implement our entity extraction experiments.
We apply the spaCy NER model trained on the bc5cdr dataset to detect disease and drug mentions.
We have decided to use the ConText algorithm <cit.> to determine whether extracted entities were negative or affirmative.
While more recent algorithms are available for identifying assertions in clinical text <cit.>, we opted for the ConText algorithm due to its established track record and availability inside the ScispaCy library.
Moreover, as our approach is focused on enriching not only 125 queries but also 375,000 clinical trial documents, an additional criterion for selecting the ConText model was its scalability.
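The core of the extraction step can be sketched as follows; the model name assumes the ScispaCy en_ner_bc5cdr_md package is installed, and the negation check is a deliberately crude placeholder standing in for medspaCy's ConText component, whose exact pipeline registration varies across versions.

```python
import spacy

# ScispaCy NER model trained on the BC5CDR corpus (disease and chemical mentions).
nlp = spacy.load("en_ner_bc5cdr_md")

NEGATION_CUES = {"no", "not", "denies", "without", "negative"}

def is_negated(entity, window=5):
    """Crude stand-in for the ConText algorithm: look for a negation cue in the
    few tokens preceding the entity; the real pipeline uses medspaCy's ConText."""
    start = max(entity.start - window, 0)
    preceding = {t.lower_ for t in entity.doc[start:entity.start]}
    return bool(preceding & NEGATION_CUES)

def extract(section_text):
    """Return (affirmative, negative) entity lists for one section of a document."""
    doc = nlp(section_text)
    affirmed = [e.text for e in doc.ents if not is_negated(e)]
    negated = [e.text for e in doc.ents if is_negated(e)]
    return affirmed, negated
```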
Text is lowercased and tokenised with spaCy's model; punctuation and stopwords are removed.
As a main lexical retrieval model, we use the BM25+ <cit.> “out-of-the-box”, i.e. without parameter optimisation, implemented in the Rank-BM25[<https://github.com/dorianbrown/rank_bm25>] Python package.
Furthermore, for the first two experiments, we also test two other lexical models, namely TF-IDF <cit.> and DFR model based on inverse document frequency with Bernoulli after-effect and H2 normalisation (In_expB2) <cit.>, both implemented in the Terrier search engine[<http://terrier.org>].
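To illustrate the first-stage ranking, a minimal sketch of indexing and scoring with the Rank-BM25 package is shown below; the token lists here are toy examples, whereas in the pipeline they are the enriched document and query representations described above.

```python
from rank_bm25 import BM25Plus

# `corpus_tokens` is a list of token lists: pre-processed trial text plus the
# prefixed entity tokens appended during enrichment.
corpus_tokens = [
    ["metastatic", "breast", "cancer", "cmc_breast_cancer"],
    ["type", "2", "diabetes", "cmc_diabetes", "pmc_no_hypertension"],
]
bm25 = BM25Plus(corpus_tokens)          # default parameters, i.e. "out-of-the-box"

query_tokens = ["breast", "cancer", "cmc_breast_cancer"]
scores = bm25.get_scores(query_tokens)  # one score per indexed trial
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```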
On the other hand, we use PyTorch Lightning <cit.> and Transformers[<https://github.com/huggingface/transformers>] to implement the neural re-ranking pipeline. As discussed in Section <ref>, we train different models for re-ranking with different configurations (see Section <ref>).
The TopicalRR, after splitting the datasets into train (60%), development (10%), and test (30%) is trained on 8192 samples from the training set per epoch divided into batches of 16 samples. We further train this model on 1024 samples with batches of 16 to get to the CriteriaRR. Samples for these two models were selected as described in Section <ref> and as shown in Figure <ref>.
We pick positive samples only present in BM25 rankings as well as hard negatives from ranked-irrelevant or unlabeled documents.
We re-rank top-50 trials from the BM25 run[We ran experiments changing the cutoff from 20 to 100 with a step of 10 to find 50 as the optimal cutoff.].
Finally, to compare our proposed training procedure with the traditional training of a cross-encoder, we train the TraditionalRR from the same checkpoint bert-base-uncased, on 2048 samples, where relevant documents are only those categorised as eligible.
All neural models are trained for ten epochs, with early stopping based on Precision. Our training was performed on an Nvidia Quadro RTX 8000 GPU.
§ RESULTS
We begin by assessing the effectiveness of using clinical trial sections.
Subsequently, we examine the influence of extracted entities and filtering techniques.
Then, we conduct neural re-ranking experiments.
§.§ Clinical trials sections
We first evaluate the utility of different sections of CTs.
We extracted inclusion and exclusion sections for 91% of clinical trials.
For the remaining 9% of trials, we assume that both criteria sections are empty.
We create several indexes and retrieval models with different combinations of sections as input features.
The results for the BM25+ model are presented in Table <ref>.
The first eight rows represent results when only one section was used to create an index, whereas the remaining rows present runs conducted on the concatenations of selected sections.
Results for In_expB2 and TF-IDF retrieval models are presented in Table <ref> of Appendix <ref>.
Among single section runs, the usage of the inclusion field alone yields the highest results for Precision@10 and nDCG@5, both for 2021 and 2022 data.
Moreover, for 2021 topics, the inclusion section also achieves the highest nDCG@10 and RR from all single topics, and it is on par with the run, which uses all sections except criteria combined (run 6 versus run 13).
Notably, for 2022, the summary field achieves the highest RR among all single-field runs.
This is true for all three retrieval models.
This may be because the first relevant trial is more generic (i.e. covering broader or more common diseases) and relevant, but not necessarily specific to the patient's conditions.
Figure <ref> of Appendix <ref> shows a topic-by-topic comparison for RR and P@10 for the BM25+ model.
We can observe that there are still some topics for which the model using the inclusion section achieves a higher RR score than the summary field.
Concatenating more sections to create an index improves the average nDCG scores.
However, this does not always hold for the metrics that consider the distinction between eligible and ineligible (P@10 and RR).
The exclusion section achieves the worst results from all single section runs (run 7), even when compared to runs using only the title of a clinical trial.
Moreover, simply adding the text from the exclusion section for the bag-of-words approaches decreases the retrieval performance when compared to using the inclusion section only (run 16 versus 14).
These outcomes motivate our subsequent experiments and document enrichment techniques described in Section <ref>, where we try to structure the knowledge contained in the eligibility section to take advantage of the available data.
The results for In_expB2 and TF-IDF (Table <ref> of Appendix <ref>) models follow a similar trend, with the differences for 2022 data even higher than for the BM25+ model.
This outcome shows that our findings can be generalised to other lexical models.
§.§ Impact of extracted entities
To determine the impact of the extracted entities, we selected the optimal configuration of input sections from the previous step, which used the summary, description, titles, conditions, and inclusion criteria (run 14).
We use these sections as a base document representation and enriched it with different combinations of extracted entities: c – only current medical conditions, cf – current and family medical history, cp – current and past medical conditions, cfp – current, family and past medical conditions.
The results for the BM25+ model are presented in Table <ref>.
Using extracted items from patients positively impacts the final score.
The highest Precision scores are achieved with extracted affirmative and negated entities for the current and family medical history.
The low impact of past medical conditions can be explained by the infrequent occurrence of this data in patient descriptions in the TREC dataset and by the quality of the ConText algorithm.
Extracted entities contribute more positively to the measures where judgements distinguish between eligible and ineligible patients.
The best-performing model (14d) comprises all available extracted data (affirmative and negative entities for current, past and family medical history) to enrich the index.
This tells us that our proposed method can potentially improve the retrieval with complex negated sentences.
However, the relative performance gain is low, and a detailed analysis is needed to understand how it can be further improved.
An example of extracted entities is presented in Table <ref>.
As can be seen, both the entity extraction and section classification models generate false positives and false negatives, which influences the final retrieval result.
Further fine-tuning on domain data could improve the quality.
Results for In_expB2 and TF-IDF retrieval models are presented in Table <ref> of Appendix <ref>.
The In_expB2 model on TREC CT 2021 data is the only one for which our query and document enrichment techniques do not improve results.
We hypothesise that this is because the starting model (run 14) was already very strong compared to the other baselines.
For the TF-IDF model, we can observe that the enrichment with current and past medical entities yields the best results both for 2021 and 2022 data.
Figure <ref> of Appendix <ref> presents a topic-by-topic analysis of the results in terms of the number of relevant trials ranked in top 20 using lexical models.
We can observe an incremental gain both from extracted entities and filtering.
§.§ Effectiveness of filtering
Next, we test several filtering methods as described in Section <ref>.
As a base run, we take our best configuration from the previous experiment: BM25+ run enriching data with current medical conditions and medical history of the patient and family (run 14d).
Results for TREC CT 2021 are presented in Table <ref>.
Our filtering results align with other researchers' results, confirming that utilising age and gender fields can improve the quality of the final matching.
The usage of both filters (run e) removes, on average, 26.3% trials from the top 1000 retrieved documents for all topics of the 2021 collection, improving the P@10 score by 4.9 percentage points over the unfiltered run.
Of these two fields, the age filter has a greater impact and is significantly better than the base run.
On the other hand, smoking and alcohol related-filtering does not help to improve the results further (runs f and g).
We grouped these filters together because our algorithm did not identify any smokers, and only nine drinking patients, in the TREC CT 2021 topics.
Despite only these few mentions, we observe a deterioration of the results.
§.§ Neural re-ranking
Table <ref> shows the results of the re-ranking procedure discussed in Section <ref>. We used the different models to re-rank the results of a BM25 run. We report the evaluations on the 2022 data. Models were trained on the 2021 data. The result of the TCRR model corresponds to the official TREC CT 2022 evaluation <cit.>.
As we hypothesise, in the context of CTs, the model benefits from the decomposition of the retrieval problem into two objectives, as demonstrated by TCRR (see Section <ref>), the model exposed to both learning objectives and the best performing one. We also provide results for TopicalRR and CriteriaRR independently, which are the models exposed only to the first (topical relevance) and second (eligibility classification) learning objectives, respectively. Additionally, we present results for the regular re-ranking setup, TraditionalRR.
For this set of experiments, we are mainly interested in the evaluation in terms of Precision since, in a real scenario, only eligible trials are considered.
Given that on average other proposed systems perform poorly, as shown by the TREC CT median results <cit.>, precision (P@10) anywhere near 50% is regarded as a good result for this task.
We analyse results from the proposed approach and find a significant improvement between the performance of TCRR models (TCRR and TCRR_Bio) and BM25 at a 95% confidence level.
On average, this approach allows Bert-based models to gather more relevant documents than the selected baselines in the top 10.
We report results on different domain-specific pre-trained models that we trained following our proposed approach.
Again, we evaluated the best performing model, TCRR_Bio, in terms of Precision and found the improvement statistically significant.
Figure <ref> presents two plots with an averaged per patient count of relevant and excluded trials depending on a cutoff point for the TREC CT 2022 collection.
Both techniques applied to lexical models, namely extracting drug and disease entities and filtering by age and gender, have a positive impact in finding more eligible trials.
However, only the run with filtering is able to retrieve consistently fewer ineligible trials than the baseline run.
We can also see that, on average, our best non-neural run (14d-AG), retrieves twice as many trials for which a patient is eligible than excluded.
Similarly, the TCRR neural re-ranking further improves the number of relevant trials, but helps in removing ineligible ones only among the first 15 trials.
One possible explanation is that we re-ranked only the top 50 trials retrieved by the first-stage ranker.
§ DISCUSSION
In this work, we revisit the pipeline-based model for patient-to-CT matching.
First, we report an extensive set of experiments for the first stage retrieval and propose an effective enrichment procedure to get the best out of the initial ranks.
Second, we propose an adaptation of training a cross-encoder to the CT problem, taking advantage of the structured nature of the considered documents and the task.
We find that the inclusion criteria section has the most considerable impact on the retrieval score for all three tested lexical models, meaning that these models cannot exploit all the available information.
These outcomes motivate our further work in structuring queries and documents using entity extraction and negation classification methods.
The results show improvements in finding relevant trials when applying data enrichment methods.
We show results for experiments on different configurations of our pipeline and compare our approach with different models previously used for the task.
We focus on BERT-based models, which so far have not necessarily outperformed probabilistic lexical ranking models for the clinical trial matching task. Even though the results in Table <ref> also show how changing the initial weights of the model can affect the overall performance (e.g., by choosing a domain-specific model like BioBERT), we show that the improvements of our proposed approach are not due to the selection of a domain-specific pre-trained model, as is the case for TCRR, which is initialised from bert-base-uncased.
These results also provide an idea of which pre-trained model fits the task best. Overall, the TCRR initialised with BioBERT weights shows promising results, while ClinicalBERT weights were not the best choice in this scenario.
To our knowledge, this study is the first to focus on enriching documents and queries, showing gains in the models' ability to find more eligible trials.
Furthermore, our novel re-ranking concerning eligibility shows additional improvement for this task, comparable to the more expensive approach using the T5 architecture <cit.>.
Our proposed re-ranking formula is different as it explicitly models the eligibility decisions instead of using only the topical relevance.
This distinguishes our study from the previous works concerning clinical trial re-ranking <cit.>.
Although this work focused on CT retrieval, we believe the approach can also be applied to other IR tasks where first, they involve ranking documents based on topics, and, in a second instance, the retrieval results are tailored by considering more specific criteria or constraints. One example of such a task is the selection of primary studies (citation screening) for the systematic literature reviews <cit.>.
There are several limitations of this study, both related to the dataset and the models.
Usage of the TREC CT collection implies that the patient descriptions are relatively short, i.e., EHR admission note-style documents.
We acknowledge that our approaches could have problems handling longer sequences.
Additional limitations are related to the amount of data available for training and evaluating systems on the CT retrieval task. This issue, in our study, explicitly affects the curriculum learning scenario in the eligibility determination objective. It may limit the model in learning relevant patterns needed to scale to different clinical settings or patient populations.
Furthermore, the topics are written only in English.
This does not concern clinical trials, for which the ClinicalTrials.gov database is the leading international source.
Nevertheless, multilingual medical retrieval may present challenges for both lexical and neural models, as the nuances and complexities of medical terminology can vary significantly across languages.
Addressing these limitations and developing strategies for multilingual medical retrieval is an essential area for future research.
§ CONCLUSION
This paper presents an approach for clinical trial retrieval under the patient-to-trial paradigm.
We investigate the impact of individual clinical trial sections showing that the `inclusion' section alone contributes the most to the final retrieval score.
Moreover, we evaluate the handling of complex eligibility criteria for matching patients to clinical trials by combining input from information extraction modules into a lexical retrieval model.
The extracted drug and disease entities and their negations positively impact the retrieval of eligible trials.
On the other hand, filtering based on gender and age proved to be successful in eliminating ineligible trials.
Additionally, we propose an effective training strategy for neural re-ranking of clinical trials based on two distinct learning objectives.
The first objective is the traditional relevance objective, while the second objective focuses on giving importance to the eligibility criteria and involves a classification objective that distinguishes between eligible and discarded samples.
Our results indicate that even with limited data, this model is capable of further improving the Precision of our approach.
Even though our proposed system involves many single components, it showcases an alternative approach to the clinical trial matching problem, emphasising the importance of eligibility criteria.
In future work, we plan to measure the impact of extracted entities on neural re-ranking models.
This work was supported by the EU Horizon 2020 ITN/ETN on Domain Specific Systems for Information Extraction and Retrieval – DoSSIER (H2020-EU.1.3.1., ID: 860721).
§ DATASETS SUMMARY
A summary of datasets is presented in Table <ref>.
§ OTHER LEXICAL MODELS
Table <ref> presents results for the clinical trial documents sections impact on the ranking with In_expB2 and TF-IDF models.
Table <ref> shows results for the query and document enrichment experiment with In_expB2 and TF-IDF models.
§ TOPIC-BY-TOPIC ANALYSIS
Figure <ref> shows topic-by-topic comparison for RR and P@10 for BM25+ using inclusion (run 6) summary (run 4) and summary, description, titles and condition sections concatenated (run 13).
Figure <ref> presents the number of relevant trials at the top 20 retrieved trials for the three best BM25+ runs from each experiment.
entry_id: http://arxiv.org/abs/2307.01840v1
published: 20230704174243
title: Empirical Sample Complexity of Neural Network Mixed State Reconstruction
authors: Haimeng Zhao, Giuseppe Carleo, Filippo Vicentini
primary_category: quant-ph
categories: quant-ph, cs.LG, physics.comp-ph
Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
Center for Quantum Science and Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
Zhili College, Tsinghua University, Beijing 100084, China
Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
Center for Quantum Science and Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
Center for Quantum Science and Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
CPHT, CNRS, École polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France
Collège de France, Université PSL, 11 place Marcelin Berthelot, 75005 Paris, France
Quantum state reconstruction using Neural Quantum States has been proposed as a viable tool to reduce quantum shot complexity in practical applications, and its advantage over competing techniques has been shown in numerical experiments focusing mainly on the noiseless case.
In this work, we numerically investigate the performance of different quantum state reconstruction techniques for mixed states, focusing on the finite-temperature Ising model.
We show how to systematically reduce the quantum resource requirement of the algorithms by applying variance reduction techniques.
Then, we compare the two leading neural quantum state encodings of the state, namely, the Neural Density Operator and the positive operator-valued measurement representation, and illustrate their different performance as the mixedness of the target state varies.
We find that certain encodings are more efficient in different regimes of mixedness and point out the need for designing more efficient encodings in terms of both classical and quantum resources.
Empirical Sample Complexity of Neural Network Mixed State Reconstruction
§ INTRODUCTION
Recent advances in quantum technologies <cit.> have led to diversified applications in areas such as quantum simulation <cit.>, communication <cit.>, cryptography <cit.> and machine learning <cit.>.
However, present-day noisy intermediate-scale quantum (NISQ) devices are inherently noisy <cit.> and limited in size.
Techniques to mitigate noise <cit.>, reduce quantum circuit depth <cit.>, minimize the number of quantum shots, and verify experimentally prepared states <cit.> are crucial to leveraging these devices in realistic applications.
One such technique, quantum state reconstruction (or tomography) <cit.>, uses a limited number of measurements to produce a classical approximation ρ̂ of the quantum state ρ prepared on a device.
The classical reconstruction facilitates the efficient computation of many observables without further quantum circuit evaluations and serves to validate the quantum device <cit.>.
Classical reconstructions, which vary in computational overhead, are all derived from datasets of measurement outcomes collected by repeatedly measuring a set of observables on the state ρ prepared on the device.
The accuracy of the reconstruction can be quantified via several reconstruction errors ϵ, such as the difference in expectation values between the reconstruction and the original state or other distance measures like infidelity or trace distance between ρ̂ and ρ.
An important indicator of the asymptotic performance of such methods is the sample complexity: the size of the quantum-generated dataset needed to obtain a classical reconstruction with a certain error ϵ.
Recent research has established that tomography methods for generic quantum states, such as maximum likelihood estimation <cit.>, necessitate a sample size that grows exponentially with the system size <cit.>.
One way to circumvent this is to design randomized measurement protocols that only estimate expectation values of certain observables (e.g., classical shadow tomography <cit.>).
Alternatively, one can exploit the fact that only a small set of possible states, with low complexity, are actually observed in physical models <cit.>, and design more efficient encodings of those physically realizable states to alleviate the data requirement, similar to variational methods commonly adopted to simulate quantum systems classically.
For example, one can use matrix product states to efficiently encode and reconstruct one-dimensional pure states with area law entanglement <cit.>.
For more general states, generative neural networks (NN) can be used as a variational ansatz trained to reproduce the measurement data <cit.>.
These ansatze, called Neural Quantum States (NQS), encode the quantum state in a compact form, significantly reducing the data requirement for reconstructions. The cost associated with this approach is introducing a non-convex optimization problem, which established techniques in modern artificial intelligence can approximately solve <cit.>.
Designing efficient NQS encodings for general mixed states presents a significant challenge.
Three broad categories exist: (i) an expansion onto a set of pure-states encoded with standard NQS, which is exponentially costly for states with large entropy <cit.>; (ii) a physical (positive-semidefinite) neural-network encoding of the mixed state in the Pauli-Z computational basis known as Neural Density Operator (NDO) <cit.> and (iii) a neural network encoding of the probability distribution over the outcomes of a set of informationally-complete positive operator-valued measurements (POVM-NQS).
This last approach is less costly than (ii), but may result in unphysical density matrices <cit.>.
While the approximation used in the first approach is well-understood, the relationship between the latter two methods remains unclear.
Furthermore, no comparison has been made thus far regarding their effectiveness in Quantum State Reconstruction tasks.
In particular, whether these two methods share the same sample complexity is uncertain.
While the dependence on system size has been extensively studied <cit.>, few comparisons have been made regarding the dependence on reconstruction error ϵ.
This ϵ dependence can be especially significant for NISQ algorithms such as variational quantum eigensolvers (VQEs) <cit.>, as a worse scaling will lead to a large increase in the number of quantum executions to achieve the same accuracy of the result.
For classical shadow tomography, the sample complexity is known to scale as ϵ^-2, which doesn't yield an asymptotic improvement over naive statistical averaging <cit.>.
More recently, numerical evidence suggests that the pure state NQS method holds an approximately quadratic advantage (i.e., ϵ^-1) over classical shadow in energy estimation for certain molecular ground states <cit.>.
However, it is unknown if this advantage persists when the target state is mixed.
In this work, we first improve the reconstruction algorithm by considerably reducing its classical computational overhead using the Control-Variates variance reduction technique (<ref>).
Then, we conduct comprehensive numerical simulations to investigate the sample complexity of mixed-state reconstruction for different NQS encodings.
In particular, we benchmark NDO and POVM-NQS on reconstructing the finite-temperature density matrix of the transverse-field Ising model.
We numerically demonstrate that for NDO, the quadratic advantage in pure state reconstruction can only survive when the state is slightly mixed, and the scaling deteriorates when the state is highly mixed.
On the other hand, POVM-NQS does not hold such an advantage even for pure states and has a similar scaling as classical shadows, independent of how mixed the target state is.
Consequently, NDO performs better than POVM-NQS for nearly pure states, while for highly mixed states, the situation is reversed.
We also propose a phenomenological model that can explain the results.
These results provide valuable guidance to the practical implementation of NQS-based state reconstruction and also point out the need for designing more efficient encodings in terms of quantum resources.
§ NEURAL QUANTUM STATE RECONSTRUCTION
We begin by describing the general framework of NQS reconstruction.
The fundamental concept involves training a (potentially generative) NN that approximates a quantum state in a well-defined basis to reproduce the statistics of the measurement data.
Let's assume that in the experiments, the system state ρ has been measured with N_b different POVM measurements {{P^b_i}_i=1^K}_b=1^N_b, where ∑_i=1^K P^b_i=I.
This includes projective measurements like Pauli string measurements as a special case.
Assume that we have gathered N_d measurement outcomes under each measurement basis, where each outcome is denoted by a number σ^b_j ∈{1, …, K} for j = 1, …, N_d.
The dataset that we aim to reproduce can be represented as:
D = ⋃_b=1^N_b D_b = ⋃_b=1^N_b{σ^b_j}_j=1^N_d.
We use p_b(σ) to denote the probability of obtaining the measurement outcome σ by measuring ρ under basis b, and use q^θ_b(σ) to denote the corresponding probability given by the NQS ρ_θ with variational parameters θ.
Our goal is to minimize the averaged distance between these two probability distributions over all bases.
We quantify this distance by the Kullback–Leibler divergence KL(p‖q)=𝔼_σ∼ p[log(p(σ)/q(σ))].
We, therefore, define the loss function as:
ℒ(θ) = 1/N_b∑_b=1^N_bKL(p_b‖ q^θ_b)
≈ -1/N_b∑_b=1^N_b1/N_d∑_j=1^N_dlog q_b^θ(σ^b_j) + const.,
where we have omitted the constants that do not depend on θ and approximated the expectation values 𝔼 with the sample average over the finite dataset.
We then use gradient-based optimization methods to optimize the parameters θ, in which the t^th iteration reads
θ_t = opt(θ_t-1, ∇_θℒ(θ_t-1)),
where opt refers to the optimization algorithm used (e.g, for gradient descent with learning rate α, opt(θ, g) = θ - α g).
In practice, when the dataset is large, we employ a technique called mini-batching to reduce the computational cost.
This involves estimating the KL-divergence and its gradient not on the whole dataset, but only on a smaller subset of it.
Once the training procedure has converged, the NQS can be used to generate samples, predict properties of interest, or, for sufficiently small systems, retrieve the full density matrix.
We now briefly introduce the two different NQS encodings that we compared during our investigations: NDO and POVM-NQS.
§.§ Neural Density Operator
The NDO encoding is compatible with projective measurements.
We take Pauli string measurements {X, Y, Z}^⊗ N on an N-qubit system as an example.
The corresponding projectors will be denoted with {P^b_i} and the basis rotation matrix with {U_b}.
Measurement outcomes can be denoted by bit-strings of length N (e.g., σ^b_j = (0100) for N=4).
When a properly normalized neural network (NN) is used - for instance, an autoregressive NN - to parameterize the density matrix elements ⟨η|ρ_θ|η'⟩ = NN_θ(η, η'), the variational probability distribution reads
q^θ_b(σ) = ∑_η, η'⟨σ|U_b|η⟩⟨σ|U_b|η'⟩^* NN_θ(η, η').
If the NN parameterization is not normalized, an additional normalization term in the loss function appears, whose gradient can be estimated through Monte Carlo sampling <cit.>.
Note that this method works for generic projective measurement schemes that may not be informationally complete.
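Concretely, for a given density matrix the outcome distribution in a Pauli-string basis is the diagonal of the rotated matrix; the short sketch below illustrates this, using the standard basis-change conventions (H for X, H S† for Y) as an assumption about the rotation U_b.

```python
import numpy as np

# Basis rotations taking the X, Y, Z measurement bases to the computational basis.
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard, measures X
Sdg = np.diag([1.0, -1.0j])                       # S^dagger
BASIS_ROT = {"X": Hd, "Y": Hd @ Sdg, "Z": np.eye(2)}

def measurement_distribution(rho, basis):
    """q_b(sigma) for a Pauli string b such as "XZY": the diagonal of
    U_b rho U_b^dagger, i.e. the outcome probabilities over bit-strings sigma."""
    U = np.array([[1.0]])
    for pauli in basis:
        U = np.kron(U, BASIS_ROT[pauli])
    return np.real(np.diag(U @ rho @ U.conj().T))

# Example: the single-qubit |+x> state measured in the X basis gives p = [1, 0].
plus_x = np.array([[0.5, 0.5], [0.5, 0.5]])
print(measurement_distribution(plus_x, "X"))
```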
§.§ POVM-NQS
The POVM-NQS method exploits the one-to-one correspondence between the state ρ and the outcome statistics of a single (N_b=1) informationally complete POVM measurement {P_i}_i=1^K.
The probability of obtaining an outcome σ∈{1, …, K} is given by p(σ) = tr(ρ P_σ).
Conversely, the density matrix can be obtained by the inverse formula
ρ = ∑_σ, σ'p(σ') T^-1_σσ' P_σ,
where T_σ, σ' = tr(P_σ P_σ') is the overlap matrix <cit.>.
Therefore, reconstructing p(σ) suffices to determine ρ.
As an example, we consider the tensor products of single-qubit Pauli-4 measurements {P_(0), (1), (2) = 1/3|↑_x, y, z⟩⟨↑_x, y, z|, P_(3)=I - P_(0) - P_(1) - P_(2)}^⊗ N for an N qubit system.
Under this measurement scheme, a normalized neural network is used to approximate p(σ): q^θ(σ) = NN_θ(σ).
When the NN is trained, one can use the inverse formula to reconstruct the target state, or directly estimate relevant properties via sampling <cit.>.
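The inverse formula is easy to verify numerically; the sketch below builds the single-qubit Pauli-4 POVM, the overlap matrix T, and recovers an arbitrary single-qubit state exactly. This is only an illustration of the formula, not the NetKet implementation used in our experiments.

```python
import numpy as np

# Single-qubit Pauli-4 POVM: P_0,1,2 = |up_x,y,z><up_x,y,z| / 3, P_3 = I - sum.
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
up_y = np.array([1, 1j]) / np.sqrt(2)
up_z = np.array([1, 0], dtype=complex)
P = [np.outer(v, v.conj()) / 3 for v in (up_x, up_y, up_z)]
P.append(np.eye(2) - sum(P))

# Outcome distribution p(sigma) = tr(rho P_sigma) for some target state rho.
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
p = np.array([np.trace(rho @ Pi).real for Pi in P])

# Overlap matrix T_{s,s'} = tr(P_s P_s') and the inverse formula
# rho = sum_{s,s'} p(s') T^{-1}_{s s'} P_s.
T = np.array([[np.trace(Pi @ Pj).real for Pj in P] for Pi in P])
T_inv = np.linalg.inv(T)
rho_rec = sum(p[s2] * T_inv[s1, s2] * P[s1] for s1 in range(4) for s2 in range(4))
print(np.allclose(rho_rec, rho))   # True: the POVM is informationally complete
```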
§ VARIANCE REDUCTION VIA CONTROL VARIATES
To investigate the asymptotic behavior of the sample complexity, we must train the NN to high precision.
However, we observe that the noise introduced by the mini-batching strategy makes accurate training prohibitively expensive in practice.
This becomes clear when we consider that, by randomly sampling a batch of outcomes B from the dataset at each iteration, the gradient is computed as:
g_B(θ) = -1/|B|∑_σ^b_j∈ B∇_θlog q^θ_b (σ^b_j),
which is an unbiased estimator, with variance
Var[g_B] = 1/|B|Var_{σ^b∼ p_b}[1/N_b∑_b=1^N_b∇_θlog q^θ_b(σ^b)],
which remains finite as q^θ_b approaches the target p_b.
This asymptotically finite variance is in sharp contrast to the zero-variance property of Variational Monte Carlo, which allows for accurate optimization of the ground-state with a relatively small number of samples <cit.>.
The statistical fluctuations in the gradient estimation introduce noise that prevents the reconstruction error from dropping below its standard deviation, which scales proportionally to 1/√(|B|).
Consequently, in situations where we cannot afford training with large batch sizes, we must reduce the variance of the gradient estimator.
The Control Variates (CV) method is a well-established statistical technique for variance reduction <cit.>, which was recently discussed in the context of Variational Monte Carlo <cit.>.
Specifically, when estimating the gradient g_B(θ_t) at step t, we introduce a second random variable (the CV), which represents the gradient evaluated at an earlier step t': g_B(θ_t').
We then adjust the gradient estimator to g_B(θ_t-1) - g_B(θ_t') + 𝔼g_B(θ_t').
This revised estimator is unbiased but can have lower variance because g_B(θ_t) and g_B(θ_t') are correlated.
The expectation can be computed by averaging over the entire dataset.
Consequently, we obtain the following variance-reduced training rule:
θ_t = opt (θ_t-1, g_B(θ_t-1) - g_B(θ_t') + ∇_θℒ(θ_t')).
To further reduce the computational cost, we update the CV only once every T steps, i.e., t' = T ⌊ t/T ⌋.
For all simulations in this paper, we set T=50.
This method is also known as stochastic variance reduced gradient (SVRG) in the machine learning literature <cit.>.
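Schematically, the control-variates training loop looks as follows; grad_minibatch and grad_full are placeholders for the mini-batch and full-dataset gradients of the loss, the dataset is assumed to be an array of encoded measurement outcomes, and plain gradient descent stands in for the optimizer actually used.

```python
import numpy as np

def svrg_train(theta, dataset, grad_minibatch, grad_full,
               n_steps=10_000, batch_size=100, lr=1e-3, T=50):
    """Control-variates (SVRG) loop: every T steps the anchor parameters and the
    full-data gradient are refreshed; each step uses the corrected estimator
    g_B(theta) - g_B(theta_ref) + grad_L(theta_ref)."""
    rng = np.random.default_rng(0)
    theta_ref = theta.copy()
    full_grad_ref = grad_full(theta_ref, dataset)
    for step in range(n_steps):
        if step % T == 0:                     # refresh the control variate
            theta_ref = theta.copy()
            full_grad_ref = grad_full(theta_ref, dataset)
        batch = rng.choice(dataset, size=batch_size, replace=False)
        g = (grad_minibatch(theta, batch)
             - grad_minibatch(theta_ref, batch)
             + full_grad_ref)
        theta = theta - lr * g                # plain gradient descent update
    return theta
```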
To further substantiate our approach, we conduct a systematic numerical analysis.
As a benchmark problem, we consider the NDO reconstruction of the one-dimensional open-boundary transverse field Ising model (TFIM)
H_Ising = -∑_i=1^N-1Z_iZ_i+1 - h∑_i=1^N X_i.
We set h=1 and N=3, small enough to study the batch size's effects systematically.
We randomly generate 10^3 measurement shots for each of the 3^3=27 Pauli basis, train the NQS with and without CV for different values of the batch size B, and repeat every simulation 100 times with different initial conditions and random seeds.
In <ref>, we compare three metrics, the KL divergence averaged over all measurement bases (KL), the error in energy ε = |tr(Hρ_θ)-tr(Hρ)|, and the infidelity I(ρ, ρ_θ) = 1-(tr√(√(ρ_θ)ρ√(ρ_θ)))^2.
Dashed and solid lines correspond to training with and without CV.
It is evident that when NQS is trained without CV, the errors scale like 1/√(B) as expected from <ref>, and then saturates for large batch size at an intrinsic limit set by the dataset.
When the CV method is applied, the adverse effect of mini-batching is eliminated, and the errors are independent of the batch size.
These results validate the effectiveness of our CV method, which we will use in the rest of our analysis.
We note that this CV method applies to generic NQS reconstruction algorithms, including different pure and mixed encodings, and should be used as a default technique when mini-batching introduces noise.
The code is implemented and open-sourced in NetKet <cit.>.
§ RESULTS AND DISCUSSION
§.§ Simulations
To study the performance of NDO and POVM-NQS, we simulate the finite-temperature Gibbs ensemble of the TFIM, which is representative of mixed states where the prepared states might interact with a thermal bath.
We generate measurement datasets of varying sizes and use different NQS ansatze for the reconstruction to understand the asymptotic scaling behavior of the sample complexity.
In this work, we focus on sample sizes in the regime of 10^2 to 10^4, which are currently achievable with modern quantum devices <cit.>.
By plotting the errors of different sample sizes on a log-log scale, we can determine the sample complexity scaling exponents from the slopes of the linear fits.
We use the loss value (defined in <ref>), the infidelity I, and the error in energy ε as metrics for the reconstruction error.
As the density matrix reconstructed through the POVM-NQS method might be negative, the infidelity may not always be a good indicator.
Therefore, we take the absolute value of infidelity and also calculate the average classical infidelity I_cl = 1 - 1/N_b∑_b∑_σ√(p_b(σ)q_b^θ(σ)), which is commonly used as a performance indicator in the literature on POVM-NQS.
We consider the 3-qubit one-dimensional open-boundary TFIM at h=1, and use thermal states ρ_β = exp(-β H_Ising)/[exp(-β H_Ising)] across a wide range of inverse-temperatures β∈[10^-1,10^1] as the target states.
The numerical details can be found in <ref>.
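For completeness, a small NumPy sketch of the exact target state is shown below: the open-boundary TFIM Hamiltonian for N = 3 and its Gibbs density matrix. This only constructs the ground truth against which reconstructions are compared; it is not part of the reconstruction algorithm itself.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def tfim_hamiltonian(N=3, h=1.0):
    """Open-boundary transverse-field Ising Hamiltonian
    H = -sum_i Z_i Z_{i+1} - h * sum_i X_i."""
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H -= kron_chain([Z if j in (i, i + 1) else I2 for j in range(N)])
    for i in range(N):
        H -= h * kron_chain([X if j == i else I2 for j in range(N)])
    return H

def gibbs_state(H, beta):
    """Thermal target state rho_beta = exp(-beta H) / tr(exp(-beta H))."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

rho_target = gibbs_state(tfim_hamiltonian(), beta=1.0)
```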
In Fig. <ref>, we show three inverse-temperatures β = 10, 1, 0.1 representing low, medium, and high-temperature regimes, respectively.
We plot the sample complexity scaling behaviors using solid lines, with the linear fits shown as dashed lines.
The slopes and r-squared values of the linear fits are reported in the corresponding legends.
In Fig. <ref>, we summarize the scaling exponents for different inverse temperatures.
§.§ Scaling Behavior
As shown in <ref>, we note that the scaling exponents for KL and classical infidelity for both classes of NQS ansatze are approximately -1, regardless of the mixedness of the target states.
This is because we are learning the classical probability distributions of the measurement outcomes, which is known to have sample complexity Θ(ϵ^-1), for a classical error ϵ quantified by the KL or classical infidelity <cit.>.
Regarding energy, POVM-NQS and NDO show qualitatively different behaviors.
The scaling exponent for POVM-NQS remains around -2 irrespective of the mixedness of the target state, suggesting that the method doesn't improve the asymptotic quantum shot complexity over naive statistical averaging or classical shadows.
On the other hand, NDO exhibits a scaling exponent of approximately -1 for slightly mixed states, although this exponent gradually deteriorates to -2 for highly mixed states.
This suggests that NDO has an advantage over POVM-NQS when the target state is only slightly mixed, but the advantage disappears when the mixedness increases significantly.
As NDO ansatze can naturally represent pure states, this observation is consistent with the behavior of pure-state reconstruction, which was also recently demonstrated numerically to have an exponent of -1 (see Ref. <cit.>).
We further note that NDO tends to be more accurate than POVM-NQS in terms of reconstruction error for the same sample size. Still, the classical optimization process often converges with more difficulty, leading to higher classical overhead.
This might be related to variance problems arising from zeroes in the density matrix, similar to what was recently found for variational dynamics in Ref. <cit.>.
In <ref>, we conduct the same study for a molecular ground-state (LiH) subject to depolarization and find a consistent picture as here.
§.§ Theoretical Analysis
The previous section discussed the asymptotic quantum shot complexity for both KL and classical infidelity.
Now, our focus shifts to examining the error scaling on the energy and quantum infidelity, intending to provide a theoretical explanation for the observed behavior.
To this end, we consider a simple phenomenological model of errors occurring in NQS reconstructions.
After the training procedure, we assume that the NQS does not perfectly encode the target state, but has a small error denoted as δ.
We then derive asymptotic expressions to identify how this error affects various error metrics.
The findings are summarized in <ref>.
For POVM-NQS, which directly encodes the probability distribution of POVM outcomes, we make the assumption that
q^θ(σ)=p(σ) + δΔ(σ)+o(δ^2),
where ∑_σΔ(σ)=0.
This implies that the total variation distance TV, defined as ∑_σ |p(σ) - q^θ(σ)|/2 is of order δ.
Building upon the theory of classical distributions learning, we understand that the sample complexity scales as TV^-2∼δ^-2 <cit.>.
Since the energy can be expressed as an expectation over the POVM distribution ∑_σp(σ)H_σ, with H_σ = ∑_σ'tr(P_σ'H)T^-1_σ'σ <cit.>, the error in energy is also of order δ.
As a result, the energy error exhibits an exponent of -2.
For NDO, which directly encodes the density matrix elements, we consider:
ρ_θ = ρ + δΔ + o(δ^2),
where tr(Δ)=0.
The trace distance TD, defined as ‖ρ_θ-ρ‖_1/2, will therefore be of order δ.
According to the theory of quantum state learning, the sample complexity scales as TD^-2∼δ^-2 <cit.>.
In <ref>, we prove that the energy error is of order δ^2 when ρ is a pure state, and of order δ otherwise.
This distinction arises from the cancellation of terms of order δ when the state is pure, due to the wavefunction being an eigenstate of the Hamiltonian.
However, such cancellation doesn't occur for mixed states or generic observables, leading to the observed scaling behavior: the energy has an exponent of -1 for pure states, deteriorating to -2 as mixedness increases.
Furthermore, we show that KL is of order δ^2 and establish a general proposition that the trace distance can be upper bounded by the square root of the KL.
This finding is consistent with the exponent of -1 for KL.
The behavior of quantum infidelity presents a more complex scenario.
According to the theory of quantum state tomography <cit.>, infidelity should exhibit an exponent of -1, which agrees with the NDO simulations when mixedness is small or large.
However, in the regime of intermediate mixedness, we observe a degradation of the infidelity exponent, forming a so-called valley.
We explain this phenomenon in <ref> due to the misalignment between KL and infidelity, and we can reproduce a qualitatively consistent valley with random reconstruction error Δ in numerical simulations.
The remaining quantitative discrepancies might arise from complicated interactions between the NN structure, training heuristics, and properties of the target states.
For POVM-NQS, we observe that a simple switch from I(ρ, ρ_θ) to I(ρ_θ, ρ) significantly alters the behavior, indicating the presence of several negative eigenvalues in ρ_θ.
This suggests that the observed behavior is primarily caused by the unphysical nature of POVM-NQS reconstruction and is presented here for completeness.
§ CONCLUSIONS
In this paper, we systematically study the sample complexity of NQS mixed-state reconstruction and compare different NQS encodings, including NDO and POVM-NQS.
To achieve accurate reconstruction, we introduce a strategy to systematically suppress the noise introduced by mini-batching based on Control-Variates.
We provide theoretical arguments and numerical proof that this strategy leads to significantly better accuracy of reconstruction algorithms and has no trade-offs.
Even though we only discuss the case of mixed-state reconstruction, it can also be applied to any scheme based on NQS.
We also open-sourced a high-quality implementation in the quantum state reconstruction (QSR) driver of NetKet <cit.>.
We then present extensive numerical simulations for the finite-temperature TFIM, which is a prototypical example of realistic scenarios that experimentalists would encounter in quantum simulation experiments on NISQ devices.
We find that NDO offers a quadratic advantage over POVM-NQS and classical shadows in the asymptotic sample complexity when the state is pure or almost pure.
This advantage deteriorates and eventually vanishes when the target state becomes more mixed.
On the other hand, POVM-NQS treats states of various mixedness on an equal footing and does not have such an advantage at all, regardless of the state's mixedness.
Therefore, NDO is a more efficient tool for state reconstruction for slightly mixed states.
Our results establish asymptotic sample complexity as an important performance indicator for designing NQS architectures and showcase the advantages of enforcing physical constraints at the level of the NN architecture.
Finally, this manuscript also provides a first comparison of the performance of the NDO and POVM-NQS encodings for mixed-states, which has otherwise not been investigated and might be of interest for developing variational methods to simulate finite-temperature and/or open quantum systems.
We thank J. Carrasquilla for insightful discussions.
We acknowledge the Tsinghua Astrophysics High-Performance Computing platform and SCITAS (EPFL) for providing computational and data storage resources.
§ NUMERICAL DETAILS
Here we list the numerical details of our studies.
The NDO used in the simulations of an N-qubit system is a restricted Boltzmann machine with one layer of N hidden neurons and N ancillas <cit.>.
The POVM-NQS is an autoregressive dense NN with 2 layers of 10 neurons <cit.>.
All training are conducted via the Adam optimizer <cit.> with learning rate 10^-3, batch size 100, maximal iteration number 10^5 and CV update frequency T=50.
The training is terminated when the loss value stops decreasing for 2000 iterations.
All code is implemented with NetKet <cit.> and JAX <cit.>.
We note that the numerical study of the NDO scaling for mixed states is computationally more challenging than the case of pure states, such as Ref. <cit.>.
This is because for pure states, measurements of all the Pauli strings in the Hamiltonian suffice to determine the state, while for mixed states they don't.
Intuitively, when the pure state ansatz is trained to reproduce the probability distributions of all Hamiltonian terms, it gives an energy approximating the true energy, which is the minimal energy.
Then by the variational principle, the state also approximates the ground state.
In contrast, for mixed states, one has to measure an informationally-complete set of bases (e.g., all the Pauli bases) to minimize the reconstruction error.
This exponentially growing basis-set size makes numerical simulations for larger systems very challenging.
Nevertheless, our theoretical analysis is independent of the system size and agrees with numerical simulations on small systems.
Also, a relevant open question would be to investigate what is the effect of a truncated basis set on the reconstruction accuracy, which is relevant for experimental implementation.
§ THEORETICAL DETAILS
For NDO, we assume ρ_θ = ρ + δΔ + o(δ^2), and we will omit o(δ^2) in derivations in this appendix.
The error in energy reads |tr(ρ_θ H)-tr(ρ H)| = δ |tr(Δ H)|+o(δ^2), which is in general of order δ.
However, if the state is pure, i.e., ρ = |ψ⟩⟨ψ|, we assume ρ_θ = (|ψ⟩+δ|Δ⟩)(⟨ψ|+δ⟨Δ|), where ⟨Δ|ψ⟩=0.
Then Δ = |Δ⟩⟨ψ| + |ψ⟩⟨Δ|, and tr(Δ H) = 2Re⟨Δ|H|ψ⟩=0, because |ψ⟩ is an eigenstate of H.
Thus the error in energy is of order δ^2 for pure states.
Moreover, for KL, we have
q^θ_b(σ) = tr((ρ+δΔ)P^b_σ) = p_b(σ) + δ tr(Δ P^b_σ).
Thus KL(p_b‖q^θ_b) = -∑_σ p_b(σ)log(1+δ tr(Δ P^b_σ)/p_b(σ)) = -δ tr(Δ∑_σ P^b_σ)=o(δ^2), where we have used ∑_σ P^b_σ=I and tr(Δ)=0.
Therefore KL=∑_b=1^N_bKL(p_b‖q^θ_b)/N_b=o(δ^2).
In fact, apart from the phenomenological model, we can also derive the quadratic relation between KL and trace distance via the following inequality, which is model-independent.
(Trace distance bounded by KL over all Pauli basis).
For two n-qubit states ρ and ρ', let TD=‖ρ-ρ'‖_1/2 be the trace distance and use σ_i∈{I, X, Y, Z} to denote Pauli operators.
Define the KL over all Pauli bases as KL=∑_i_1, …, i_n=0^3KL(p_i_1, …, i_n‖ p'_i_1, …, i_n), where p and p' denote the Bernoulli probability distributions given by the corresponding Pauli measurements and states.
Then we have
TD≤2^n/√(2)√(KL).
To show this, note that all Pauli string operators form a complete basis of the space of Hermitian matrices.
Therefore, we can decompose ρ as
ρ = ∑_i_1, ⋯, i_n=0^3 C_i_1, ⋯, i_nσ_i_1⊗⋯⊗σ_i_n.
The coefficients
C_i_1, ⋯, i_n = 1/2^ntr(ρσ_i_1⊗⋯⊗σ_i_n)=1/2^n𝔼_σ∼ p_i_1, ⋯, i_n[σ],
where p_i_1, ⋯, i_n is the Bernoulli distribution given by ρ measured in the basis σ_i_1⊗⋯⊗σ_i_n.
We use the same notations with primes to denote the corresponding quantities of ρ'.
Then the trace distance can be bounded by the differences in coefficients as
TD ≤1/2∑_i_1, ⋯, i_n=0^3 |C_i_1, ⋯, i_n-C'_i_1, ⋯, i_n| ‖σ_i_1⊗⋯⊗σ_i_n‖_1
=1/2∑_i_1, ⋯, i_n=0^3 |C_i_1, ⋯, i_n-C'_i_1, ⋯, i_n|· 2^n.
On the other hand, the differences in coefficients can be bounded as
|(C_i_1, ⋯, i_n-C'_i_1, ⋯, i_n)|
= 1/2^n|𝔼_σ∼ p_i_1, ⋯, i_n[σ] - 𝔼_σ∼ p'_i_1, ⋯, i_n[σ] |
≤1/2^n∑_σ∈{± 1} |p_i_1, ⋯, i_n(σ) - p'_i_1, ⋯, i_n(σ)|
≤2/2^n√(1/2KL(p_i_1, ⋯, i_n‖ p'_i_1, ⋯, i_n)),
where the last inequality follows from Pinsker's inequality <cit.>.
Therefore, we arrive at
TD(ρ, ρ') ≤∑_i_1, ⋯, i_n=0^3 √(1/2KL(p_i_1, ⋯, i_n‖ p'_i_1, ⋯, i_n))
≤2^n/√(2)√(KL),
where we have used the mean inequality.
This proposition can also serve as a guarantee for training quantum density estimators in the context of quantum federated learning <cit.>.
§ THE VALLEY PHENOMENON
Here we aim to provide an explanation of the valley phenomenon observed in NDO: the exponent of infidelity is -1 for β→ 0, ∞, while decreasing to -2 for intermediate β.
We start from our phenomenological model and try to find a relationship between KL and infidelity.
Since the behavior of KL is well understood, such a relationship would give us insights into how infidelity behaves.
We consider the error matrix Δ to be drawn randomly as Δ = AA^†/(AA^†), where each entry of A follows the complex standard Gaussian distribution.
Then we calculate the perturbed state ρ_θ = ρ + δΔ, and normalize it again by dividing its trace.
For a given β and the corresponding target state ρ, we randomly generate 100 such perturbed states for different choices of δ∈ [10^-3.5, 10^-2.5], and calculate the KL and infidelity I against the target state.
We find that the resulting (KL, I) pairs fall on a straight line in log-log scale, with r-squared greater than 0.99.
The slope α=α(β) depends on β, and gives an effective power law relationship between KL and I: I ∝KL^α.
This means that the way we quantify the reconstruction error impacts the scaling exponents we observe.
In particular, such misalignment between KL and infidelity leads to a β-dependent difference of α(β) in exponents.
Now we assume that KL has an exponent of -1, which is theoretically and numerically validated.
Hence the sample complexity is proportional to KL^-1∝ I^-1/α(β).
In <ref>, we plot the simulated exponents -1/α(β) in blue, with the standard deviation indicated by the shaded region.
We find a valley pattern that is qualitatively consistent with what we observe in NDO simulations, which confirms our theoretical explanation.
The rest quantitative differences might be a complicated result of the NN design, generalization, training heuristics, and the property of the target states.
§ DEPOLARIZED MOLECULAR GROUND-STATES
In this appendix, we study the sample complexity scaling behavior for LiH ground-state subjected to depolarization noise.
This scenario emerges in digital simulation or VQEs, where the quantum gates are imperfect and introduce noise that can be modeled as depolarization.
We apply the parity transformation to transform the Fermionic Hamiltonian into a 4-qubit Hamiltonian that can be implemented on quantum computers (details of the transformations are found in the appendix of <cit.>).
We take its ground state |ψ⟩ and simulate the depolarized states ρ_p = (1-p)|ψ⟩⟨ψ| + pI/2^4 over p∈ [0, 1] as the target states.
In <ref>, we choose three depolarization intensity p = 0, 0.01, 0.1 that represents low, medium, and high depolarization regimes, and plot the corresponding results.
We observe that the behavior of POVM-NQS on LiH approximately matches what was seen on the TFIM, while the quantum shot complexity of the NDO is generally lower than that of TFIM by about 0.5.
This might arise from the specific NN architecture and training heuristics used here.
Intuitively, depolarization noise has less structure that can be exploited by NNs than thermal states, leading to a slightly worse scaling in the simulations.
Nevertheless, an advantage of NDO over POVM-NQS can still be observed at small mixedness, while disappearing when decoherence leads to very mixed states, showing a consistent picture as TFIM.
entry_id: http://arxiv.org/abs/2307.03026v1
published: 20230706144303
title: Exploratory mean-variance portfolio selection with Choquet regularizers
authors: Junyi Guo, Xia Han, Hao Wang
primary_category: math.OC
categories: math.OC, math.PR
Junyi Guo [a], Xia Han [a], Hao Wang [b] (corresponding author)
E-mail addresses: [email protected] (J. Guo); [email protected] (X. Han); [email protected] (H. Wang)
[a] School of Mathematical Sciences and LPMC, Nankai University, Tianjin, 300071, China
[b] School of Mathematical Sciences, Nankai University, Tianjin, 300071, China
Exploratory mean–variance portfolio selection with Choquet regularizers
In this paper, we study a continuous-time exploratory mean-variance (EMV) problem under the framework of reinforcement learning (RL), with Choquet regularizers used to measure the level of exploration. By applying the classical Bellman principle of optimality, the Hamilton–Jacobi–Bellman equation of the EMV problem is derived and solved explicitly via statically maximizing a mean–variance constrained Choquet regularizer. In particular, the optimal distributions form a location–scale family, whose shape depends on the choice of the Choquet regularizer. We further reformulate the continuous-time Choquet-regularized EMV problem using a variant of the Choquet regularizer. Several examples are given under specific Choquet regularizers that generate broadly used exploratory samplers such as exponential, uniform and Gaussian. Finally, we design an RL algorithm to simulate and compare results under the two different forms of regularizers.
Keywords: Choquet regularization, mean-variance problem, reinforcement learning, stochastic control
§ INTRODUCTION
Reinforcement learning (RL) is an active subarea of machine learning. In RL, the agent can directly interact with the black box environment and get feedback. This kind of learning that focuses on the interaction process between the agent and the environment is called trial-and-error learning. By trial and error learning, we skip the parameter estimation of the model and directly learn the optimal policy <cit.>, which can overcome some difficulties that traditional optimization theory may have in practice. Many RL algorithms are based on traditional deterministic optimization, and the optimal solution is usually a deterministic policy. But in some situations, it makes sense to solve for an optimal stochastic policy for exploration purposes. The stochastic policy is to change the determined action into a probability distribution through randomization. Searching for the optimal stochastic policy has many advantages, such as robustness <cit.> and better convergence <cit.> when the system dynamics are uncertain.
Entropy measures the randomness of the actions an agent takes, and thus can indicate the level of exploration in RL. The idea of maximum entropy RL is to make the strategy more random in addition to maximizing the cumulative reward, so entropy together with a temperature parameter is added to the objective function as a regularization term; see e.g., <cit.>. Here, the temperature parameter is a regularization coefficient used to control the importance of entropy; the larger the parameter, the stronger the exploratory ability, which helps to accelerate the subsequent policy learning and reduces the possibility of the policy converging to a local optimum. <cit.> generalized maximum entropy RL to continuous state and continuous action settings rather than tabular settings. <cit.> first established a continuous-time RL framework with continuous state and action from the perspective of stochastic control and proved that the optimal exploration strategy for the linear–quadratic (LQ) control problem in the infinite time horizon is Gaussian. Further, <cit.> applied this RL framework for the first time to solve the continuous-time mean-variance (MV) problem, and we refer to <cit.> for more summaries.
Motivated by <cit.>, <cit.> extended the exploratory stochastic control framework to an incomplete
market, where the asset return correlates with a stochastic market state, and learned an equilibrium policy under a mean-variance criterion. <cit.> studied
the exploratory Kelly problem by considering both the amount
of investment in stock and the portion of wealth in stock as
the control for a general time-varying temperature
parameter.
From the perspective of risk measures, <cit.> first introduced another kind of index that can measure the randomness of actions called Choquet regularization. They showed that the optimal exploration distribution of LQ control problem with infinite time horizon is no longer necessarily Gaussian as in <cit.>, but are dictated by the choice of Choquet
regularizers. As mentioned in <cit.>, Choquet regularizers have a number of theoretical and practical advantages to be used for RL. In particular, they satisfy several “good” properties such as quantile additivity, normalization, concavity, and consistency with convex order (mean-preserving spreads) that facilitate analysis as regularizers. Moreover, the availability of a large class of Choquet regularizers makes it possible to compare and choose specific regularizers to achieve certain objectives specific to each learning problem. To the best of our knowledge, there is no literature using other regularizers rather than entropy to quantify the information gain of exploring the environment for practical problems. Thus, it is natural to consider some practical exploratory stochastic control problems using the Choquet regularizers for regularization.
This paper mainly studies the continuous-time exploratory mean-variance (EMV) problem as in <cit.> in which we replace the differential entropy used for regularization with the Choquet regularizers. When looking for pre-committed optimal strategies as the goal, the MV model can be converted into a LQ model in finite time horizon by <cit.>. The form of the LQ-specialized HJB equation suggests that the problem boils down to a static optimization where the given Choquet regularizer is to be maximized over distributions with given mean and variance, which has been solved by <cit.>. Since the EMV portfolio selection is formulated in a finite time horizon, we show that the optimal distributions form a location–scale family with a time-decaying variance whose shape depends on the choice of Choquet regularizers. This suggests that the level of exploration decreases as the time approaches the end of the planning horizon. We further give the optimal exploration strategies under several specific Choquet regularizers, and observe insights of the perfect separation between exploitation and exploration in the mean and variance of the optimal distribution and the positive effect of a random environment on learning.
Inspired by the form of entropy, we further reformulate the continuous-time Choquet-regularized RL problem based on a variant of Choquet regularizers – logarithmic Choquet regularizers. Because of the monotonicity of the logarithmic function, the problem can still be solved by maximizing the Choquet regularizer over distributions with given mean and variance. However, since the regularizers affect the value function, it is to be expected that the variance of the optimal distributions is different. Explicitly expressed costs of exploration for the two different forms of regularizers and close connections between the classical and the EMV problems are discussed. It is interesting to see that the costs of exploration for the two EMV problems are quite different. To be specific, with the Choquet regularizers, the exploration cost depends on the unknown model parameters and the specific regularizers, while with logarithmic Choquet regularizers, the derived exploration cost only depends on the exploration parameter and the time horizon, and it is the same as the cost when using entropy as the regularizer in <cit.>.
Finally, based on the policy improvement and convergence theorems, we design an RL algorithm to solve the EMV problems following the continuous-time policy gradient method proposed by <cit.>, and then simulate it. For some concrete choices of the Choquet integral, we show that our RL algorithms based on Choquet regularizers and logarithmic Choquet regularizers perform on par with the one in <cit.>, where the differential entropy is applied and Gaussian is always the optimal exploration distribution.
The rest of this paper is organized as follows. Section <ref> introduces the MV problem under the Choquet regularizations. Section <ref> solves the continuous-time EMV problem and gives several examples. Section <ref> discusses the corresponding results under the variant of Choquet regularizations. Section <ref> introduces the RL algorithm, and the simulation results of the algorithm are summarized in Section <ref>. Section <ref> concludes the paper.
§ FORMULATION OF PROBLEM
§.§ Choquet regularizers
We assume that (Ω, ℱ, ℙ) is an atomless probability space. With a slight abuse of notation,
let
ℳ denote both the set of (probability) distribution functions of real random variables and the set of Borel probability measures on ℝ, with
the obvious identity Π(x)≡Π((-∞, x]) for x ∈ℝ
and
Π∈ℳ. We denote by ℳ^p⊂ℳ, p∈[1,∞), the set of distribution functions or probability measures with finite p-th moment. For a random variable X and a distribution Π, we write X ∼Π if the distribution of X is Π under ℙ, and X d= Y if two random variables X and Y have the same distribution.
We denote by μ and σ^2 the mean and variance functionals on ℳ^2, respectively; that is, μ(Π) is the mean of Π and σ^2(Π) the variance of Π for Π∈ℳ^2. We denote by ℳ^2(m,s^2) the set of Π∈ℳ^2 satisfying μ(Π)=m∈ and σ^2(Π)=s^2>0.
In <cit.>, the Choquet regularizer is defined to measure and manage the level of exploration for RL based on a subclass of signed Choquet integrals <cit.>. Given a concave function h:[0,1]→ℝ of bounded variation with h(0)=h(1)=0 and Π∈ℳ,
the Choquet regularizer Φ_h on ℳ is defined as
Φ_h(Π)≡∫ h∘Π([x,∞))x̣:=∫_-∞^0[h∘Π ( [x,∞) )-h(1)]x̣+∫_0^∞ h∘Π([x,∞))x̣.
Note that the concavity of h is equivalent to several other properties, and in particular,
to that Φ_h is a concave mapping which means that
Φ_h(λΠ_1 + (1-λ) Π_2 ) ≥λΦ_h( Π_1) + (1-λ) Φ_h(Π_2 ) for all Π_1,Π_2∈ℳ and λ∈ [0,1],
and consistency with convex order means
Φ_h( Π_1 ) ≤Φ_h(Π_2 ) for all Π_1,Π_2∈ℳ with Π_1 ≤_cx Π_2.[Π_1 is smaller than Π_2 in convex order, denoted by Π_1 ≤_cx Π_2, if 𝔼[f( Π_1)] ≤𝔼[f( Π_2)] for all convex functions f, provided that the two expectations exist. It is immediate that Π_1 ≤_cx Π_2 implies 𝔼[Π_1]≤𝔼[Π_2].]
If Π_1 ≤_cx Π_2, then Π_2 is also called a mean-preserving spread of Π_1, which intuitively means that Π_2 is more spread-out (and hence “more random") than Π_1. The set of such functions h:[0,1]→ℝ is denoted by ℋ.
We remark that the above properties indeed suggest that Φ_h(Π) serves as a measure of randomness for Π, since both a mixture and a mean-preserving spread introduce extra randomness.
On the other hand, h(0)=h(1)=0 is equivalent to Φ_h(δ_c)=0, ∀ c∈, where δ_c is the Dirac measure at c. That is, degenerate distributions do not have any randomness measured by Φ_h.
Choquet regularizers include, for instance, range, mean-median deviation, the Gini deviation, and inter-ES differences; see Section 2.6 of <cit.>.
By Lemma 2.2 of <cit.>, Φ_h is well defined, non-negative, and
location invariant and scale homogeneous for h∈ℋ.[We call Φ_h to be location invariant and scale homogeneous if
Φ_h(Π')=λΦ_h (Π)
where Π' is the distribution of λ X+c for λ >0, c∈ and X∼Π.] The properties imply that any distribution for exploration can be measured in non-negative values. Moreover, the measurement of randomness does not depend on the location and is linear in its scale, which make Φ_h a meaningful regularizer that measures the level of randomness, or the level of exploration in the RL context.
For a distribution Π∈ℳ, let its left-quantile for p∈(0,1] be defined as
Q_Π(p)=inf{x∈: Π(x) ≥ p} .
Next, we give a lemma which we will rely on when considering the EMV problem formulated by <cit.>. Let h' be the right-derivative of h and ‖ h'‖_2=(∫_0^1(h'(p))^2dp)^1/2.
If h is continuous and not constantly zero, then
a maximizer Π^* to the optimization problem
max_Π∈ℳ^2Φ_h (Π) subject to μ(Π) = m and σ^2(Π) = s^2
has the following quantile function
Q_Π^*(p) = m + s h'(1-p) / ||h'||_2, p∈ (0,1),
and the maximum value of (<ref>) is Φ_h(Π^*)= s||h'||_2.
By Lemma <ref>, <cit.> presented many examples linking specific exploratory distributions with the corresponding Choquet regularizers and generated some common exploration measures including ϵ-greedy, three-point, exponential, uniform and Gaussian; see their Examples 4.3–4.6 and Sections 4.3–4.5.
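The lemma can be checked numerically through the quantile representation Φ_h(Π)=∫_0^1 Q_Π(p)h'(1-p)dp (the representation assumed here is the one used later, via Lemma 2.3 of <cit.>). The sketch below uses the Gini generator h(p)=p-p^2 and confirms that the proposed quantile function attains the value s‖h'‖_2, whereas a Gaussian law with the same mean and variance attains a strictly smaller value.

```python
import numpy as np
from scipy.stats import norm

def choquet_regularizer(Q, h_prime, n=200_000):
    # assumed quantile representation: Phi_h(Pi) = int_0^1 Q_Pi(p) * h'(1-p) dp
    p = (np.arange(n) + 0.5) / n
    return np.mean(Q(p) * h_prime(1.0 - p))

# Gini generator h(p) = p - p^2, so h'(p) = 1 - 2p and ||h'||_2 = 1/sqrt(3)
h_prime = lambda p: 1.0 - 2.0 * p
h_norm = np.sqrt(1.0 / 3.0)

m, s = 0.5, 2.0
Q_star = lambda p: m + s * h_prime(1.0 - p) / h_norm       # maximizer from the lemma
print(choquet_regularizer(Q_star, h_prime))                # ~ s * ||h'||_2 = 2/sqrt(3)

Q_gauss = lambda p: m + s * norm.ppf(p)                    # same mean and variance, Gaussian
print(choquet_regularizer(Q_gauss, h_prime))               # strictly smaller value
```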
The result in Lemma <ref> can be extended to a more general case involving higher moments. For a>1, Theorem 5 in <cit.> showed that if the uncertain set is given by
ℳ^a(m, v)={Π∈ℳ^a: μ(Π)=m 𝔼[|Π-m|^a] ≤ v^a},
the optimization problem
max_Π∈ℳ^aΦ_h (Π), for p∈(0,1), can be solved by
Q_Π(p) = m + v (|h'(1-p)-c_h,b|^b/(h'(1-p)-c_h,b)) [h]_b^(1-b) if h'(1-p)-c_h,b≠ 0, and Q_Π(p) = m otherwise.
Here, b ∈[1, ∞] is the Hölder conjugate of a, namely b=(1-1 / a)^-1, or equivalently, 1/a+1/b=1,
c_h, b=x ∈ℝminh^'-x_b and [h]_b=min _x ∈ℝh^'-x_b=h^'-c_h, b_b,
with
h^'-x_b=(∫_0^1|h^'(p)-x|^b d p)^1 / b, b<∞ and h^'-x_∞=max _p ∈[0,1]|h^'(p)-x|, x ∈ℝ.
§.§ Continuous-time EMV problem
The classical MV problem has been well studied in the literature; see e.g., <cit.>, <cit.> and <cit.>. We first briefly introduce the classical MV problem in continuous time.
Let T be a fixed investment planning horizon and {W_t,0⩽ t ⩽ T} be a standard Brownian motion defined on a given filtered probability space (Ω,ℱ,{ℱ_t}_0⩽ t⩽ T,ℙ) that satisfies the usual conditions.
Assume that a financial market consists of a riskless asset and only one risky asset, where the riskless asset has a constant interest rate r>0 and the risky asset has a price process governed by
Ṣ_t=S_t(μṭ+σẈ_t), 0⩽ t
⩽ T,
with S_0=s_0>0, where μ∈ℝ and σ >0 are the mean and volatility parameters, respectively. The Sharpe ratio of the risky asset is defined by ρ=(μ-r)/σ. Let u={u_t,0⩽ t ⩽ T} denote the discounted amount invested in the risky asset at time t, and the rest of the wealth is invested in the risk-free asset. By (<ref>), the discounted wealth process {X^u_t,0⩽ t ⩽ T} for a strategy u_t is then given as
X̣_t^u =σ u_t(ρṭ+Ẉ_t), 0⩽ t
⩽ T,
with X_0^u=x_0∈ℝ. Under the continuous-time MV setting, we aim to solve the following constrained optimization problem
min_u Var[X_T^u] subject to E[X_T^u]=z,
where {X_t^u,0⩽ t ⩽ T} satisfies the dynamics (<ref>) under the investment strategy u, and z ∈ℝ is an investment target determined at t=0 as the desired mean payoff at the end of the investment horizon [0,T].
By applying a Lagrange multiplier w, we can transform (<ref>) into an unconstrained problem
min_uE[(X^u_T)^2]-z^2-2w(E[X_T^u]-z)=min_uE[(X^u_T-w)^2]-(w-z)^2.
The problem in (<ref>) was well studied by <cit.>, and it can be solved analytically, whose solution u^* depends on w. Then the original constraint E[X_T^u^*]=z determines the value of w.
Employing the method in <cit.> and <cit.>, we give the “exploratory" version of the state dynamic (<ref>) motivated by repetitive learning in RL. In this formulation, the control process is now randomized, leading to a distributional or exploratory control process denoted by Π={Π_t,0⩽ t ⩽ T}. Here, Π_t∈ℳ(U) is the probability distribution function for control at time t, with ℳ(U) being the set of distribution functions on U. For such a given distributional control Π∈ℳ(U), the exploratory version of the state dynamics in (<ref>) is changed to
X̣_t^Π=b(Π_t)ṭ+σ(Π_t)Ẉ_t, 0<t⩽ T,
with X_0^Π=x_0, where
b(Π):=∫_ℝρσ u Π̣(u) and σ(Π):=√(∫_ℝσ^2 u^2 Π̣(u)).
Denote the mean and variance processes associated with the control process Π by μ_t and σ^2_t for 0⩽ t⩽ T:
μ_t: =∫_ℝu Π̣_t(u),
σ^2_t: =∫_ℝu^2Π̣_t(u)-μ_t^2.
Then it follows from (<ref>)–(<ref>) that
X̣_t^Π=ρσμ_tṭ+σ√(μ_t^2+σ_t^2)Ẉ_t,
with X_0^Π=x_0. We refer to <cit.> for more detailed explanation of where this exploratory formulation comes from.
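For concreteness, the exploratory dynamics (<ref>) can be simulated with a simple Euler–Maruyama step once the mean and variance processes of the distributional control are specified; the helper below is an illustrative sketch only, and is not part of the algorithm introduced later.

```python
import numpy as np

def simulate_exploratory_wealth(mu_fn, sigma_fn, x0, rho, sigma, T=1.0, n=252, rng=None):
    """Euler-Maruyama sketch of dX_t = rho*sigma*mu_t dt + sigma*sqrt(mu_t^2 + s_t^2) dW_t,
    where mu_fn(t, x) and sigma_fn(t, x) return the mean and std of the control Pi_t."""
    rng = rng or np.random.default_rng()
    dt = T / n
    x = x0
    for i in range(n):
        t = i * dt
        m, s = mu_fn(t, x), sigma_fn(t, x)
        x += rho * sigma * m * dt + sigma * np.sqrt(m**2 + s**2) * np.sqrt(dt) * rng.normal()
    return x
```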
Next, we use a Choquet regularizer Φ_h to measure the level of exploration, and the aim of the exploratory control is to achieve a continuous-time EMV problem under the framework of RL. For any fixed w ∈ℝ, we get the Choquet-regularized EMV problem by adding an exploration weight λ>0, which reflects the strength of the exploration desire:
min_Π∈𝒜(0,x_0)E[(X_T^Π-w)^2-λ∫_0^TΦ_h(Π_t)ṭ]-(w-z)^2,
where 𝒜(t,x) is the set of all admissible controls Π for (t,x)∈ [0,T)×ℝ. A control process Π∈𝒜(t,x) is said to be admissible if (i) for t⩽ s⩽ T, Π_s ∈ℳ(ℝ) a.s.; (ii) for A ∈ℬ(ℝ), {∫_AΠ_s(u)ụ,t ⩽ s ⩽ T} is ℱ_s-progressively measurable;
(iii) E[∫_t^T(μ_s^2+σ_s^2)ṣ]<∞; and
(iv) E[(X_T^Π-w)^2-λ∫_t^TΦ_h(Π_s)ṣ|X_t^Π=x]<∞.
The value function is then defined as
V(t,x;w):=inf_Π∈𝒜(t,x)E[(X_T^Π-w)^2-λ∫_t^TΦ_h(Π _s)ds|X_t^Π=x]-(w-z)^2,
and the value function under feedback control Π is
V^Π(t,x;w):=E[(X_T^Π-w)^2-λ∫_t^TΦ_h(Π _s)ṣ|X_t^Π=x]-(w-z)^2.
§ SOLVING EMV PROBLEM
In this section, we aim to solve the Choquet-regularized EMV problem.
Firstly, we have following result based on Lemma <ref>.
Let a continuous h∈ℋ be given.
For any Π={Π_t}_t≥ 0∈𝒜(t,x) with mean process {μ_t}_t≥ 0 and variance process {σ_t^2}_t≥ 0, there exists Π^*={Π^*_t}_t≥ 0∈𝒜(t,x) given by
Q_Π^*_t(p) = μ_t + σ_t h'(1-p) / ||h'||_2, p∈ (0,1), t≥0,
which has the same mean and variance processes satisfying
V^Π^* (t,x;w)⩽ V^Π(t,x;w).
By (<ref>), it is clear that the term E[(X_T^Π-w)^2|X_t^Π=x] in (<ref>) only depends on the mean process {μ_t}_t≥ 0 and the variance process {σ_t^2}_t≥ 0 of {Π_t}_t≥ 0. Thus,
for any fixed t≥ 0, choose Π_t^* with mean μ_t and variance σ_t^2
that maximizes Φ_h(Π). Together with Lemma <ref>, we get the desired result.
Proposition <ref> indicates that the control problem in (<ref>)
is maximized within a location–scale family of distributions,[Recall that given a distribution Π, the location-scale family of Π is the set of all distributions Π_a,b parameterized by a∈ℝ and b>0 such that Π_a,b(x)=Π((x-a)/b) for all x ∈ℝ.] which is determined only by h.
We know from Remark <ref> that if both the reward term and the dynamic process only depend on the mean process μ_t and the a-th moment process σ^a_t of Π_t for t≥ 0, then we have V^Π^* (t,x;w)⩽ V^Π(t,x;w)
with Π^*_t satisfying
Q_Π_t^*(p) = μ_t + σ_t (|h'(1-p)-c_h,b|^b/(h'(1-p)-c_h,b)) [h]_b^(1-b) if h'(1-p)-c_h,b≠ 0, and Q_Π_t^*(p) = μ_t otherwise.
Using Bellman's dynamic programming principle, we get
V(t,x;w)=inf_Π∈𝒜(t,x)E[-λ∫_t^sΦ_h(Π_v)dv+V(s,X_s^Π;w)|X_t^Π=x].
Then we can deduce from (<ref>)
that V satisfies the HJB equation
V_t(t,x;w)+min_Π∈ℳ(ℝ)[12σ^2(Π)V_xx(t,x;w)+b(Π)V_x(t,x;w)-λΦ_h(Π)]=0.
By (<ref>), the HJB equation in (<ref>) is equivalent to
V_t(t,x;w)+min_Π∈ℳ(ℝ)[σ^22(μ(Π)^2+σ(Π)^2)V_xx(t,x;w)+ρσμ(Π)V_x(t,x;w)-λΦ_h(Π)]=0,
with terminal condition V(T,x;w)=(x-w)^2-(w-z)^2. Here, we assume that Π has finite second-order moment, and μ(Π) and σ(Π)^2 are the mean and variance of Π, respectively.
We now pay attention to the minimization in (<ref>). Let
φ(t,x,Π)=σ^22(μ(Π)^2+σ(Π)^2)V_xx(t,x;w)+ρσμ(Π)V_x(t,x;w)-λΦ_h(Π).
Note that, apart from the term Φ_h(Π), φ(t,x,Π) depends on Π only through μ(Π) and σ(Π)^2; hence we get
min_Π∈ℳ(ℝ)φ(t,x,Π)=min_m∈ℝ,s>0 min_{Π∈ℳ(ℝ): μ(Π)=m, σ(Π)^2=s^2}φ(t,x,Π),
and the inner minimization problem is equivalent to
max_Π∈ℳ(R)Φ_h(Π) subject to μ(Π)=m, σ(Π)^2=s^2.
By Lemma <ref>, the maximizer Π^* of (<ref>) whose quantile function is Q_Π^*(p) satisfies
Q_Π^*(p)=m+sh'(1-p)‖ h'‖_2,
and
Φ_h(Π^*)=s‖ h'‖_2.
Then the HJB equation in (<ref>) is converted to
V_t(t,x;w)+min_m∈ℝ,s>0[σ^22(m^2+s^2)V_xx(t,x;w)+ρσ mV_x(t,x;w)-λ s‖ h'‖_2]=0.
By the first-order conditions, we get the minimizer of (<ref>)
m^*=-(ρ/σ)(V_x/V_xx), and s^*=λ‖ h' ‖_2/(σ^2V_xx).
Bringing m^* and s^* back into (<ref>), we can rewrite (<ref>) as
V_t-(ρ^2/2)(V_x^2/V_xx)-(λ^2‖ h'‖_2^2)/(2σ^2V_xx)=0.
By the terminal condition V(T,x;w)=(x-w)^2-(w-z)^2, a smooth solution to (<ref>) is given by
V(t,x;w)=(x-w)^2e^-ρ^2(T-t)-(λ^2‖ h'‖_2^2/(4ρ^2σ^2))(e^ρ^2(T-t)-1)-(w-z)^2.
Then we can deduce from (<ref>), (<ref>) and (<ref>) that
m^*=-(ρ/σ)(x-w), and
s^*=(λ‖ h'‖_2/(2σ^2))e^ρ^2(T-t),
and the dynamic (<ref>) under Π^* becomes
X̣_t^*=-ρ^2(X_t^*-w)ṭ+√(ρ^2(X_t^*-w)^2+(λ^2‖ h'‖^2_2/(4σ^2))e^2ρ^2(T-t))Ẉ_t
with X_0^*=x_0.
Finally, we try to calculate w. By E[max_t∈[0,T](X_t^*)^2]<∞ and using Fubini theorem, we get
E[X_t^*]=x_0+E[∫_0^t-ρ^2(X_s^*-w)ds]=x_0+∫_0^t-ρ^2(E[X_s^*]-w)ṣ.
Hence, E[X_t^*]=(x_0-w)e^-ρ^2t+w. It follows from E[X_T^*]=z that
w=(ze^ρ^2T-x_0)/(e^ρ^2T-1).
We summarize the above results in the following theorem.
The value function of Choquet-regularized EMV problem in (<ref>) is given by
V(t,x;w)=(x-w)^2e^-ρ^2(T-t)-(λ^2‖ h'‖_2^2/(4ρ^2σ^2))(e^ρ^2(T-t)-1)-(w-z)^2,
and the corresponding optimal control process is Π^*, whose quantile function is
Q_Π^*(p)=-(ρ/σ)(x-w)+(λ h'(1-p)/(2σ^2))e^ρ^2(T-t),
with the mean and variance of Π^*
μ(Π^*)=-(ρ/σ)(x-w), and σ(Π^*)^2=(λ^2‖ h'‖_2^2/(4σ^4))e^2ρ^2(T-t).
The optimal wealth process under Π^* is the unique solution of the SDE
X̣_t^*=-ρ^2(X_t^*-w)ṭ+√(ρ^2(X_t^*-w)^2+(λ^2‖ h'‖^2_2/(4σ^2))e^2ρ^2(T-t))Ẉ_t
with X_0^*=x_0. Finally, the Lagrange multiplier w is given by
w=(ze^ρ^2T-x_0)/(e^ρ^2T-1).
Along with the similar lines of the verification theorem in <cit.> (see their Theorem 4), we can verify that for any w∈ℝ, (<ref>) is indeed the value function and the optimal control Π^* is admissible.
There are several observations to note in this result. We can see from (<ref>) that
for any Choquet regularizer, the optimal exploratory distribution is uniquely determined by h'. Different h corresponds to different Choquet regularizers; hence h will certainly affect the way and the level of exploration. Also, since h'(x) is the “probability weight" put on x when calculating the (nonlinear) Choquet expectation (see e.g., <cit.> and <cit.>), the more weight put on the level of exploration, the more spread out the exploration becomes around the current position. In addition, we point out that if we fix the value of ‖ h'‖^2_2 for different Choquet regularizers by multiplying or dividing by a constant, the means and variances of the different optimal distributions are equal.
Moreover, the optimal control processes under Φ_h has the same expectation as the one in <cit.> when the differential entropy is used as a regularizer, which is also identical to the optimal control of the classical, non-exploratory MV problem, and the expectation is independent of λ and h. Meanwhile, the variance of optimal control process is independent of state x but decreases over time, which is different from <cit.> where an infinite horizon counterpart is studied. This is intuitive because by exploration, one can get more information over time, and then the demand and aspiration of exploration decreases. In a sense, the expectation represents exploitation which means making the best decision based on existing information, and the variance represents exploration. As a result, the
observations above show a perfect separation between exploitation
and exploration.
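A quick Monte Carlo sanity check of the theorem, and of the choice of Lagrange multiplier enforcing E[X_T^*]=z, can be run by discretizing the SDE above. The market parameters below are illustrative rather than taken from the paper's simulation tables, and ‖h'‖_2 is set to 1 as for the Gaussian and exponential generators.

```python
import numpy as np

# illustrative parameters (not from the simulation section)
mu_a, sigma_a, r, T, n = 0.30, 0.20, 0.02, 1.0, 252
rho = (mu_a - r) / sigma_a
x0, z, lam, h_norm = 1.0, 1.4, 0.01, 1.0          # h_norm = ||h'||_2
w = (z * np.exp(rho**2 * T) - x0) / (np.exp(rho**2 * T) - 1.0)

rng = np.random.default_rng(0)
dt = T / n
x = np.full(100_000, x0)
for i in range(n):
    t = i * dt
    m = -(rho / sigma_a) * (x - w)                                      # mean of Pi* (exploitation)
    s = lam * h_norm / (2.0 * sigma_a**2) * np.exp(rho**2 * (T - t))    # std of Pi* (exploration)
    x = x + rho * sigma_a * m * dt \
          + sigma_a * np.sqrt(m**2 + s**2) * np.sqrt(dt) * rng.normal(size=x.size)
print("E[X_T] ~", x.mean(), " target z =", z)     # should be close to z = 1.4
```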
In the following example, we show optimal exploration samplers under the EMV framework for some concrete choices of h studied in <cit.>. Theorem <ref> yields that the mean of the optimal distribution is independent of h, so we will specify only its quantile function and variance for each h discussed below.
(i) Let h(p)=-plog(p). Then we have
Φ_h (Π)=∫_0^∞Π([x,∞)) log(Π([x,∞)))x̣,
which is the cumulative residual entropy defined in <cit.> and <cit.>; see Example 4.5 of <cit.>. The optimal policy is a shifted-exponential distribution given as
Π^*(u; t,x)=1-exp{-2σ^2/λ e^ρ^2(T-t)(u+ρ/σ(x-w)) -1}.
Since h'^2_2=1, the variance of Π^* is given by
(σ^*(x))^2=λ^24σ^4e^2ρ^2(T-t).
(ii) Let h(p)=∫_0^p z(1-s)ṣ, where z is the standard normal quantile function. We have Φ_h (Π) =∫_0^1 Q_Π(p) z (p) p̣; see Example 4.6 of <cit.>. The optimal policy is a normal distribution given by
Π^*(· ; t,x)= N(-ρσ(x-w), λ^24σ^4e^2ρ^2(T-t)),
owing to the fact that h'^2_2=1.
(iii)
Let h(p)=p-p^2. Then Φ_h(Π) = 𝔼[|X_1-X_2|]/2 for independent X_1,X_2∼Π, which is the Gini mean difference; see Section 4.5 of <cit.>. The optimal policy Π^*(·;x) is a uniform distribution given as
U[-ρσ(x-w)-λ2σ^2e^ρ^2(T-t),-ρσ(x-w)+λ2σ^2e^ρ^2(T-t)].
Since h'^2_2=1/3, the variance of Π^* is given by (σ^*(x))^2=λ^2e^2ρ^2(T-t)/12σ^4.
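The three optimal samplers in this example can be drawn from directly by inverse-transform sampling of the quantile function Q_Π^*(p)=m+s h'(1-p)/‖h'‖_2; the sketch below takes the mean m and standard deviation s as given inputs (in the theorem they would be -(ρ/σ)(x-w) and (λ‖h'‖_2/(2σ^2))e^ρ^2(T-t)).

```python
import numpy as np
from scipy.stats import norm

def sample_optimal_policy(h_name, mean, std, size, rng):
    """Inverse-transform sampling of Pi* from Q(p) = mean + std * h'(1-p) / ||h'||_2."""
    p = rng.uniform(size=size)
    if h_name == "exponential":          # h(p) = -p log p, ||h'||_2 = 1
        hp, norm2 = -np.log(1.0 - p) - 1.0, 1.0
    elif h_name == "gaussian":           # h(p) = int_0^p z(1-s) ds, ||h'||_2 = 1
        hp, norm2 = norm.ppf(p), 1.0
    elif h_name == "uniform":            # h(p) = p - p^2 (Gini), ||h'||_2 = 1/sqrt(3)
        hp, norm2 = 2.0 * p - 1.0, 1.0 / np.sqrt(3.0)
    return mean + std * hp / norm2

rng = np.random.default_rng(0)
for name in ("exponential", "gaussian", "uniform"):
    u = sample_optimal_policy(name, mean=0.5, std=0.2, size=200_000, rng=rng)
    print(name, u.mean(), u.std())       # all three share the same mean and standard deviation
```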
§ AN ALTERNATIVE FORM OF CHOQUET REGULARIZERS
As mentioned in Introduction, for an absolutely continuous Π, Shannon's differential entropy, defined as
DE(Π):=-∫_Π'(x)log(Π'(x))x̣
is commonly used for exploration–exploitation balance in RL; see <cit.>, <cit.> and <cit.>. It admits a different quantile representation (see <cit.>)
DE(Π)= ∫_0^1 log (Q'_Π(p)) p̣.
It is clear that DE is location invariant, but not scale homogeneous. It is not quantile additive either. Therefore, DE is not a Choquet regularizer.
Inspired by the logarithmic form of DE, we consider another EMV problem:
V(t,x;w):=inf_Π∈𝒜(t,x)E[(X_T^Π-w)^2-λ∫_t^TlogΦ_h(Π_s)ds|X_t^Π=x]-(w-z)^2,
where we apply the logarithmic form of Φ_h as the regularizer to measure and manage the level of exploration. According to the monotonicity and concavity of logarithmic function, we can easily verify that logΦ_h is still a concave mapping:
logΦ_h(λΠ_1 + (1-λ) Π_2 ) ≥log(λΦ_h( Π_1) + (1-λ) Φ_h(Π_2 )) ≥λlogΦ_h(Π_1)+(1-λ)logΦ_h(Π_2)
Π_1,Π_2∈ℳλ∈ [0,1],
and consistent with convex order:
logΦ_h( Π_1 ) ≤logΦ_h(Π_2 ), Π_1,Π_2∈ℳΠ_1Π_2.
Compared with Φ_h, logΦ_h is not necessarily non-negative. However, non-negativity does not inherently affect the exploration. Further, since Φ_h(Π) is zero when Π is a Dirac measure, we have logΦ_h(δ_c)=-∞ for all c∈ℝ. The location invariance of logΦ_h is obvious. As for scale homogeneity, logΦ_h is no longer linear in its scale, but we have logΦ_h(Π')=logΦ_h(Π)+logλ for any λ>0, where Π' is the distribution of λ X for X∼Π. It is interesting to see that the level of randomness is captured by the additive term logλ. Based on the observations above, we find that logΦ_h has many similarities with DE in capturing randomness.
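This scale behavior is easy to verify numerically via the quantile representation of Φ_h assumed earlier; the check below uses the Gini generator and a logistic quantile function purely as an example.

```python
import numpy as np

h_prime = lambda q: 1.0 - 2.0 * q                    # Gini generator h(p) = p - p^2
p = (np.arange(100_000) + 0.5) / 100_000

def phi_h(Q):
    # assumed quantile representation Phi_h(Pi) = int_0^1 Q_Pi(p) h'(1-p) dp
    return np.mean(Q(p) * h_prime(1.0 - p))

Q = lambda q: np.log(q / (1.0 - q))                  # logistic quantile, just an example
for c in (1.0, 2.0, 5.0):
    scaled = phi_h(lambda q, c=c: c * Q(q))
    # ratio equals c (scale homogeneity of Phi_h); log-gap equals log c for log Phi_h
    print(c, scaled / phi_h(Q), np.log(scaled) - np.log(phi_h(Q)))
```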
We remark that maximizing Φ_h over ℳ^2(m,s^2) is equivalent to maximizing logΦ_h over ℳ^2(m,s^2). In the following theorem, we give the optimal result of (<ref>) directly. Since the procedure is similar to Section <ref>, we omit the details here.
The value function of (<ref>) is given by
V(t,x;w)=(x-w)^2e^-ρ^2(T-t)+(λρ^2/4)(T^2-t^2)-(λ/2)(ρ^2T+log(λ‖ h' ‖_2^2/(2eσ^2)))(T-t)-(w-z)^2,
and the corresponding optimal control process is Π^* with quantile function
Q_Π^*(p) =-(ρ/σ)(x-w)+√(λ/(2σ^2‖ h'‖^2_2)) e^(1/2)ρ^2(T-t)h'(1-p).
Moreover, the mean and variance of Π^* are
μ(Π^*) =-(ρ/σ)(x-w), and σ(Π^*)^2=(λ/(2σ^2))e^ρ^2(T-t).
The optimal wealth process under Π^* is the unique solution of the SDE
X̣_t^* =-ρ^2(X_t^*-w)ṭ+√(ρ^2(X_t^*-w)^2+(λ/2)e^ρ^2(T-t))Ẉ_t
with X_0^*=x_0.
Finally, the Lagrange multiplier w is given by
w=ze^ρ^2T-x_0/e^ρ^2T-1.
By (<ref>), we can see that the optimal exploratory distribution is also uniquely determined by h'. Since the form of logΦ_h affects the value function, even though the form of optimal distributions is the same, it is to be expected that the variance of the optimal distributions is different from (<ref>).
It is worth pointing out that the mean and variance of the optimal distributions are the same as those in <cit.> where the differential entropy is used as a regularizer, which is an interesting observation. This is because, for a payoff function depending only on the mean and variance processes of the distributional control, the Gaussian distribution maximizes the entropy when the mean and variance are fixed, and the maximized MV-constrained entropy and logΦ_h are both logarithmic in the given standard deviation and independent of the mean.
Moreover, since different h corresponds to different exploratory distributions, our optimal exploratory distributions are no longer necessarily Gaussian as in <cit.>, and are dictated by the choice of Choquet regularizers, which can be such as Gaussian, uniform distribution or exponential distribution.
Parallel to Example <ref>, we give Example <ref>. Theorem <ref> yields that both the mean and the variance of the optimal distribution are independent of h, so we will specify only its quantile function.
(i) Let h(p)=-plog(p). Then we have
logΦ_h (Π)=log∫_0^∞Π([x,∞)) log(Π([x,∞)))x̣.
The optimal policy is a shifted-exponential distribution given as
Π^*(u; t,x)=1-exp{-√(2σ^2/λ e^ρ^2(T-t))(u+ρ/σ(x-w)) -1}.
(ii) Let h(p)=∫_0^p z(1-s)ṣ, where z is the standard normal quantile function. We have logΦ_h (Π) =log∫_0^1 Q_Π(p) z (p) p̣. The optimal policy is a normal distribution given by
Π^*(· ; t,x)= N(-ρσ(x-w),λ2σ^2e^ρ^2(T-t)).
(iii)
Let h(p)=p-p^2. Then logΦ_h(Π) = log𝔼[|X_1-X_2|]-log2 for independent X_1,X_2∼Π. The optimal policy Π^*(·;x) is a uniform distribution given as
U[-ρσ(x-w)-√(3λ2σ^2e^ρ^2(T-t)),-ρσ(x-w)+√(3λ2σ^2e^ρ^2(T-t))].
Next, we consider the solvability equivalence between the classical and the exploratory MV problems. Here, “solvability equivalence” implies that the solution of one
problem will lead to that of the other directly, without needing to solve it separately.
Recall the classical MV problem in Section <ref>. The explicit forms of optimal control and value function, denoted respectively by u^* and V^cl, were given by Theorem 3.2-(b) of <cit.>. We provide the solvability equivalence between the classical and the exploratory MV problems defined by (<ref>), (<ref>) and (<ref>), respectively. Since the proof is similar to that of Theorem 9 in Appendix C of <cit.>, we omit the details here.
The following three statements (a), (b), (c) are equivalent.
(a) The function V(t,x;w)=(x-w)^2e^-ρ^2(T-t)-λ^2‖ h'‖_2^24ρ^2σ^2(e^ρ^2(T-t)-1)-(w-z)^2, (t,x)∈ [0,T]×ℝ, is the value function of the EMV problem (<ref>) and the optimal feedback control is Π^*, whose quantile function is
Q_Π^*(p) =-ρσ(x-w)+λ h'(1-p)2σ^2e^ρ^2(T-t).
(b) The value function V(t,x;w)=(x-w)^2e^-ρ^2(T-t)+λρ^24(T^2-t^2)-λ2(ρ^2T+logλ‖ h' ‖_2^22eσ^2)(T-t)-(w-z)^2, (t,x)∈ [0,T]×ℝ, is the value function of the EMV problem (<ref>) and the optimal feedback control is Π^*, whose quantile function is
Q_Π^*(p) =-ρσ(x-w)+√(λ2σ^2‖ h'‖^2_2)h'(1-p)e^1/2ρ^2(T-t).
(c) The function V^cl(t,x;w)=(x-w)^2e^-ρ^2(T-t)-(w-z)^2, (t,x)∈ [0,T]×ℝ, is the value function of the classical MV problem (<ref>) and the optimal feedback control is
u^*(t,x;w)=-ρσ(x-w).
Moreover, the three problems above all have the same Lagrange multiplier
w=ze^ρ^2T-x_0/e^ρ^2T-1.
From the proposition above, we naturally want to explore more connections between (a), (b) and (c). In fact, they have the following convergence property.
Suppose that statement (a) or (b) or (c) of Proposition <ref> holds. Then for each (t,x,w)∈ [0,T]×ℝ×ℝ,
lim_λ→ 0Π^*(· ;t,x;w)= lim_λ→ 0Π^*(· ;t,x;w)=δ_u^*(t,x;w)(·) weakly,
and
lim_λ→ 0|V(t,x;w)-V^cl(t,x;w)|=0, and lim_λ→ 0|V(t,x;w)-V^cl(t,x;w)|=0.
The weak convergence is obvious and the convergence of value function follows from
lim_λ→ 0λ^2‖ h'‖_2^24ρ^2σ^2(e^ρ^2(T-t)-1)=0,
and lim_λ→ 0λ2logλ‖ h'‖_2^22eσ^2=0.
Next, we examine the “cost of exploration" – the loss in the original (i.e.,
non-regularized) objective due to exploration, which was originally defined and derived in <cit.> for problems with entropy regularization.
Due to the explicit inclusion of exploration in the objectives (<ref>) and (<ref>), the cost of the EMV problems are defined as
C^u^*,Π^*(0,x_0;w)=(V(0,x_0;w)+λ𝔼[∫_0^TΦ_h(Π_t^*)dt|X_0^Π^*=x_0])-V^cl(0,x_0;w),
and
C^u^*,Π^*(0,x_0;w)=(V(0,x_0;w)+λ𝔼[∫_0^TlogΦ_h(Π_t^*)dt|X_0^Π^*=x_0])-V^cl(0,x_0;w).
Suppose that statement (a) or (b) or (c) of Proposition <ref> holds. Then the cost of exploration for the EMV problem are, respectively, given as
C^u^*,Π^*(0,x_0;w)=(λ^2‖ h'‖_2^2/(4ρ^2σ^2))(e^ρ^2T-1),
and
C^u^*,Π^*(0,x_0;w)=λ T/2 .
Note that
Φ_h(Π^*_t)=σ(Π^*_t)‖ h'‖_2=λ‖ h'‖_2^22σ^2e^ρ^2(T-t),
and
logΦ_h(Π^*_t)=log(σ(Π^*_t)‖ h'‖_2)=1/2log(λ‖ h'‖^2_22σ^2e^ρ^2(T-t)).
Bringing Φ_h(Π^*_t) and logΦ_h(Π^*_t) back into (<ref>) and (<ref>), respectively, we can get (<ref>) and (<ref>).
The costs of exploration for the two EMV problems are quite different. When Φ_h is regarded as the regularizer, the derived exploration cost does depend on the unknown model parameters through h, μ and σ. (<ref>) implies that, with other parameters being equal, to reduce the exploration cost one should choose regularizers with smaller values of h'_2.
Moreover, by (<ref>), we have
C^u^*, Π^*(0,x_0;w)=(λ‖h'‖_2 /(2 ρ^2)) σ^*(x_0)-λ^2‖ h'‖_2^2/(4ρ^2σ^2),
meaning that the cost grows with the standard deviation σ^*(x_0) of the exploratory control at time 0, and is inversely proportional to the squared Sharpe ratio ρ^2.
In contrast, when logΦ_h is regarded as the regularizer, the derived exploration cost only depends on λ and T. It is also interesting to note that C^u^*,Π^*(0,x_0;w) in (<ref>) is the same as the one using DE as the regularizer; see Theorem 3.4 of <cit.>.
Nevertheless, they also have some common features. The exploration cost increases as the exploration weight λ and the exploration horizon T increase, due to more emphasis placed on exploration. In addition, the costs are both independent of the Lagrange multiplier, which suggests that the exploration cost will not increase when the agent is more aggressive (or risk-seeking) reflected by the expected target z or equivalently the Lagrange multiplier w.
To compare C^u^*,Π^*(0,x_0;w) and C^u^*,Π^*(0,x_0;w), we have
C^u^*,Π^*(0,x_0;w)/C^u^*,Π^*(0,x_0;w)=(λ‖ h'‖_2^2/(2σ^2))·(e^ρ^2T-1)/(ρ^2T)=(λ‖ h'‖_2^2/(2σ^2))(1+∑_n=1^∞ρ^2nT^n/(n+1)!).
Then we can easily verify which regularizer has smaller exploration cost under determined market parameters. In general, from a cost point of view, when λ, ‖ h'‖_2 and ρ^2 are small enough and σ is relatively large, Φ_h is a good choice to reduce cost; otherwise logΦ_h may be a better choice.
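The comparison above can be made concrete with a few lines of code; the parameter values below are illustrative.

```python
import numpy as np

def cost_choquet(lam, h_norm, rho, sigma, T):
    # exploration cost under Phi_h: lam^2 ||h'||_2^2 / (4 rho^2 sigma^2) * (exp(rho^2 T) - 1)
    return lam**2 * h_norm**2 / (4.0 * rho**2 * sigma**2) * (np.exp(rho**2 * T) - 1.0)

def cost_log_choquet(lam, T):
    # exploration cost under log Phi_h: lam * T / 2, independent of market parameters and of h
    return lam * T / 2.0

lam, h_norm, T, r = 0.1, 1.0, 1.0, 0.02
for mu_a, sigma_a in [(0.30, 0.20), (0.10, 0.40)]:     # large vs small Sharpe ratio
    rho = (mu_a - r) / sigma_a
    print(f"rho^2={rho**2:.2f}:",
          cost_choquet(lam, h_norm, rho, sigma_a, T),
          cost_log_choquet(lam, T))
```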
§ RL ALGORITHM DESIGN
§.§ Policy improvement
In the RL setting, policy improvement is an important step which ensures the existence of a new policy that is at least as good as any given policy. In Proposition <ref>, we have shown that the EMV problem in (<ref>) can be maximized within a location–scale family of distributions. Such a property also applies to the EMV problem in (<ref>) when logΦ_h is regarded as the regularizer.
In the following theorem, by Itô's formula, we can also verify that for any given policy, when the regularizer is Φ_h or logΦ_h, there always exists a better policy in a location-scale family which depends on h. So we can search the optimal exploration distribution only in this location-scale family.
Let w∈ℝ be fixed and Π (resp. Π) be an arbitrarily given admissible feedback control whose corresponding value function is V^Π(t,x;w) (resp. V^Π(t,x;w)) under regularizer Φ_h (resp. logΦ_h). Suppose that V^Π(t,x;w) (resp. V^Π(t,x;w))∈ C^1,2([0,T)×ℝ∩ C^0([0,T]×ℝ)) and V^Π_xx(t,x;w) (resp. V^Π_xx(t,x;w))>0 for any (t,x)∈[0,T)×ℝ. Suppose further that the feedback control Π (resp. Π) whose quantile function is
Q_Π(p) =-(ρ/σ)(V_x^Π/V_xx^Π)+(λ/(σ^2V_xx^Π))h'(1-p)
resp. Q_Π(p) =-(ρ/σ)(V_x^Π/V_xx^Π)+√(λ/(σ^2‖ h'‖^2_2V_xx^Π))h'(1-p)
is admissible. Then
V^Π(t,x;w) ⩽ V^Π(t,x;w), (t,x)∈ [0,T)×ℝ,
resp. V^Π(t,x;w) ⩽V^Π(t,x;w), (t,x)∈ [0,T)×ℝ.
Let Π={Π_s,s∈[t,T]} and Π ={Π _s,s∈[t,T]} be the open-loop control generated by the given feedback control policies Π and Π, respectively. By assumption, Π and Π are admissible. Applying Itô's formula, we have for any (t,x)∈ [0,T]×ℝ,
V^Π(s,X_s^Π) =V^Π(t,x)+∫_t^s
V_t^Π(v,X_v^Π)ṿ+∫_t^s V_x^Π(v,X_v^Π)X̣_v^Π
+12∫_t^sV_xx^Π(v,X_v^Π)<̣X^Π,X^Π>_v
=V^Π(t,x)+∫_t^sV_x^Π(v,X_v^Π)σ√(μ(Π_v)^2+σ(Π_v)^2)Ẉ_v
+∫_t^s[V_t^Π(v,X_v^Π)+ρσμ(Π_v)V_x^Π(v,X_v^Π)+σ^22(μ(Π_v)^2+σ(Π_v)^2)V_xx^Π(v,X_v^Π)]ṿ.
Let τ_n:=inf{s⩾ t:∫_t^s σ^2 V_x^Π(v,X_v^Π)^2(μ(Π_v)^2+σ(Π_v)^2)ṿ⩾ n} be a family of stopping times, then substituting s∧τ_n into (<ref>) and taking expectation we get
V^Π(t,x) =E[ V^Π(s∧τ_n,X_s∧τ_n^Π)-∫_t^s∧τ_n[V_t^Π(v,X_v^Π)+ρσμ(Π_v)V_x^Π(v,X_v^Π).
+.σ^22(μ(Π_v)^2+σ(Π_v)^2)V_xx^Π(v,X_v^Π)]ṿ|X_t^Π=x ].
On the other hand, by standard argument we have
V_t^Π(t,x)+ρσμ(Π)V_x^Π(t,x)+σ^22(μ(Π)^2+σ(Π)^2)V_xx^Π(t,x)-λΦ_h(Π)=0.
It follows that
V_t^Π(t,x)+min_Π'∈𝒫(ℝ)[ρσμ(Π')V_x^Π(t,x)+σ^22(μ(Π')^2+σ(Π')^2)V_xx^Π(t,x)-λΦ_h(Π')]⩽ 0.
By (<ref>), we know Π is the minimizer of (<ref>). Substituting Π into (<ref>) and bringing back to (<ref>) we have
V^Π(t,x)⩾E[V^Π(s∧τ_n,X_s∧τ_n^Π)-∫_t^s∧τ_nλΦ_h(Π_v)dv|X_t^Π=x].
Taking s=T in (<ref>) and sending n to ∞, we obtain
V^Π(t,x)⩾E[V^Π(T,X_T^Π)-λ∫_t^TΦ_h(Π_v)dv|X_t^Π=x]=V^Π(t,x).
The proof of regularizer logΦ_h is almost the same, so we omit it.
Let Π^0(u;t,x,w) be a feedback control which has quantile function
Q_Π^0(p)=Q_Π^0(p)=a(x-w)+c_1e^c_2(T-t)h'(1-p),
and {Π^n(u;t,x,w)} and {Π^n(u;t,x,w)} be the sequence of feedback controls updated by (<ref>) and (<ref>), respectively. Denoted by {V^Π^n(t,x;w)} and {V^Π^n(t,x;w)} the sequence of corresponding value functions. Then
lim_n→∞Π^n(·;t,x,w) =Π^*(·;t,x,w) weakly,
resp. lim_n→∞Π^n(·;t,x,w) =Π^*(·;t,x,w) weakly,
and
lim_n→∞V^Π^n(t,x;w) =V(t,x;w), (t,x)∈ [0,T),
resp. lim_n→∞V^Π^n(t,x;w) =V(t,x;w), (t,x)∈ [0,T)×ℝ,
for any (t,x,w)∈[0,T]×ℝ×ℝ, where Π^* and Π^* in (<ref>) and (<ref>) are the optimal controls, and V and V are the value functions given by (<ref>) and (<ref>).
Here we only provide the detailed proof for the case of Φ_h, and the results of logΦ_h can be derived in the same way.
Let {Π^0_s} be the open-loop control generated by Π^0. We can verify that {Π^0_s} is admissible.
The dynamic of wealth under Π^0 is
X̣_t^Π^0=ρσμ(Π^0)ṭ+σ√(μ(Π^0)^2+σ(Π^0)^2)Ẉ_t, X_t^Π^0=x,
and the value function under Π^0 is
V^Π^0(t,x)=E[∫_t^T-λΦ_h(Π_v^0)dv+(X_T^Π^0-w)^2|X_t^Π^0=x]-(w-z)^2.
By Feynman–Kac formula, we deduce that V^Π^0 satisfies the following PDE
V_t(t,x)+ρσμ(Π^0)V_x(t,x)+12σ^2(μ(Π^0)^2+σ(Π^0)^2)V_xx(t,x)-λΦ_h(Π^0)=0,
with terminal condition V^Π^0(T,x)=(x-w)^2-(w-z)^2. Solving this equation we obtain
V^Π^0(t,x;w)=(x-w)^2e^(2ρσ a+σ^2a^2)(T-t) +F_0(t),
where F_0(t) is a smooth function which only depends on t. Obviously, V^Π^0(t,x;w) satisfies the conditions of Theorem <ref>, so we can use (<ref>) to obtain Π^1 whose quantile function is
Q_Π^1(p) =-ρσ(x-w)+λ h'(1-p)2σ^2e^(2ρσ a+σ^2a^2)(T-t),
with
μ(Π^1)=-ρσ(x-w), and σ^2(Π^1)=λ^2‖ h'‖_2^24σ^2e^2(2ρσ a+σ^2 a^2)(T-t).
By repeating the above program with Π^1, we have
V^Π^1(t,x;w)=(x-w)^2e^-ρ^2(T-t) +F_1(t),
where F_1(t) is a smooth function which only depends on t. Using Theorem <ref> again we obtain Π^2 whose quantile function is
Q_Π^2(p)=-ρσ(x-w)+λ h'(1-p)2σ^2e^ρ^2(T-t),
with
μ(Π^2)=-ρσ(x-w), and σ^2(Π^2)=λ^2‖ h'‖_2^24σ^4e^2ρ^2(T-t).
By (<ref>)-(<ref>), we know that Π^2 is optimal.
The above theorem shows that when designing a RL algorithm, the distribution with the quantile form (<ref>) can be selected as the initial distribution to ensure the convergence.
§.§ The EMV algorithm
In this section, we aim to solve (<ref>) and (<ref>) by assuming that there is no knowledge about the underlying parameters. One method to overcome this problem is to replace the parameters by their estimations. However, as mentioned in Introduction, the estimations are usually very sensitive to the sample. We will give an offline RL algorithm based on the Actor-Critic algorithm in <cit.>, <cit.> and <cit.>. The Actor-Critic algorithm is essentially a policy-based algorithm, but additionally learns the value function in order to help the policy function learn better. Meanwhile, we use a self-correcting scheme in <cit.> to learn the Lagrange multiplier w.
Here, we only present the RL algorithm for the case of Φ_h to solve (<ref>).
When using logΦ_h as the regularizer, we only need to replace Φ_h by logΦ_h and modify the parameterization appropriately.
In continuous-time setting, we first discretize [0,T] into N small intervals [t_i,t_i+1], (i=0,1,...,N-1) whose length is equal to T/N=Δ t. We use policy gradient principle to update Actor; and for Critic, <cit.> showed that the time-discretized algorithm converges as Δ t → 0 as long as the corresponding discrete-time algorithms converges, thus we adopt a learning approach of temporal difference error (the TD error; see <cit.> and <cit.>).
Assume that Π is a given admissible feedback policy and let 𝒟={(t_i,x_t_i),i=0,1,...,N} be a set of samples, the initial sample is (0,x_0), then for i=1,2,...,N, we sample u_t_i-1 from Π_t_i-1 and get x_t_i at t_i.
On the one hand, we have
V^Π(t,x)=E[(X_T^Π-w)^2-λ∫_t^TΦ_h(Π_s)ds|X_t^Π=x]-(w-z)^2,
so the TD error at t_i is
δ_i=-λΦ_h(Π_t_i)Δ t+V^Π(t_i+1,X_t_i+1)-V^Π(t_i,X_t_i), i = 0,1,...,N-1.
On the other hand, based on (<ref>), we can parameterize the Critic value by
V^θ(t,x)=(x-w)^2e^-θ_2(T-t)-θ_1e^θ_0(T-t)-(w-z)^2.
For a single point t_i, we define the loss function as
L(θ)=(1/2)(U_t_i-V^θ(t_i,X_t_i))^2,
where U_t_i is an estimate of V(t_i,X_t_i). We take U_t_i to be the bootstrapping estimate -λΦ_h(Π_t_i)Δ t+V^θ(t_i+1,X_t_i+1) in (<ref>), i.e., the temporal-difference target, which is treated as a constant and does not generate a gradient for updating the value function. So the gradient of the loss function is
∇_θL(θ)=-(-λΦ_h(Π_t_i)Δ t+V^θ(t_i+1,X_t_i+1)-V^θ(t_i,X_t_i))∇_θV^θ(t_i,X_t_i).
Let α_θ be the learning rate of θ, then by (<ref>), we can get the gradient and the update rule of θ with a set of sample 𝒟:
∇θ =-∑_i=0^N-1∂ V^θ∂θ(t_i,x_t_i)[V^θ(t_i+1,x_t_i+1)-V^θ(t_i,x_t_i)-λΦ_h(Π_t_i^ϕ)Δ t],
and
θ⟵θ - α_θ∇θ.
Based on Theorem <ref>, we can parameterize the policy by Π ^ϕ with quantile function
Q_Π_t^ϕ(p)=-ϕ_0(x-w)+e^1/2 ϕ_1+1/2ϕ_2(T-t)h'(1-p).
By Lemma 2.3 of <cit.>, we know that
Φ_h(Π_t^ϕ)=∫_0^1(-ϕ_0(x-w)+e^1/2 ϕ_1+1/2ϕ_2(T-t)h'(p)^2)p̣.
Let g(t,x;ϕ)=∇_θV^Π ^ϕ(t,x) be the policy gradient of Π ^ϕ and p(t,ϕ)=Φ_h(Π_t^ϕ), together with Theorem 5 of <cit.>, g(t,x;ϕ) has the following representation:
g(t,x;ϕ)=E[∫_t^T{∂∂ϕlogΠ̇^ϕ_t(Ṿ^Π _ϕ(s,X_s^Π ^ϕ)-λ p(s,ϕ)ṣ)-λ∂ p∂ϕ(s,ϕ)ṣ}|X_t^Π ^ϕ=x.],
where Π̇^ϕ_t is the density function of Π^ϕ_t. Let α_ϕ be the learning rate of ϕ; then by (<ref>), we can also get the gradient and the update rule of ϕ with a set of samples 𝒟:
∇ϕ = ∑_i=0^N-1{∂∂ϕ.logΠ̇^ϕ(u_t_i|t_i,x_t_i)[V^θ(t_i+1,x_t_i+1)-V^θ(t_i,x_t_i)-λΦ_h(Π_t_i^ϕ)Δ t]
-.λ∂ p∂ϕ(t_i,x_t_i,ϕ)Δ t},
and
ϕ⟵ϕ - α_ϕ∇ϕ.
Let α_w be the learning rate of w; then by the constraint E[X_T]=z we can get the standard stochastic approximation update rule:
w_n+1=w_n-α_w(1m∑_i=j-m+1^jx_T^(i) -z),
where x_T^(i) is the terminal wealth of sample path i and j≡ 0 (mod m).
We summarize the algorithm as pseudocode in Algorithm 1.
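To make Algorithm 1 concrete, the following is a condensed Python sketch of the actor–critic loop described above, specialized to the Gaussian generator (so ‖h'‖_2=1 and Π^ϕ is Gaussian). The market parameters are only used to generate samples and are hidden from the learner; learning-rate decay and the other training heuristics mentioned in the next section are omitted for brevity, and the gradients of the log-density and of p(t,ϕ)=Φ_h(Π_t^ϕ) are written out in closed form for this special case.

```python
import numpy as np

# hidden market (used only to generate samples; unknown to the learner)
mu_a, sigma_a, r = 0.30, 0.20, 0.02
rho = (mu_a - r) / sigma_a
T, N = 1.0, 252
dt = T / N
x0, z, lam = 1.0, 1.4, 0.01
alpha_theta = alpha_phi = alpha_w = 0.01
M = 10                                    # sample-average size for the Lagrange multiplier

# Gaussian generator: ||h'||_2 = 1, so Pi^phi is N(-phi0*(x-w), exp(phi1 + phi2*(T-t)))
theta = np.array([1.0, 1.0, 1.0])         # critic parameters (theta_0, theta_1, theta_2)
phi = np.array([0.5, -3.0, 1.0])          # actor parameters (phi_0, phi_1, phi_2)
w = z                                     # initial guess for the Lagrange multiplier

def V(t, x, th):                          # critic V^theta(t, x)
    return (x - w)**2 * np.exp(-th[2] * (T - t)) - th[1] * np.exp(th[0] * (T - t)) - (w - z)**2

def dV_dtheta(t, x, th):
    return np.array([-th[1] * (T - t) * np.exp(th[0] * (T - t)),
                     -np.exp(th[0] * (T - t)),
                     -(x - w)**2 * (T - t) * np.exp(-th[2] * (T - t))])

def policy(t, x, ph):                     # mean and variance of Pi^phi_t
    return -ph[0] * (x - w), np.exp(ph[1] + ph[2] * (T - t))

rng = np.random.default_rng(0)
terminal = []
for episode in range(2000):
    xs, us = [x0], []
    x = x0
    for i in range(N):                    # collect one sample path under the current policy
        mean, var = policy(i * dt, x, phi)
        u = mean + np.sqrt(var) * rng.normal()
        x = x + sigma_a * u * (rho * dt + np.sqrt(dt) * rng.normal())
        us.append(u)
        xs.append(x)
    terminal.append(x)

    grad_theta, grad_phi = np.zeros(3), np.zeros(3)
    for i in range(N):
        t, x_i, x_next, u = i * dt, xs[i], xs[i + 1], us[i]
        mean, var = policy(t, x_i, phi)
        reg = np.sqrt(var)                # Phi_h(Pi^phi_t) = std, since ||h'||_2 = 1
        delta = V(t + dt, x_next, theta) - V(t, x_i, theta) - lam * reg * dt   # TD error
        grad_theta += -dV_dtheta(t, x_i, theta) * delta
        dlogpi = np.array([-(x_i - w) * (u - mean) / var,
                           -0.5 + (u - mean)**2 / (2 * var),
                           (T - t) * (-0.5 + (u - mean)**2 / (2 * var))])
        dreg = np.array([0.0, 0.5 * reg, 0.5 * (T - t) * reg])
        grad_phi += dlogpi * delta - lam * dreg * dt
    theta = theta - alpha_theta * grad_theta
    phi = phi - alpha_phi * grad_phi
    if (episode + 1) % M == 0:            # self-correcting update of the Lagrange multiplier
        w = w - alpha_w * (np.mean(terminal[-M:]) - z)
```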
§ SIMULATION
In this section, we conduct simulations and test our algorithm presented in Algorithm 1. In our setting, we take investment horizon to be T=1 and time step to be Δ t=1/252, which can be interpreted as the MV problem considered over one-year period, and then the number of time grids is N=252 naturally. We can take the annualized interest rate to be r=2% and take the annualized return μ and volatility σ from {-50%, -30%, -10%, 10%, 30%, 50%} and {10%, 20%, 30%, 40%}, respectively. Let the initial wealth to be x_0=1 and the annualized target return on the terminal wealth is 40% which yields z=1.4.
For our algorithm, we take the number of episodes K=20000, and take the sample average size for Lagrange multiplier m=10. Based on Proposition <ref> and Remark <ref>, to control their exploration costs, the exploration weight λ is taken as 0.01 when we apply Φ_h as the regularizer, and 0.1 for logΦ_h being the regularizer. The learning rates are taken as α_θ=α_ϕ=α_w=0.01 with decay rate l(j)=j^-0.51.
Based on Examples <ref> and <ref>, we mainly investigate the simulation results for three exploration distributions: Gaussian, exponential and uniform. We present the mean and the variance of the last 200 terminal wealths, and the corresponding Sharpe ratio ((mean-1)/√(variance)). The simulation results of our algorithm are presented in Tables <ref>–<ref>.
For different values of μ and σ, we take means of every 100 terminal wealth for different h to show the tendency of the expectation of terminal wealth in Figures <ref> and <ref>, respectively.
We find that the algorithm performs more significantly as |μ| increases or as σ decreases with other parameters fixed. When μ<0, exponential distribution seems to be underperforming, but in fact after enough iterations, the sample mean will still fluctuate around 1.4. In addition, when |μ| is small and σ is large relatively, the performance is bad. This is because larger σ reflects higher level of randomness of the environment, and at this time the significance of exploration becomes smaller.
The performance under different λ with the Gaussian generator is shown in Figures <ref> and <ref>. We can see that when ρ^2 is relatively large, λ has a more significant impact on algorithm performance under the regularizer Φ_h than under logΦ_h. This is consistent with Remark <ref>. Finally, we show one sample trajectory of u_t_i under different h in Figure <ref>. It is clear from Figure <ref> that the trajectories of u_t_i under different regularizers are different, and the data from the exponential distribution are more spread out compared with the normal and uniform distributions. In particular, most samples from the exponential distribution are small while some are very large, which may be the reason why the exponential distribution sometimes underperforms. Since our parameters and target settings are the same as those in <cit.>, we can see that our RL algorithms based on Choquet regularizers and logarithmic Choquet regularizers perform on par with the one in <cit.>. Compared with the result in <cit.> that Gaussian is always optimal, the availability of a large class of Choquet regularizers makes it possible to choose specific regularizers to achieve certain objectives using exploratory samplers such as exponential, uniform and Gaussian.
§ CONCLUSION
For the first time, we applied the Choquet-regularized continuous-time RL framework proposed by <cit.> to practical problems. We studied the MV problem under Choquet regularization and its logarithmic form. Several different optimal exploration distributions of different h were given, and when ‖ h'‖_2 is fixed, the optimal exploration distributions have the same mean and variance. Unlike the infinite time horizon results in <cit.>, the variance decreases over time in the finite time horizon problem. At the same time, the mean of the optimal exploration distribution is related to the current state x and independent of λ and h, which is equal to the optimal action of the classical MV problem. The variance of the optimal exploration distribution is related to λ and h and independent of state x, and even independent of h under logarithmic regularization. These also showed the perfect separation between exploitation and exploration in the mean and variance of the optimal distributions as in <cit.> when entropy is used as a regularizer.
Further, we have obtained that the two regularization problems converge to the traditional MV problem, and compared the exploration costs of the two regularizations. We found that the exploration cost under the logarithmic Choquet regularization is consistent with the exploration cost under the entropy regularization, only related to λ and time range T, while the exploration cost under Choquet regularization is also related to market parameters. Through simulation, we compared the two kinds of regularization. In general, when the market fluctuates greatly and the willingness to explore is not strong, the cost of Choquet regularization is lower. On the contrary, it may be better to use logarithmic Choquet regularizers for regularization.
There are still some open questions. First of all, we regard λ as an exogenous variable. From the perspective of exploration cost, making λ endogenous and adjustable could help us better control the exploration cost. As time goes by, the information obtained through exploration increases, so the willingness to explore will also change, which further justifies making λ time-dependent. Secondly, the current Choquet integral can only deal with a one-dimensional action space, so how to extend the Choquet regularizers to multi-dimensional settings, in order to accommodate more problems, remains a challenging question. We will study these issues in the future.
Acknowledgements. This work was supported by the National Natural Science Foundation of China (No. 11931018 and 12271274)
10[Dai et al.Dai et al.2023]DDJ23
Dai, M., Dong, Y. and Jia, Y. (2023). Learning equilibrium mean‐variance strategy. Mathematical Finance. doi.org/10.1111/mafi.12402.
[DoyaDoya2000]D20
Doya, K. (2000). Reinforcement learning in continuous time and space. Neural Computation, 12(1), 219–245.
[Gilboa and SchmeidlerGilboa and Schmeidler1989]GS89
Gilboa, I. and Schmeidler, D. (1989). Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18(2), 141–153.
[Gu et al.Gu et al.2016]GLGTL16
Gu, S., Lillicrap, T., Ghahramani, Z., Turner, R. E. and Levine, S. (2016). Q-prop: Sample-efficient policy gradient with an off-policy critic. arXiv: 1611.02247.
[Guo et al.Guo et al.2020]GXZ20
Guo, X., Xu, R. and Zariphopoulou, T. (2020). Entropy regularization for mean field games with learning. arXiv: 2010.00145.
[Haarnoja et al.Haarnoja et al.2017]HTAL17
Haarnoja, T., Tang, H., Abbeel, P. and Levine, S. (2017). Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning, pages 1353–1361.
[Wang et al.Han et al.2023]HWZ23
Han, X., Wang, R. and Zhou, X. Y. (2023). Choquet regularization for continuous-time reinforcement learning. SIAM Journal on Control and Optimization, forthcoming.
[Hu and ChenHu and Chen2020]HC20
Hu, T. and Chen, O. (2020). On a family of coherent measures of variability. Insurance: Mathematics and Economics, 95, 173–182.
[Jia and ZhouJia and Zhou2022a]JZ22a
Jia, Y. and Zhou, X. Y. (2022a). Policy evaluation and temporal-difference learning in continuous time and space: A martingale approach. Journal of Machine Learning Research, 23(154), 1–55.
[Jia and ZhouJia and Zhou2022b]JZ22b
Jia, Y. and Zhou, X. Y. (2022b). Policy gradient and actor-critic learning in continuous time and space: Theory and algorithms. Journal of Machine Learning Research, 23(154), 1–55.
[Jiang et al.Jiang et al.2022]JSW22
Jiang, R., Saunders, D. and Weng, C. (2022). The reinforcement learning Kelly strategy. Quantitative Finance, 22(8), 1445–1464.
[Konda and TsitsiklisKonda and Tsitsiklis1999]KT99
Konda, V. and Tsitsiklis, J. (2000). Actor-critic algorithms. In Advances in Neural Information Processing Systems, pages 1008–1004.
[Li and Ng2000]LN00
Li, D. and Ng, W. L. (2000). Optimal dynamic portfolio selection: Multiperiod mean-variance formulation. Mathematical Finance, 10(3), 387–406.
[Li et al.2002]LZL02
Li, X., Zhou, X. Y. and Lim, A. E. (2002). Dynamic mean-variance portfolio selection with no-shorting constraints.
SIAM Journal on Control and Optimization, 40(5), 1540–1555.
[Liu et al.2020]LCLW20
Liu, F., Cai, J., Lemieux, C. and Wang, R. (2020). Convex risk functionals: Representation and applications.
Insurance: Mathematics and Economics, 90, 66–79.
[MarkowitzMarkowitz1952]M52
Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7(1), 77–91.
[Neu et al.Neu et al.2017]NGJ17
Neu, G., Jonsson, A. and Gómez, V. (2017). A unified view of entropy-regularized
markov decision processes. arXiv: 1705.07798.
[Pesenti et al.Pesenti et al.2020]PWW20
Pesenti, S., Wang, Q. and Wang R. (2020). Optimizing distortion risk metrics with distributional uncertainty. arXiv: 2011.04889.
[QuigginQuiggin1982]Q82
Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior and Organization, 3(4), 323–343.
[Rao et al.Rao et al.2004]RCVW04
Rao, M., Chen, Y., Vemuri, B. C. and Wang, F. (2004). Cumulative residual entropy: A new measure of information. IEEE Transactions on Information Theory, 50, 1220–1228.
[Sutton and BartoSutton and Barto2018]SB18
Sutton, R. S. and Barto, A. G. (2018). Reinforcement learning:An introduction. Cambridge, MA: MIT Press.
[Sunoj and SankaranSunoj and Sankaran2012]SS12
Sunoj, S. M. and Sankaran, P. G. (2012). Quantile based entropy function. Statistics and Probability Letters, 82(6), 1049–1053.
[Wang et al.Wang et al.2020a]WZZ20a
Wang, H., Zariphopoulou, T. and Zhou, X. Y. (2020a). Reinforcement learning in continuous time and space: a stochastic control approach. Journal of Machine Learning Research, 21(1), 8145–8178.
[Wang and ZhouWang and Zhou2020]WZ20
Wang, H. and Zhou, X. Y. (2020). Continuous-time mean-variance portfolio selection: A reinforcement learning framework. Mathematical Finance, 30(4), 1273–1308.
[Wang et al.Wang et al.2020]WWW20b Wang, R., Wei, Y. and Willmot, G. E. (2020b). Characterization, robustness and aggregation of signed Choquet integrals. Mathematics of Operations Research, 45(3), 993–1015.
[ZiebartZiebart2010]Z10 Ziebart, B. D. (2010). Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis.
[ZhouZhou2021]Z21 Zhou, X. Y. (2021). Curse of optimality, and how do we break it. SSRN: 3845462.
[Zhou and LiZhou and Li2000]ZL00
Zhou, X. Y. and Li, D. (2000). Continuous-time mean-variance portfolio selection: A stochastic LQ framework. Applied Mathematics and Optimization, 42(4), 19–33.
|
http://arxiv.org/abs/2307.02506v1
|
20230705015632
|
Semi-classical description of electrostatics and quantization of electric charge
|
[
"Kolahal Bhattacharya"
] |
physics.gen-ph
|
[
"physics.gen-ph"
] |
]Semi-classical description of electrostatics and quantization of electric charge
St. Xavier's College (Autonomous), Kolkata-700016, India
[email protected]
In this work, we present an explanation of the electric charge quantization based on a semi-classical model of electrostatic fields. We claim that in electrostatics, an electric charge must be equal to a rational multiple of the elementary charge of an electron. However, the charge is quantized if the system has certain boundary conditions that force the wavefunction representing an electric field to vanish at specific surfaces. Next, we develop the corresponding model for the electric displacement vector. It is demonstrated that a number of classical results, e.g. bending of field lines at the interface of two dielectric media, method of images, etc. are all consistent with the predictions of this model. We also present the possible form of Gauss's law (or Poisson's equation), to find the wavefunctions of the field from a source charge distribution, in this model.
Keywords: charge quantization, semi-classical methods, anti-Hermitian operators, magnetic monopole.
§ INTRODUCTION
A prime example of the unresolved enigmas of theoretical physics is the quantization of electric charge. Millikan's famous oil drop experiment <cit.> in 1909 demonstrated that electric charge always appears as an integral multiple of the elementary electric charge e. This result continues to hold even today, with great experimental accuracy <cit.>. However, in spite of the extraordinary success of Maxwell's theory of electrodynamics and quantum electrodynamics (QED), the theoretical justification of this empirical result has remained unexplained until now. Moreover, the development of particle physics in the last seventy years has demonstrated that quarks, the constituent Fermions of Hadrons carry fractional charge ± e/3, ±2e/3 etc. There is no satisfactory explanation of these experimental results as well. In this paper, we will explain the charge quantization in the non-relativistic domain, as observed by Millikan.
In 1931, P. Dirac argued <cit.> that the charge quantization must happen if there is a magnetic monopole. He showed that the unobservability of phase in the quantum domain allows for singularities, manifested as sources of magnetic fields. This leads to the condition that the product of electric and magnetic charges should be an integral multiple of ħ c/2, where ħ represents the reduced Planck's constant and c denotes the speed of light in the vacuum. This marked the beginning of an organised search for a magnetic monopole somewhere in the universe that continues even today because no magnetic monopole has ever been located[1]. [1]Pierre de Maricourt of the thirteenth century tried to separate the poles of a magnet by breaking magnets into pieces <cit.>. So, this search is perhaps a millennium old.
There are other thought-provoking motivations to search for magnetic monopoles. For example, the presence of a magnetic monopole will lead to a symmetric extension of Maxwell's laws for classical electrodynamics that are invariant under duality transformation <cit.>. This symmetry will also suggest a generalised Lorentz force:
F⃗=q(ℰ⃗+v⃗×ℬ⃗)+g(ℬ⃗-v⃗/c^2×ℰ⃗)
acting on a dyonic particle with electric charge q and magnetic charge g moving with a velocity v in the electric field ℰ⃗ and magnetic field ℬ⃗. Now, if dyons exist, then space inversion is no longer a valid symmetry in electrodynamics, as noted by Ramsey <cit.>. Parity symmetry is also broken maximally in weak interactions. This means that if magnetic monopoles exist, then, through the quantization of electric charges, they would also help explain why parity is not a good symmetry of nature <cit.>. The magnetic monopole has also been proposed as a solution to the strong CP problem <cit.>. So, the issue of the quantization of electric charge is deeply connected to a large number of interesting questions in fundamental physics, and therefore there have been consolidated experimental efforts to search for magnetic monopoles. However, searches in cosmic rays, in bound matter, and at colliders, via both direct and indirect methods, have never found evidence of monopoles <cit.>. The MoEDAL collaboration at CERN is an ongoing experiment, and they recently reported the result of their first run <cit.>, where they ruled out the existence of dyons carrying a magnetic charge up to five units of the Dirac charge and an electric charge up to 200 times the electron’s charge, with mass limits between 870 and 3120 GeV. Perhaps this hints that the charge quantization problem can be approached by a different route which does not require the existence of a magnetic monopole.
In this paper, we approach the problem using a quantum physical model of electrostatics. In the recent past, there have been some works on understanding the quantum or semi-classical nature of electrostatic fields <cit.>. Both authors appear to agree that electrostatic fields could be described by non-travelling wavegroups, but otherwise the frameworks are different. The second of these two works demystified the nonlocality problem of the Aharonov-Bohm effect, which had been a puzzle since 1959 <cit.>. After being proved experimentally <cit.>, this experiment was compared with the Michelson and Morley experiment of recent times <cit.>. Apart from providing an explanation of the nonlocality problem, the article <cit.> pointed out that electric charge must be a rational multiple of the elementary electric charge e. With this cue, the semi-classical model introduced in <cit.> will be used in our present quest, in the hope that it can throw some light on the problem.
We first obtain the prediction of the model in the context of the charge quantization problem. Then, we address another well-known observation: electrostatic field lines can, in some cases, exhibit features very similar to light rays in geometrical optics. For example, the bending of electric field lines at the interface between two dielectric media is analogous to the refraction of light. Similarly, in the examples of the method of images (an elegant method <cit.> to solve Laplace's equation for the electrostatic potential under appropriate boundary conditions), the situations are very much like the reflection of light by mirrors.
This model <cit.> asserts that the electrostatic field can exhibit a semi-classical nature when eΦ t∼ħ, where Φ is the electrostatic potential and t is the time over which a charge is subjected to the potential Φ. If eΦ t≫ħ, the classical nature of the fields manifests. On the other hand, if eΦ t∼ħ, then in a source-free region the wavefunctions of the electrostatic field satisfy a wave equation that has the form of a homogeneous Helmholtz equation.
An analogous situation arises during the transition from ray optics to wave optics <cit.>. In the limit where the wavelength of the light cannot be neglected, the wave nature of light is manifested. Historically, Huygens contemplated plane and spherical wavelets of light, envelopes of which proceed in the forward direction for the propagation of light. In modern formalism, these waves are identified as the solutions to the reduced wave equation <cit.>. Though this wave model of light lacks the polarisation picture, it can be used to explain the reflection and refraction of light. Therefore, the situations with optical analogies in electrostatics may indicate an underlying semi-classical model for classical electrostatics. Construction of such a mathematical model will be very exciting and it may reveal some unknown features of electrostatic field theory.
In <cit.>, the wavefunctions of the electrostatic field have been introduced merely as mathematical objects representing fields in the regions devoid of source charges. In the current work, we extend this formalism in the presence of source charges in section <ref> to find the equations corresponding to the wavefunctions that represent different components of the electrostatic field. In this formalism, we shall observe the presence of an anti-Hermitian operator that satisfies the spectral theorem and will investigate the classical limit of the model. Next, in section <ref>, we present the proof of the quantization of charge, as observed in experiments. To address the question of the similarity between electrostatic field lines and light rays in geometrical optics, we shall develop the semi-classical description for the electric displacement vector 𝒟⃗ in section <ref>. In the following section <ref>, we shall describe how the semi-classical model of ℰ⃗ and 𝒟⃗ fields helps in understanding the refraction and reflection of field lines across a boundary of two media of different dielectric constants. In section <ref>, we compare the relation of this formalism with the quantum limit of Gauss's law (or Poisson's equation), as presented in <cit.>. Finally, we will conclude with a discussion of the implication of these observations.
§ WAVE EQUATION IN ELECTROSTATICS
One can conceive a variational principle for the electrostatic field (and other curl-free vector fields) <cit.>:
δ∫_P_1^P_2ℰ ds=0.
In Eq.(<ref>), the integral is evaluated along a curve that is always superimposed with the local direction of the field. We find that the field lines satisfy the Euler-Lagrange equation:
∇ℰ=d/ds(ℰd r/ds),
exactly in a way similar to the light rays <cit.>. It has recently been shown that the electrostatic field may exhibit a semi-classical behaviour if eΦ t∼ħ <cit.>. Under source-free conditions, one can define the electric field operator as a momentum conjugate to the position coordinates:
p̂⃗̂ψ_E=-iγ̅∇ψ_E=(x̂p̂_x+ ŷp̂_y+ẑp̂_z)ψ_E=(x̂ℰ_x+ŷℰ_y+ẑℰ_z)ψ_E=ℰ⃗ψ_E,
where (p̂_x,p̂_y,p̂_z) denotes the momentum operators in (x,y,z) directions along which x̂,ŷ,ẑ are the unit vectors; γ̅≡γ/(2π)=ħ/(e· t) is a scaling factor, which represents 1/(2 π) times the minimum possible electrostatic potential γ in the problem. In a region devoid of source charge density ρ (where the conjugate momentum operators p̂_x etc. do not operate on the corresponding components of the field e.g. ℰ_x etc.), the non-travelling wavefunction ψ_E satisfies:
γ̅^2∇^2ψ_E+ℰ^2ψ_E=0.
We can readily verify that ψ_E= e^ieΦ t/ħ is a solution to this equation. The presence of the variable time factor (t) in ψ_E may appear contradictory in the context of standard electrostatics problems, where one is interested in the time-averaged electric field or potential at a test point, due to some given charge distribution. We will find that the classical results follow from the boundary conditions on ψ_E but they do not depend on ψ_E itself. In certain special circumstances, when we need to consider the electromagnetic communication between two bodies, the time factor can be expressed as L/c where L is the distance between the bodies, and c is the speed of light in free space.
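As an aside, this is easy to check numerically. The short Python sketch below is not part of the original analysis; the reduced units, the value of γ̅, and the uniform field ℰ_0 are arbitrary choices for illustration. It verifies by finite differences that ψ_E= e^ieΦ t/ħ satisfies the homogeneous equation above for the potential Φ(z)=-ℰ_0 z of a uniform field.

import numpy as np

# Reduced units: gamma_bar plays the role of the minimum potential hbar/(e t).
gamma_bar = 1.0
E0 = 5.0                                   # uniform field between two plates (arbitrary)
z = np.linspace(0.0, 1.0, 4001)
dz = z[1] - z[0]

phi = -E0 * z                              # electrostatic potential of the uniform field
psi = np.exp(1j * phi / gamma_bar)         # psi_E = exp(i e Phi t / hbar) = exp(i Phi / gamma_bar)

d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dz**2
residual = gamma_bar**2 * d2psi + E0**2 * psi[1:-1]
print(np.max(np.abs(residual)) / E0**2)    # ~1e-7: the Helmholtz-type equation holds to discretization error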
The wave equation (<ref>) is valid in the region of space devoid of source charge. However, in the presence of source charge density ρ≠0, the conjugate momenta p̂_x, etc. can operate on the components of the field e.g. ℰ_x, etc. Not only that, the distribution of charges can be different in different directions. For example, along the axis of the charged plate capacitors, there is a non-zero divergence of the electric field at the capacitor plates, due to the presence of source charge. However, in the directions perpendicular to the axis, the field does not have divergence. For such situations, it is more meaningful to contemplate different wavefunctions of the electrostatic field along different directions:
-iγ̅∂ψ_E_x/∂ x =ℰ_xψ_E_x
-iγ̅∂ψ_E_y/∂ y =ℰ_yψ_E_y
-iγ̅∂ψ_E_z/∂ z =ℰ_zψ_E_z
In a medium of permittivity ϵ_0, the wave equation for the wavefunction ψ_E_z in the z direction takes the form:
-iγ̅∂^2ψ_E_z/∂ z^2 =∂ℰ_z/∂ zψ_E_z+ℰ_z∂ψ_E_z/∂ z
=ρ_z/ϵ_0ψ_E_z+ℰ_zℰ_zψ_E_z/-iγ̅
γ̅^2∂^2ψ_E_z/∂ z^2+ℰ_z^2 ψ_E_z=iγ̅ρ_z/ϵ_0ψ_E_z
[-γ̅^2/2(1/ϵ_0)∂^2/∂ z^2-ϵ_0ℰ_z^2/2]ψ_E_z=i (-1/2ρ_zγ̅)ψ_E_z
Here, we have denoted ∂ℰ_z/∂ z=ρ_z/ϵ_0 (so that ρ_x+ρ_y+ρ_z=ρ). For the x and y components, it is possible to write corresponding equations. Eq.(<ref>) has the form of the time-independent Schrödinger equation. The first and second terms on the left-hand side denote the kinetic and potential energy density. We notice that the inverse of the permittivity plays the role of the mass of the ψ_E_z field. The (imaginary) energy density is given by the factor -iρ_zγ̅/2 on the right-hand side of Eq.(<ref>). However, unlike Schrödinger's equation, it cannot be interpreted as an equation describing the evolution of a wavefunction in an external potential barrier. Rather, this equation relates the electric field component and the corresponding wavefunction to the related source charge distribution.
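The same kind of numerical check works when a smooth source is present. In the sketch below (illustrative only; γ̅, ϵ_0, and the Gaussian charge profile are arbitrary choices in reduced units), ℰ_z is built from ρ_z through the one-dimensional Gauss law, ψ_E_z from the basis solution e^i∫ℰ_z dz/γ̅, and the sourced wave equation is then verified by finite differences.

import numpy as np
from scipy.integrate import cumulative_trapezoid

gamma_bar, eps0 = 1.0, 1.0
z = np.linspace(-5.0, 5.0, 8001)
dz = z[1] - z[0]

rho = np.exp(-z**2)                                        # directional charge density rho_z(z)
E = cumulative_trapezoid(rho, z, initial=0) / eps0         # dE_z/dz = rho_z/eps0
theta = cumulative_trapezoid(E, z, initial=0) / gamma_bar  # phase = (1/gamma_bar) * int E_z dz
psi = np.exp(1j * theta)                                   # basis wavefunction for the E_z component

d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dz**2
lhs = gamma_bar**2 * d2psi + E[1:-1]**2 * psi[1:-1]
rhs = 1j * gamma_bar * rho[1:-1] / eps0 * psi[1:-1]
print(np.max(np.abs(lhs - rhs)))                           # small: the sourced equation is satisfied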
The presence of i on the right-hand side of Eq.(<ref>) shows that the operator on the left-hand side, acting on ψ_E_z, is an anti-Hermitian operator (with imaginary eigenvalues), unlike the standard linear Hermitian operators associated with physical observables. In fact, it is similar to the complex scalar field operators in quantum field theory. From a mathematical point of view, it is a normal operator. Such an operator N̂ is defined on a complex vector space ℋ such that it commutes with its Hermitian adjoint, i.e. N̂N̂^†=N̂^†N̂. Both Hermitian and anti-Hermitian operators are examples of normal operators, which assume the form of a diagonal matrix with respect to an orthonormal basis, in accordance with the spectral theorem. However, concrete examples of the latter are not commonly found. Naturally, the potential of these operators has not been fully explored in physics discourses. There have been some pioneering works by Bender <cit.> on the possible use of non-Hermitian operators in quantum field theory. The idea has also been supported by R. Penrose on p. 539 of `Road to Reality' <cit.>. In this paper, we find tangible examples of these operators in the context of electrostatic field theory.
It may be interesting to explore the physical meaning of the imaginary energy density in Eq.(<ref>). Zhang <cit.> conjectured the electric charge as an imaginary form of energy. Using this, he showed a pathway of unification of gravitational and electrical forces classically. Similar or related ideas have been expressed by other authors <cit.>. However, these ideas appear to be more speculative. It seems that there may be some connection, but it will be premature to say that one implies the other. In fact, it seems that the sense in which they defined `imaginary energy' of a charged particle is somewhat different than the sense in which imaginary energy density appears in the current framework. About the nature of the solutions, we comment that in the absence of source charge, for which the right-hand-side of Eq.(<ref>) vanishes, the equation is just a simple harmonic oscillator equation with a position-dependent frequency, as pointed out in <cit.>. But in the presence of source charge, the actual solution must be worked out using boundary conditions.
Before we go forward, it is worthwhile to investigate the combined three-dimensional version of the problem. One way to accomplish that is by adding Eq.(<ref>) with its x and y counterparts. However, this does not give new mathematical insight into the system. On the contrary, if we choose to represent the components of the electrostatic field (ℰ_z etc.) in terms of the partial derivatives of the logarithms of the corresponding wavefunctions (ψ_E_z etc.) on the basis of Eq.(<ref>) etc. then by adding all the component equations, we can deduce an interesting result:
-iγ̅(x̂∂lnψ_E_x/∂ x+ŷ∂lnψ_E_y/∂ y+ẑ∂lnψ_E_z/∂ z)=(x̂ℰ_x+ŷℰ_y+ẑℰ_z)
-iγ̅(x̂∂/∂ x+ŷ∂/∂ y+ẑ∂/∂ z)·(x̂∂lnψ_E_x/∂ x+ŷ∂lnψ_E_y/∂ y+ẑ∂lnψ_E_z/∂ z)=∇·ℰ⃗=ρ/ϵ_0
-iγ̅(∂^2lnψ_E_x/∂ x^2+∂^2lnψ_E_y/∂ y^2+∂^2lnψ_E_z/∂ z^2)=ρ/ϵ_0
-iγ̅(x̂∂^2/∂ x^2+ŷ∂^2/∂ y^2+ẑ∂^2/∂ z^2)·(x̂lnψ_E_x+ŷlnψ_E_y+ẑlnψ_E_z)=ρ/ϵ_0
This equation demonstrates that in the vicinity of non-zero source charge, one must talk about a vector of wavefunctions (x̂lnψ_E_x+ŷlnψ_E_y+ẑlnψ_E_z) in three-dimensional space, instead of ψ_E which was used in the absence of source charge. If there is a spherical symmetry, then Eq.(<ref>) will assume a simpler form. We are tempted to conjecture that this equation can be regarded as the quantum mechanical version of Gauss's law (or Poisson's equation). It must be noted that we had to introduce a three-dimensional vector of wavefunctions and a new vector differential operator:
(x̂∂^2/∂ x^2+ŷ∂^2/∂ y^2+ẑ∂^2/∂ z^2)
that operates on it. The current author is not aware of any examples where such an operator has been applied.
Now, let us note that the complex conjugation of Eq.(<ref>) gives:
γ̅^2∂^2ψ_E_z^*/∂ z^2+ ℰ_z^2ψ_E_z^*=-iγ̅ρ_z/ϵ_0ψ_E_z^*.
Multiplying Eq.(<ref>) by ψ_E_z^* and Eq.(<ref>) by ψ_E_z, then adding the resulting two equations, we get
γ̅^2(ψ_E_z^*∂^2ψ_E_z/∂ z^2+ψ_E_z∂^2ψ_E_z^*/∂ z^2)+2ℰ_z^2ψ_E_z^*ψ_E_z =0
γ̅^2(ψ_E_z^*∂^2ψ_E_z/∂ z^2+ψ_E_z∂^2ψ_E_z^*/∂ z^2)+2(iγ̅∂ψ_E_z^*/∂ z)·(-iγ̅∂ψ_E_z/∂ z)=0
γ̅^2(ψ_E_z^*∂^2ψ_E_z/∂ z^2+ψ_E_z∂^2ψ_E_z^*/∂ z^2)+ 2γ̅^2∂ψ_E_z^*/∂ z∂ψ_E_z/∂ z=0
∂^2/∂ z^2|ψ_E_z|^2≡∂^2/∂ z^2(ψ_E_z^*ψ_E_z)=0,
where, in the second equality, we used Eq.(<ref>) and its complex conjugate. One can easily verify that in the absence of the source charge distribution ρ, the corresponding equation becomes:
∇^2|ψ_E|^2=0,
which can perhaps be anticipated. As such, ψ_E has spherical symmetry and we do not need to distinguish between directions in the source-free case. Invoking Born's probability interpretation, we find from Eq.(<ref>) that in the source-free region the probability density of finding ψ_E at a point in space is a harmonic function (a solution of Laplace's equation). In the presence of a source charge, we must instead be concerned with the probability density of the wavefunction of the electrostatic field in a given direction; that still remains a harmonic function in that direction. Perhaps this suggests a possible connection between the modulus square of the wavefunction of the electric field and the electrostatic potential. If the boundary conditions on the potential are the same as those on the wavefunction (say, if both of them are equal to zero on a boundary), then the uniqueness theorem implies that the modulus squared wavefunction coincides with the potential. In that case, one can find the electrostatic potential by solving Eq.(<ref>). We comment that if only one component (say, the z component) of the curl of a vector field is zero (or, equivalently, if the closed-loop line integral for that component of the vector field is zero), then it is possible to construct the semi-classical model for that component.
We note that the Eq.(<ref>) admits both positive and negative signed exponents as basis wavefunctions, i.e. ψ_E= e^± iqΦ t/ħ. If q=-e, the normalised solution of Eq.(<ref>) will be Ψ _E=∫ u(ℰ⃗) e^± ieΦ t/ħdℰ⃗ where Φ=-∫ℰ⃗· d r. However, Eq.(<ref>) admits only ψ_E_z= e^-iq∫(ℰ_z dz)t/ħ as the basis wavefunctions. For q=-e, the corresponding normalizable solutions to Eq.(<ref>) will be ψ_E_z=∫ u(ℰ_z) e^ie(∫ℰ_z dz) t/ħdℰ_z.
Before we conclude this section, let us check the classical limit of Eq.(<ref>). In the investigation of the classical limit of quantum mechanics, a standard approach is to write the wavefunction as ψ=A e^ iS/ħ where A and S are real quantities. If we substitute this into the Schrödinger's equation and take the limit, we find that S (classical action) satisfies the Hamilton-Jacobi equation.
Now, in the context of Eq.(<ref>), we can proceed in a similar manner. Let us write the wavefunction as ψ_E_z=A e^ -iΦ/γ̅, noting that the electrostatic potential plays a role similar to the classical action. Substituting this function into Eq.(<ref>), we find:
γ̅^2∂^2ψ_E_z/∂ z^2 =γ̅^2d^2A/dz^2( e^-iΦ/γ̅)-2iγ̅dA/dz∂Φ/∂ z( e^-iΦ/γ̅)-iγ̅∂^2Φ/∂ z^2(A e^-iΦ/γ̅)-ℰ_z ^2(A e^-iΦ/γ̅)
=-ℰ_z^2(A e^ -iΦ/γ̅)+iγ̅ρ_z/ϵ_0(A e^-iΦ/γ̅)
If we take the limit γ̅→0 in Eq.(<ref>) and Eq.(<ref>), then we have:
lim_γ̅→ 0 -iγ̅∂^2Φ/∂ z^2(A e^-iΦ/γ̅) =lim_γ̅→ 0iγ̅ρ_z/ϵ_0(A e^-iΦ/γ̅)
∂^2Φ/∂ z^2 =-ρ_z/ϵ_0
This is Gauss's law in one dimension. If the same is done in all directions, we get the traditional differential form of Gauss's law in three dimensions.
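This limit can also be checked symbolically. The sympy sketch below is only an illustration, taking the amplitude A to be constant, which is the approximation implicit in the limit above; it confirms that demanding the O(γ̅) part of the wave equation to vanish reproduces the one-dimensional Gauss's law.

import sympy as sp

z = sp.symbols('z')
gb, eps0, A = sp.symbols('gamma_bar epsilon_0 A', positive=True)
Phi = sp.Function('Phi')(z)            # electrostatic potential
rho = sp.Function('rho')(z)            # directional charge density rho_z

psi = A * sp.exp(-sp.I * Phi / gb)     # semi-classical ansatz with constant amplitude
E_z = -sp.diff(Phi, z)                 # field component

# Sourced wave equation, divided through by psi
residual = sp.simplify((gb**2 * sp.diff(psi, z, 2) + E_z**2 * psi
                        - sp.I * gb * rho / eps0 * psi) / psi)

# residual is proportional to gamma_bar; setting it to zero gives Phi'' = -rho/eps0
print(sp.solve(sp.Eq(residual, 0), sp.diff(Phi, z, 2)))   # -> [-rho(z)/epsilon_0]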
§ PROOF OF QUANTIZATION OF ELECTRIC CHARGE
In <cit.>, it has been argued that the charge q in an electrostatic system, in general, should be a rational multiple of the elementary electric charge. This can be proved in the following way. Consider a normalizable wavefunction Ψ_E in regions with non-zero field value. We demand that Ψ_E must remain the same for a constant change in potential, just the way the classical electric field remains unaffected by a constant change in electrostatic potential. Referring to the form of Ψ_E, we notice that the Fourier coefficient u(ℰ⃗) will remain invariant under the transformation Φ→Φ+Φ_0 where Φ_0 is a constant. This leads to:
∫ u(-∇Φ) e^iqΦ t/ħ dℰ⃗=∫ u(-∇(Φ+Φ_0)) e^iq(Φ+Φ_0)t/ħdℰ⃗
e^iqΦ t/ħ= e^iq(Φ+Φ_0) t/ħ
In Eq.(<ref>), we can equate the integrands, because the integrals are equal for arbitrary boundaries (any pair of upper and lower limits) and a constant Φ_0. Now, for an integer n ∈𝐍, this leads to the following condition:
qΦ t/ħ =2nπ+q(Φ+Φ_0)t/ħ
qΦ_0t =-2nπ·ħ=-nh
q =-nh/Φ_0t=-n/Φ_0(γ· e)=-n/(Φ_0/γ)e=-n/Ne,
where in the last equation, we have used Φ_0=N γ, where N is an integer. The potential Φ_0 represents an area in the phase space constituted by coordinates and conjugate momenta (electric field). This area must be an integral multiple of the unit (minimum) potential γ. Since Φ_0 can be positive as well as negative, so N can also be both positive and negative.
This result shows that the charge should be a rational multiple of e, but it does not explain why charges should be quantized. In the following, we provide a more direct proof of the said quantization. Let us consider the problem in one dimension (say, in the z direction).
If some charge is distributed on a conductor, e.g. on a plate of a parallel plate capacitor located at z=a (whose other plate at z=0 is grounded), then the charge density is given by ρ=σδ_D(z-a), where δ_D represents the Dirac delta function. Then, from Eq.(<ref>), we can write:
-ħ^2/2(e^2 t^2/ϵ_0)d^2ψ_E_z/dz^2-ϵ_0/2ℰ_z^2ψ_E_z =-iγ̅σ/2δ_D(z-a)ψ_E_z
Let us check the boundary conditions of ψ_E_z and ℰ_z. Between the plates (0<z<a), ℰ_z≠0, and ψ_E_z=∫ u(ℰ_z) e^ie(∫ℰ_z dz)t/ħ dℰ_z. Exactly on the surface of the conductor at z=a, ψ_E_z=0, otherwise the RHS of Eq.(<ref>) will diverge. Assuming that the electric field between the plates is a constant ℰ_0,
we are led to the condition that sin(eℰ_0a t/ħ)=0 (or cos(eℰ_0a t/ħ)=0). If we adopt the sine condition, we get:
(eℰ_0a t/ħ)=nπ
ℰ_0=nπħ/ea t=nπγ̅/a
Note that γ̅ has the unit of electrostatic potential. Since the electric field just outside the conductor is related to the charge density by σ =ϵ_0ℰ_0, it follows that the original charge Q(=σ A), given to the plate of area A at z=a, must be quantized as well. If we adopt the cosine condition instead, the electric field - and the source charge of that field - would still be quantized.
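As a simple numerical illustration of this ladder (with placeholder values of the time t, the plate separation a, and the plate area, none of which come from the text), the allowed plate charges come out as integer multiples of a smallest charge:

import numpy as np

hbar, e, eps0 = 1.054571817e-34, 1.602176634e-19, 8.8541878128e-12
t = 1e-9                                   # time over which the charge feels the potential [s] (assumed)
a = 1e-3                                   # plate separation [m] (assumed)
A_plate = 1e-4                             # plate area [m^2] (assumed)

gamma_bar = hbar / (e * t)                 # hbar/(e t), has units of potential
n = np.arange(1, 6)
E_n = n * np.pi * gamma_bar / a            # fields allowed by sin(e E_0 a t / hbar) = 0
Q_n = eps0 * E_n * A_plate                 # corresponding plate charges, Q = sigma A = eps0 E_0 A
print(Q_n / Q_n[0])                        # -> [1. 2. 3. 4. 5.]: an evenly spaced charge ladder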
There is no loss of generality in selecting a specific configuration of conductors. We could as well choose a charged conductor of arbitrary shape. Choose a point P on the surface and call the outward normal direction the z direction. The value of the field just outside P at a distance Δ (→0) from the surface, would still be ℰ_z=σ/ϵ _0. The integral ∫ℰ_z dz will reduce to ℰ_z Δ. This does not change any of the arguments used in Eq.(<ref>). We discuss the case of quantization of charge in dielectric systems in the next section.
The eigenvalue on the right-hand side of Eq.(<ref>), i.e. -iρ_zγ̅/2, must be quantized according to the appropriate boundary conditions, exactly in the same way the energy eigenvalues of a particle in a potential well are quantized. In this way, the boundary condition requires the quantization of the charge in the source distribution. This description, therefore, removes the need to invoke magnetic monopoles in order to explain the quantization of electric charge.
The preceding discussion suggests that the quantization of electric charge has two aspects. Fundamentally, it is not a quantized entity. In fact, it is a rational multiple of the elementary electric charge e, as far as electrostatics is concerned. However, it becomes quantized, when the boundary condition requires it to be so (through Eq.(<ref>)).
§ ELECTRIC DISPLACEMENT VECTOR 𝒟⃗
If we assume that the medium is a linear dielectric with the polarisation vector 𝒫⃗=ϵ_0 χ_eℰ⃗, where χ_e represents the electric susceptibility, then the electric displacement is given by 𝒟⃗=ϵℰ⃗= ϵ_0(1+χ_e)ℰ⃗. In general, ∇×𝒟⃗ =∇×𝒫⃗. Now, if in a given problem,
∇×𝒫⃗=0 (or, equivalently, ∮𝒫⃗· dl⃗=0), the dielectric potential Φ_D can be expressed in terms of free charge density ρ_f as:
Φ_D=ϵΦ=1/4π∫ρ_f/| r- r'|dτ'( if∇×𝒫⃗=0).
Based on <cit.>, in this case also one can conceive δ∫𝒟 ds=0, where this integral is evaluated along a curve, always superimposed with the local direction of 𝒟⃗. This can be used to obtain a semi-classical model for the field 𝒟⃗ in the limited cases where ∇×𝒫⃗=0. The eigenvalue equation for 𝒟⃗ field should be given by:
-iγ_D/2π∇ψ_D=𝒟⃗ψ_D,
where γ_D/(2π)(=γ̅_D, say) can be determined in the following way: the change in the action of a charge q introduced in a medium of permittivity ϵ(≠ϵ_0), if it is subjected to potential Φ=Φ_D/ϵ is Δ S=-qΦ t=-q(Φ_D/ϵ)t. We define γ̅_D as the minimum value of Φ_D corresponding to the minimum action ħ. If we take q=-e, then γ̅_D=ϵħ/(e· t)=ϵγ̅.
If only one (say, z) component of the electric displacement vector is irrotational i.e. (∂𝒟_y/∂ x-∂𝒟_x/∂ y)=0 [or, equivalently ∮𝒟_z dl=0 along a chosen closed contour], then we can get a semi-classical model with wavefunction ψ_D_z that represents only the z component of 𝒟⃗:
-iγ_D/2π∂ψ_D_z/∂ z =𝒟_zψ_D_z
γ̅_D^2∂^2/∂ z^2ψ_D_z+𝒟_z^2 ψ_D_z=iγ̅_Dρ_fψ_D_z.
Using the value of γ̅_D, Eq.(<ref>) (for linear dielectrics) reduces to:
γ̅^2∂^2ψ_D_z/∂ z^2+ℰ_z^2 ψ_D_z=iρ_f/ϵγ̅ψ_D_z
[-γ̅^2/2(1/ϵ)∂^2/∂ z^2-𝒟_zℰ_z/2]ψ_D_z =i(-1/2ρ_fγ̅)ψ_D_z.
The normalised solution of Eq.(<ref>) can be constructed as Ψ_D_z=∫ v(𝒟_z) e^-iΦ_D_z/γ̅_Dd𝒟_z=∫ v(𝒟_z) e^-iΦ_z/γ̅d𝒟_z. Comparing with the form of the solution to Eq.(<ref>) (discussed just before section <ref>), we find that the plane wave basis of the semi-classical wavefunction of the fields ℰ⃗ and 𝒟⃗ are identical. But the coefficients are different, as expected.
In principle, one can always do the same exercise for the vector field 𝒟⃗-𝒫⃗ (=ϵ_oℰ⃗), whose curl is zero. Not surprisingly, one finds the total charge density as the sum of free charge density ρ_f and bound charge density ρ_b (from -∇·𝒫⃗ term).
We make the following observation about the quantization of electric charge in the case of dielectrics. Unlike the conductors, here we may have free and bound charges and they might reside within the body as well as on the interface. In addition, the electrostatic field and the displacement vectors do not remain perpendicular to the interface. But the second point does not pose a serious problem, because typically we would still deal with the same basis wavefunctions in a given direction, as found just before (we shall see an example in the context of dielectric half-plane image problem). So, for surface charge distribution, which can be represented by a delta function, one can predict the existence of quantized charges. That conclusion will not hold for the smooth continuous volume charge distributions. Most likely, such configurations will not have quantized charges. However, if the volume charge is made up of many individual point charges embedded in the medium, then the source term is composed of a summation over delta functions located at those points. These point charges must then be quantized.
§ ELECTROSTATIC REFRACTION
Let us consider two halves of the full space filled with linear dielectric materials with dielectric constants ϵ_1 and ϵ_2. We consider the oblique incidence of the electric field line at the boundary (see the following Figure <ref>). We would like to approach this problem from the semi-classical description of the fields. To accomplish that, we make several observations:
(a) On the interface, the free charge density ρ_f=0, but the total charge density ρ which is defined as the sum of free charge density and the bound charge density, is not zero. In this context, boundary conditions are written as:
γ̅^2d^2/dx^2ψ_E_x+ℰ_x^2ψ_E_x =0
γ̅^2d^2/dy^2ψ_E_y+ℰ_y^2ψ_E_y =0
γ̅^2d^2/dz^2ψ_E_z+ℰ_z^2ψ_E_z=iγ̅σ_b/ϵ_0 δ_D(z)ψ_E_z,
where we denoted the wavefunctions corresponding to individual components of the electric field with separate subscripts. Referring to Eq.(<ref>), we note that the existence of the delta function at z=0 implies discontinuity in the first z derivative of ψ_E_z, which (from Eq.(<ref>)) also implies discontinuity in the z component of the electric field. However, along the tangential direction, the field is continuous, since there is no infinite jump in these directions. So, one has ℰ_1x,y=ℰ_2x,y at the interface.
(b) At the interface, we can show that ∮𝒫_zdl=0 (which also implies ∮𝒟_zdl=0) along a closed rectangular contour, going into and turning back from both sides of the interface, as can be seen with reference to Fig. <ref>:
∮𝒫_zdl =∫_A^M P_zdl+∫_M^B P_zdl+∫_C^N P_zdl+∫_N^D P_z dl (the contributions from the segments B→C and D→A vanish)
=ϵ_0(χ_e^(1)ℰ_z^1· AM+ χ_e^(2)ℰ_z^2· MB- χ_e^(2)ℰ_z^2· CN- χ_e^(1)ℰ_z^1· ND)=0.
This allows us to develop the semi-classical description for 𝒫_z and hence for 𝒟_z. Since ρ_f=0 on the interface, in this case, Eq.(<ref>) has a simple form:
γ̅_D^2d^2/dz^2ψ_D_z+𝒟_z^2ψ_D_z=0.
As before, we can argue in favour of continuity of 𝒟 _z across the interface due to the absence of infinite jump in the first derivative of ψ_D. So, we have D_1z=D_2z. Using 𝒟⃗=ϵℰ⃗, and the continuity of the tangential component of the electric field, we deduce the relation:
ϵ_1/ϵ_2=tanθ_1/tanθ_2.
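In practice this relation acts as an electrostatic analogue of Snell's law for field lines. A small helper (illustrative only; the angles are assumed to be measured from the interface normal, and the permittivity values are arbitrary) reads:

import numpy as np

def refracted_field_angle(theta1_deg, eps1, eps2):
    """Angle of the field line in medium 2, from eps1/eps2 = tan(theta1)/tan(theta2)."""
    tan_theta2 = (eps2 / eps1) * np.tan(np.radians(theta1_deg))
    return np.degrees(np.arctan(tan_theta2))

# A field line hitting the boundary at 30 deg from the normal in medium 1 (eps1 = 2)
# bends away from the normal on entering the higher-permittivity medium 2 (eps2 = 5).
print(refracted_field_angle(30.0, eps1=2.0, eps2=5.0))   # ~55.3 deg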
Finally, for completeness, we evaluate ∮𝒫_xdl along the same contour:
∮𝒫_xdl =∫_B^C P_xdl+∫_D^A P_x dl (the contributions from the segments A→M, M→B, C→N, and N→D vanish)
=ϵ_0(-χ_e^(2)ℰ_x^2· BC+ χ_e^(1)ℰ_x^1· DA)≠0.
Thus, the corresponding semi-classical method is not possible for the tangential components 𝒫⃗_x,y (and for 𝒟⃗_x,y).
§.§.§ Dielectric half-plane image problem
Let us now consider the dielectric half-plane image problem, which is covered in standard texts <cit.>. The electric field ℰ⃗_1 originates from a charge q located at z=-z_0, i.e. at a distance z_0 from the interface, in the medium with dielectric constant ϵ_1. The problem seeks the potential function that satisfies the boundary conditions. The standard approach is to assume that there are two image charges. One of them (q') is located within the medium with dielectric constant ϵ_2 at a distance z_0 from the boundary, and the other (q”) is placed at the same location as the original charge q. Solving the boundary conditions, the values of the image charges are:
q'=ϵ_1-ϵ_2/ϵ_1+ϵ_2q
q”=2ϵ_2/ϵ_1+ϵ_2q.
We intend to approach the problem from the semi-classical model of the electric fields. The wavefunction of the electric displacement vector Ψ_D^q, due to the real charge, will reflect from the interface and there will be also some transmission. Hence, the net wavefunction at z<0 is the sum of the wavefunctions due to the real charge and the wavefunction that is reflected:
Ψ_D^1(z)=∫ v(𝒟_1z) e^i∫ℰ_1zdz/γ̅ d𝒟_1z + r∫ v(𝒟_1z) e^-i∫ℰ_1zdz/γ̅ d𝒟_1z,
where r denotes the amplitude of reflection back into the material with dielectric constant ϵ_1. On the other hand, the wavefunction at z>0 can be written as:
Ψ_D^2(z)= t∫ v(𝒟_2z) e^i∫ℰ_2zdz/γ̅ d𝒟_2z,
where t denotes the transmission amplitude. To investigate the boundary conditions, we must notice the values of Ψ_D^1,2 at a distance |Δ| →0 on either side of z=0. We have seen that D_z is continuous across the boundary, i.e. D_1z= D_2z. So, we can drop the coefficients while comparing the wavefunctions at both sides close to the boundary. We can also assume that ℰ_1z and ℰ_2z vary sufficiently slowly with respect to z near the boundary. This implies that ∫ℰ_1z,2zdz≈ℰ_1z,2zz. Therefore, from Eq.(<ref>), at z=-Δ(<0) the functional dependence of Ψ_D^1 is given as:
ψ_D^1(z):= e^iℰ_1zz/γ̅+ r e^-iℰ_1zz/γ̅.
And from Eq.(<ref>), we have at z=Δ(>0):
ψ_D^2(z):= t e^iℰ_2z z/γ̅.
The boundary conditions at z=0 are:
(ψ_D^1)_z=0^- =(ψ_D^2)_z=0^+
(∂ψ_D^1/∂ z)_z=0^- = (∂ψ_D^2/∂ z)_z=0^+
The first condition yields:
1+ r = t.
The second condition implies:
(i/γ̅ℰ_1z e^i/γ̅ℰ_1zz- r·i/γ̅ℰ_1z e^-i/γ̅ℰ_1zz)_z=0^- =( t·i/γ̅ℰ_2z e^i/γ̅ℰ_2zz)_z=0^+
ℰ_1z- r·ℰ_1z = tℰ_2z.
In Eq.(<ref>), - r·ℰ_1z and tℰ_2z are the reflected and transmitted components of the electric field, respectively. From Eq.(<ref>) and Eq.(<ref>), the reflection amplitude r at the interface can be calculated as:
𝒟_1z/ϵ_1- r·𝒟_1z/ϵ_1 =(1+ r)𝒟_2z/ϵ_2
1/ϵ_1- r/ϵ_1 =1/ϵ_2+ r/ϵ_2
(1/ϵ_1-1/ϵ_2) = r(1/ϵ_1+1/ϵ_2)
r =-ϵ_1-ϵ_2/ϵ_1+ϵ_2.
The corresponding transmission amplitude t can be calculated as:
t = 1+ r=2ϵ_2/ϵ_1+ϵ_2.
It is worth pointing out that r and t are the amplitudes for the wavefunctions of the fields at the interface and are not the coefficients of the image charges required to solve the problem. They scale ψ_D^1,2 by a constant. In the Hilbert space, the resulting states do not represent any new state. Using the reflected and transmitted wavefunctions in Eq.(<ref>) and Eq.(<ref>) does not reproduce the values of the image charges, since the eigenvalue equation does not allow that. These scaled wavefunctions are the results of the boundary conditions and the bound charge present at the interface.
However, non-vanishing values of these amplitudes suggest that the reflected classical electric field at z<0 could be assumed to be due to an image charge q' located at the mirror image position +z_0, and the transmitted classical electric field at z>0 could be assumed to be due to an image charge q” that is located exactly at the position coincident with the original charge. Referring to Eq.(<ref>), we could identify that the reflected electric field ℰ_1z^q' due to q' is:
ℰ_1z^q'=- rℰ_1z^q q'=ϵ_1-ϵ_2/ϵ_1+ϵ_2q.
From the same equation, the transmitted electric field ℰ_1z^q” due to q” is:
ℰ_1z^q”=tℰ_2z^q q”=2ϵ_2/ϵ_1+ϵ_2q.
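A short numerical cross-check (with arbitrary permittivities) shows that the amplitudes r and t obtained from the boundary conditions on ψ_D indeed reproduce the classical image-charge values quoted earlier through the identifications above:

def image_charges_from_amplitudes(q, eps1, eps2):
    r = -(eps1 - eps2) / (eps1 + eps2)    # reflection amplitude of psi_D at the interface
    t = 1.0 + r                           # transmission amplitude
    q_prime = -r * q                      # reflected field corresponds to an image charge q' at +z0
    q_double_prime = t * q                # transmitted field corresponds to q'' at the source position
    return q_prime, q_double_prime

q = 1.0
print(image_charges_from_amplitudes(q, eps1=2.0, eps2=5.0))
# -> (-0.428..., 1.428...), matching q' = (eps1-eps2)/(eps1+eps2) q and q'' = 2 eps2/(eps1+eps2) q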
§.§ Infinite grounded conducting plane
Let us discuss the relevance of this concept in the context of the infinite grounded conducting plane image problem, which can be thought of as the limiting case of the previous problem with ϵ_2→∞. In this case, Eq.(<ref>) shows that one needs to assume an image charge -q inside the conductor. The charge q” is non-zero; however, its contribution to the electric potential vanishes, as explained in <cit.>.
We observe an important connection between ψ_E_z and potential Φ in this problem. Since the induced charge density on the surface can be expressed by a delta function, therefore, Eq.(<ref>) dictates that ψ_E_z=0 on z=0. Classically, in this problem, we
have ∇^2Φ=0 (Laplace's equation), along with the boundary condition Φ=0 on the conductor. This is exactly parallel to ∂^2|ψ_Ez|^2/∂ z^2=0 (Eq.(<ref>)), with the boundary condition ψ_Ez=0 (implying |ψ_Ez|^2=0) at z=0. Therefore, we conclude that in this problem, Φ∝|ψ_Ez|^2. This argument is also applicable for the grounded conducting sphere image problem <cit.> where one is asked to calculate the image charge and location when a real charge is placed in front of a grounded conducting sphere.
It may be noted that the superposition of ψ_Ez must be done in the quantum mechanical sense taking into account the phase, as demonstrated in <cit.>. However, in the square of the modulus of ψ_Ez, relevant in several electrostatics problems, the sensitivity to the phase is washed out. No wonder that electrostatic potential obeys the classical superposition principle, where one adds up the potentials algebraically, without any reference to phase.
§ SEMI-CLASSICAL LIMIT OF GAUSS'S LAW OR POISSON'S EQUATION
At this point, it is perhaps good to note the difference between the two frameworks that discuss the quantum theory of electrostatics and the possible expression of Gauss's law in this limit. First of all, we are not interested in the derivation of the classical version of Gauss's law or its higher-order corrections in the non-relativistic limit of QED, which have been addressed in <cit.>. We are specifically interested in the form of Gauss's law (or wave equation) in which the non-travelling wavefunctions (or `electrostatic coherent states', a notion which involves (non-dynamical) longitudinal photons) of the electric field can be calculated from a given source charge distribution. The framework presented in <cit.> assumes that Gauss's law should hold in quantum theory as (∇·ℰ⃗)Ψ=ρΨ, or as the expectation value thereof (see the discussion at pp. 3-4).
In this equation, the electric field is an operator.
On the other hand, in the framework of electrostatic field theory presented in <cit.> and extended in the present work, the electrostatic field is taken as a conservative vector field which can acquire a semi-classical nature in a suitable limit. We call this framework semi-classical because the main features of the quantum wavefunction are illuminated by classical physics, and it appears to be consistent with classical physics. This framework provides another form of Gauss's law, given in Eq.(<ref>). Most likely, the difference arises from the fact that the latter is a semi-classical model, as opposed to the former, which derives from quantum field theory. However, we comment that, to check consistency with the classical results, the semi-classical model should be the better starting point. In this case, we must deal with a vector of wavefunctions representing the three components of the electrostatic field. Eq.(<ref>), when solved, would give the wavefunctions representing the three different components of the electrostatic field due to a source charge distribution ρ, just as solving Gauss's law (or Poisson's equation) in electrostatics can be used to evaluate the electric field in a problem.
In principle, there can be a spherical symmetry in the source charge distribution, for which Eq.(<ref>), Eq.(<ref>), and Eq.(<ref>) will fuse into the basic Eq.(<ref>). In that case, the semi-classical version of Gauss's law (or Poisson's equation) takes a simpler form:
-iγ̅1/ψ_E∇ψ_E =-iγ̅∇(lnψ_E)=ℰ⃗
-iγ̅∇^2(lnψ_E) =∇·ℰ⃗=ρ/ϵ_0
∇^2(lnψ_E) =iρ/γ̅ϵ_0.
This equation should be interpreted as a method to find ψ_E, the wavefunction of an electrostatic field due to a spherically symmetric ρ. As expected, symmetry leads to considerable simplification of the problem. We comment that the eigenvalue equation Eq.(<ref>) can be thought of as the analogue of the gradient equation: ℰ⃗=-∇Φ, because when the divergence operator is applied to it, we get the Gauss's law (or the Poisson's equation).
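A radial numerical check of this spherically symmetric form is straightforward. In the sketch below (reduced units with γ̅ = ϵ_0 = 1 and an arbitrary Gaussian charge ball, purely for illustration), lnψ_E is built from the classical radial field and the radial Laplacian of lnψ_E is compared with iρ/(γ̅ϵ_0):

import numpy as np
from scipy.integrate import cumulative_trapezoid

gamma_bar, eps0 = 1.0, 1.0
r = np.linspace(1e-3, 10.0, 20001)
rho = np.exp(-r**2)                                                 # spherically symmetric rho(r)

Q_enc = cumulative_trapezoid(4 * np.pi * r**2 * rho, r, initial=0)  # enclosed charge
E = Q_enc / (4 * np.pi * eps0 * r**2)                               # radial field from Gauss's law
ln_psi = 1j * cumulative_trapezoid(E, r, initial=0) / gamma_bar     # ln psi_E = (i/gamma_bar) int E dr

d1 = np.gradient(ln_psi, r)                                         # d(ln psi_E)/dr
laplacian = np.gradient(r**2 * d1, r) / r**2                        # (1/r^2) d/dr (r^2 d/dr)
mask = (r > 0.5) & (r < 9.5)
print(np.max(np.abs(laplacian[mask] - 1j * rho[mask] / (gamma_bar * eps0))))   # small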
§ SUMMARY AND DISCUSSIONS
In this paper, we gave an explanation of the quantization of electric charge without requiring the existence of a magnetic monopole, based on a semi-classical model of curl-free vector fields <cit.> developed in the context of resolving the nonlocality problem of the Aharonov-Bohm effect <cit.>. Through this exercise, we resolved an open problem in physics that had remained a mystery for about the last hundred years. Apart from charge quantization, the semi-classical model of static conservative fields has been found to resolve the nonlocality problems of the Aharonov-Bohm effect (itself an open problem since 1959), the Aharonov-Casher effect, and the He-McKellar-Wilkens effect, and to yield the magnetic flux quantum, among other results. All these effects, except for the electrostatic Aharonov-Bohm effect, have been experimentally validated. The reason why the electrostatic Aharonov-Bohm effect could not be observed is usually attributed to the difficulty in reducing the electric field to zero in the experimental setup, in contradiction with the preamble of the original thought experiment. On the other hand, the consistency of this model with known results from classical physics further emphasises its validity. The plane waves of the electrostatic fields are found to be similar to the plane waves used in geometrical optics. The author was thrilled to find that the electrostatic potential and the modulus square of the wavefunctions representing an electrostatic field could be related if they are subjected to the same boundary condition. Its implication for the difference between the quantum and classical superposition principles (discussed at the end of subsection <ref>) must be appreciated. It should be possible to deduce the equation corresponding to Eq.(<ref>) in the context of the passage of a magnetic field line between two media with different magnetic permeabilities, based on a semi-classical model of magnetostatics.
In the literature, we found another paper on the quantum physical model of electrostatics proposed by Kay <cit.>, based on his previous paper <cit.>. These works are based on quantum electrodynamics and describe the wavefunctions of electric fields as coherent states. The author talks about two frameworks in which Gauss's law in electrostatics holds as an operator equation ((∇·ℰ⃗)Ψ=ρΨ) or its expectation. However, it is not clear if this model can be used to achieve the tasks we performed.
At this point, it may be interesting to note the relationship between the semi-classical model with QED, in which electrostatic force arises due to the interaction between bodies mediated by the virtual photons that exist for a very short time scale determined by the uncertainty principle <cit.>. Though QED is undoubtedly the most successful theory, it is perhaps safe to say that the time-invariant picture of classical electrostatics is not intuitive from the dynamical description of the fields in quantum electrodynamics [see the discussion after Eq.(1) in the introduction of <cit.>]. In addition, quantization of electromagnetic Hamiltonian by treating it effectively as a harmonic oscillator may be difficult in a frame where there is only an electrostatic field, but no magnetic field. This is perhaps the key difference between QED and the current formulation. To reproduce the present model from QED, one needs to come out of the traditional photon picture, by transforming to a reference frame where only an electrostatic field is present. It may not be possible to directly take a limit and reproduce the model. This is because the variational principle δ∫_P_1^P_2ℰds=0 <cit.> which is the basis of the semi-classical model, is an independent component that was not known explicitly to be a part of the standard electrostatics framework. This principle looks like Fermat’s principle, but cannot be derived from it. The standard theory of QED does not explicitly have this piece of information. However, this relationship is definitely worth exploring. It is possible that combining this model with QED will lead to a more complete theory.
Finally, we comment that the semi-classical model lacks the important aspect of spin. This aspect arises naturally in the QED-based model <cit.>. We did not need it for the problems discussed in this paper. A more complete model, however, can be formulated if spin is taken into account. It may be possible to accomplish this using the square-root operator, as shown in the context of light rays in optics <cit.>.
§ ACKNOWLEDGEMENT
The author acknowledges the anonymous reviewers for providing useful suggestions; Prof. Anwesh Mazumdar and Prof. Sudipto Roy for their support and encouragement.
§ DATA AVAILABILITY STATEMENT
No new data were created or analysed in this study.
|
http://arxiv.org/abs/2307.00878v1
|
20230703091558
|
Crystal Structures and Phase Stability of the Li$_2$S-P$_2$S$_5$ System from First Principles
|
[
"Ronald L. Kam",
"KyuJung Jun",
"Luis Barroso-Luque",
"Julia H. Yang",
"Fengyu Xie",
"Gerbrand Ceder"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
§ ABSTRACT
The Li2S-P2S5 pseudo-binary system has been a valuable source of promising superionic conductors, with α-Li3PS4, β-Li3PS4, HT-Li7PS6, and Li7P3S11 having excellent room temperature Li-ion conductivity > 0.1 mS/cm. The metastability of these phases at ambient temperature motivates a study to quantify thermodynamic accessibility. Through calculating the electronic, configurational, and vibrational sources of free energy from first principles, a phase diagram of the crystalline Li2S-P2S5 space is constructed. Well-established phase stability trends from experiments are recovered, such as polymorphic phase transitions in Li7PS6 and Li3PS4, and the metastability of Li7P3S11 at high temperature. At ambient temperature, it is predicted that all superionic conductors in this space are indeed metastable, but thermodynamically accessible. Vibrational and configurational sources of entropy are shown to be essential towards describing the stability of superionic conductors. New details of the Li sublattices are revealed, and are found to be crucial towards accurately predicting configurational entropy. All superionic conductors contain significant configurational entropy, which suggests an inherent correlation between superionic conductivity and high configurational entropy.
§ INTRODUCTION
The global transition to sustainable energy sources necessitates the continued development of energy storage technologies that enable increased deployment of intermittent energy sources (i.e., wind and solar power) and electrification of transportation <cit.>. Lithium (Li) all solid-state batteries (ASSB) can significantly improve the safety and energy density compared to conventional Li-ion batteries using organic liquid electrolytes <cit.>. Discovery and development of novel superionic conductors with Li-ion conductivity on the order of that of organic liquid electrolytes (>0.1 mS/cm) is crucial towards enabling ASSBs to have power densities similar to conventional Li-ion batteries <cit.>. The pseudo-binary Li2S-P2S5 composition space has proven to be a particularly rich source of promising Li superionic conductors. Several crystalline compounds can be synthesized by combining Li2S and P2S5 precursors in varying ratios (Figure <ref>)<cit.>, with the notable phases being the α, β, and γ polymorphs of Li3PS4, high-temperature (HT) and low-temperature (LT) Li7PS6, and Li7P3S11. Among these, α-Li3PS4, β-Li3PS4, HT-Li7PS6, and Li7P3S11 are superionic conductors <cit.>. Although amorphous phases with these compositions also exist <cit.>, the focus of our study will be on understanding the relative phase stability of the crystalline phases only.
The crystalline phases in the Li2S-P2S5 space are composed of periodically arranged PS4 tetrahedra, which are either isolated or form P2S7 ditetrahedra. Li atoms are located in between these units and coordinated by S atoms. The Li7PS6 polymorphs also contain free S atoms that are only coordinated with Li atoms. Different phases can be identified by their distinct orientation of PS4 and P2S7 groups, which are shown in Figure 1. The Li3PS4 and Li7PS6 polymorphs are all composed of isolated PS4 groups. In γ-Li3PS4, these PS4 groups are uni-directional, with all apexes facing the same direction (apexes face out of the page in Figure <ref>). In β-Li3PS4, PS4 groups are arranged in alternating zig-zag chains, with each chain containing apexes that face the same direction, while apexes in the adjacent chain face the opposite direction (Figure <ref>) <cit.>. The α-Li3PS4 polymorph also contains PS4 with oppositely facing apexes, but these are arranged in alternating columns (Figure <ref>). In both Li7PS6 polymorphs, all PS4 groups face the same direction, but differ in their spatial distributions <cit.>. In LT-Li7PS6, PS4 are arranged with orthorhombic symmetry (Figure <ref>) , while in HT-Li7PS6 the PS4 are arranged with face centered cubic (FCC) symmetry (Figure <ref>) <cit.>. Li7P3S11 is composed of both P2S7 and PS4 units (Figure <ref>)<cit.>.
According to previous experimental and computational studies, the superionic conductor phases are all metastable at ambient temperature <cit.>. Among the Li3PS4 polymorphs, γ is the stable phase at room temperature but has low Li conductivity, while β and α are the high-temperature fast-conducting phases <cit.>. β-Li3PS4 has been stabilized at room temperature as nanoporous particles from solution-state synthesis <cit.>. This phase has also been stabilized through mechanochemical synthesis involving ball milling to form an amorphous phase, and a subsequent heat treatment to recrystallize <cit.>. An analogous Si-doped Li_3.25Si_0.25P_0.75S_4 structure, where Si substitutes into phosphorus (P) sites, has also been stabilized at room temperature <cit.>. α-Li3PS4 has recently also been stabilized at room temperature via a rapid heating and quenching technique <cit.>. This discovery indicates that the energy differences between the three Li3PS4 polymorphs at room temperature should be small, allowing for the metastable α and β to be thermodynamically accessible at ambient temperature. HT-Li7PS6 is only stable at elevated temperatures (T > 483 K) <cit.>, but has been successfully stabilized at room temperature through halide atom substitution into S sites, typically to form the Li6PS5X composition (X = Cl, Br, or I) <cit.>. Synthesis of Li7P3S11 usually requires ball-milling to its amorphous form before recrystallization above its glass transition temperature of around 500 K <cit.>. Heat treatment at higher temperatures (T > 800 K) is not possible, as Li7P3S11 phase-separates to Li4P2S6 and Li3PS4 <cit.>.
The metastable nature of these superionic conductors motivates our first-principles study with the objective to understand their thermodynamic accessibility at finite temperature, rationalize experimental trends, and potentially propose new synthesis procedures. To model the free energy of each phase, we consider contributions from the electronic structure, configurational disorder, and vibrational modes. We find that including both configurational and vibrational entropy is necessary to correctly predict free energies, in agreement with a previous study on the Li_1+2xZn_1+xPS_4 system <cit.>.
We model configurational Li-vacancy disorder with well-established lattice model methods <cit.>, which have been previously used to study a range of alkali-ion intercalation oxides and solid electrolytes <cit.>. To properly model the configurational disorder in HT-Li7PS6, α-Li3PS4, β-Li3PS4, and Li7P3S11, we require accurate structural models to define the set of distinct sites that Li can occupy, which we refer to as the Li sublattice. There are conflicting reports about the specific sites that make up the Li sublattices arising from different characterization techniques. More specifically, in α-Li3PS4, β-Li3PS4, and HT-Li7PS6, neutron diffraction (ND) refinements <cit.> have identified more Li sites and increased site disorder as compared to X-ray diffraction (XRD) refinements
<cit.>. In Li7P3S11, XRD and ND identify fully ordered, but entirely different Li sublattices <cit.>. A more recent ab-initio molecular dynamics (AIMD) study proposing 15 potential Li sites in Li7P3S11 introduces uncertainty to the exact state of Li order, since these new sites can in principle be partially occupied <cit.>. Because of these conflicting reports, we dedicate a large portion of this study towards clarifying the Li arrangement in these structures, the details of which we find to be essential for recovering experimental thermodynamic trends.
For each disordered phase, we assess the validity of various proposed Li sublattices, primarily by analyzing atomic relaxation distances and comparing Li site disordering behavior to experimental reports. Upon obtaining the most representative Li sublattice, we train a cluster expansion (CE), which can rapidly evaluate total energies of any Li-vacancy configuration within the given Li sublattice <cit.>. Using the CE, we perform Monte Carlo (MC) sampling to determine the configurational entropy, free energy, and Li site disordering behavior as a function of temperature.
The CE formally represents the energy of a disordered crystal structure as a summation over contributions from local, multi-site (cluster) configurations and their associated interaction energies<cit.>. The expression for CE energy is shown in equation <ref>, where σ is the vector encoding the species occupying each lattice site, β is the index for a symmetrically distinct cluster, J_β is the effective cluster interaction (ECI) energy, and ⟨Φ(σ)⟩_β is the correlation function describing the crystal-averaged cluster configuration. The ECI are determined from regularized linear regression techniques, using a training set of distinct DFT-relaxed configurations and energies <cit.>.
E(σ) = ∑_β J_β⟨Φ(σ)⟩_β
Monte Carlo (MC) sampling is then performed using the CE in the canonical ensemble to predict Li site disorder, identify new ground state (lowest energy) structures, and calculate configurational thermodynamic properties through thermodynamic integration (more in Methods).
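For orientation, the sketch below shows the bare bones of such a canonical Monte Carlo run on a deliberately oversimplified model: a one-dimensional ring of Li/vacancy sites with a single nearest-neighbour pair ECI. It is only an illustration of the sampling scheme; the actual cluster expansions in this work contain many symmetrically distinct clusters fit to DFT energies, and the ECI value, lattice, and temperature below are placeholders.

import numpy as np

rng = np.random.default_rng(0)

N, n_li = 64, 32                 # ring of N sites with fixed Li content (canonical ensemble)
J_pair = 0.05                    # nearest-neighbour pair ECI [eV] (placeholder)
kT = 0.0259                      # ~300 K in eV

occ = np.zeros(N, dtype=int)
occ[rng.choice(N, n_li, replace=False)] = 1          # 1 = Li, 0 = vacancy

def ce_energy(occ):
    # CE-style energy: the pair "cluster" summed over the whole ring
    return J_pair * np.sum(occ * np.roll(occ, 1))

for _ in range(20000):
    i = rng.choice(np.flatnonzero(occ == 1))          # pick an occupied site
    j = rng.choice(np.flatnonzero(occ == 0))          # and a vacant one
    trial = occ.copy()
    trial[i], trial[j] = 0, 1                         # swap keeps the composition fixed
    dE = ce_energy(trial) - ce_energy(occ)
    if dE <= 0 or rng.random() < np.exp(-dE / kT):    # Metropolis acceptance
        occ = trial

print(ce_energy(occ) / N)        # configurational energy per site after equilibration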
Vibrational free energy contributions are computed for the ground state of each phase by performing harmonic phonon calculations <cit.>. By incorporating the contributions to the free energy from the electronic structure, vibrational entropy, and configurational entropy, we assess thermodynamic stability in the Li2S-P2S5 phase space, recovering well-established experimental observations.
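For reference, the harmonic vibrational free energy entering this treatment is F_vib(T) = Σ_i [hν_i/2 + k_BT ln(1 - e^-hν_i/k_BT)], summed over the phonon modes (or integrated over the phonon density of states). A minimal sketch with made-up frequencies, not the computed spectra of this work:

import numpy as np

h_eV = 4.135667696e-15            # Planck constant [eV s]
kB = 8.617333262e-5               # Boltzmann constant [eV/K]

def f_vib(nu_thz, T):
    """Harmonic vibrational free energy [eV] from phonon frequencies given in THz."""
    e = h_eV * np.asarray(nu_thz) * 1e12                       # mode energies h*nu [eV]
    return float(np.sum(e / 2 + kB * T * np.log(1 - np.exp(-e / (kB * T)))))

modes = [2.0, 5.0, 8.0, 12.0]     # toy phonon frequencies [THz]
for T in (300, 600, 900):
    print(T, f_vib(modes, T))     # F_vib becomes more negative as temperature rises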
This paper is organized as follows. 1) We first present the pseudo-binary Li2S-P2S5 phase diagram. The thermodynamic stability of each phase at finite temperature is evaluated and potential synthesis procedures for metastable phases are proposed. 2) For each composition, we discuss the appropriate choice of refined structure for each polymorph by comparing the validity of previously proposed models. Phase stability trends between polymorphs are examined in detail, with a focus on identifying phase transitions and quantifying the contributions of vibrational and configurational entropy towards stability. 3) In the discussion, we draw further connections to previously proposed experimental synthesis strategies, and explore a potential correlation between superionic conductivity and high configurational entropy.
§ RESULTS
§.§ Phase stability in the Li2S-P2S5 system
The pseudo-binary Li2S-P2S5 phase diagram is presented in Figure <ref> and the energies above the hull (E_hull) as a function of temperature are shown in Figure <ref>. The convex hull is a typical construction to obtain stable phases and represents the collection of thermodynamic ground states into which all other phases have a driving force to convert. All computed formation free energies used to construct the phase diagram are shown in SI Figure S7. At 0 K, the only stable phases on the convex hull are γ-Li3PS4 and the endpoints, Li2S and P2S5 (Figure <ref>). At 700 K, HT-Li7PS6 is stabilized and appears on the hull. Since reported synthesis procedures for LT and HT-Li7PS6 typically do not require mechanical milling or quenching <cit.>, it may be surprising that they are unstable at 300 K—13 and 12 meV/atom above the hull respectively (Figure <ref>). It is likely that the thermodynamically favored phase separation of HT-Li7PS6 to Li2S and Li3PS4 is kinetically hindered at room temperature. Instead, HT-Li7PS6 is found to transform to LT-Li7PS6 upon cooling, a potentially more facile process as it merely involves shifting the PS4 locations (Figure 1). Thus, an appropriate solid-state synthesis procedure would be to perform sufficiently high temperature (T > 600 K) synthesis to stabilize HT-Li7PS6, before a relatively rapid cooling process to bypass the phase separation to Li2S and Li3PS4.
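The hull construction behind these numbers is a standard lower-envelope problem: given formation free energies per atom at compositions along the Li2S-P2S5 tie-line, the energy above the hull of each phase is its distance to the lowest tie-line spanned by any pair of competing phases. A brute-force sketch with placeholder energies (not the computed values of this work) is:

import numpy as np

def energy_above_hull(x, E):
    """E_hull for each phase on a pseudo-binary tie-line (brute force over all pairs)."""
    x, E = np.asarray(x, float), np.asarray(E, float)
    e_hull = np.zeros_like(E)
    for k in range(len(x)):
        best = E[k]
        for i in range(len(x)):
            for j in range(len(x)):
                if x[i] <= x[k] <= x[j] and x[j] > x[i]:
                    f = (x[k] - x[i]) / (x[j] - x[i])
                    best = min(best, (1 - f) * E[i] + f * E[j])
        e_hull[k] = E[k] - best
    return e_hull

# x = mole fraction of P2S5: Li2S, Li7PS6, Li3PS4, Li7P3S11, P2S5
x = [0.0, 0.125, 0.25, 0.30, 1.0]
E = [0.0, -0.010, -0.025, -0.018, 0.0]        # formation energies per atom [eV], made-up numbers
print(energy_above_hull(x, E))                # phases with E_hull > 0 lie above the hull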
For the Li3PS4 composition, our calculations in Figure <ref> predict phase transformations from γ → β → α with increasing temperature, which is consistent with experiments. Since β-Li3PS4 is less than 1 meV/atom above the hull at 300 K (Figure <ref>), it is plausible that nanoporous synthesis and mechanical milling techniques can lead to its stabilization at room temperature <cit.>. The α-Li3PS4 polymorph is only slightly less stable than β at 300 K (E_hull = 4 meV/atom), which explains why α can also be stabilized at ambient temperature through a rapid heating and quenching procedure <cit.>. Rapid heating of the Li3PS4 glass to temperatures in the stability range of β enables nucleation of metastable α particles that are only slightly less stable than β, which is possible by the Ostwald step rule <cit.>. Rapid quenching can then obstruct the commonly observed direct transition from α to γ <cit.>, which is possible as their energy difference is only 4 meV/atom at 300 K.
Li7P3S11 (red curve in Figure <ref>) is metastable across all temperatures as its energy is never low enough to be on the convex hull, which agrees with prior experimental studies <cit.>. At 300 K, it is 4 meV/atom above the convex hull. As temperature increases to 500 K, its E_hull decreases to a minimum of 1.4 meV/atom. Further increases in temperature lead to greater E_hull. Thus, an ideal synthesis temperature should be around 500 K, corresponding to the minimum E_hull. This temperature is remarkably close to its experimentally observed glass transition temperature and helps rationalize why heat treatments near this temperature have been successful for recrystallization <cit.>. The increasing instability with respect to temperature helps explain the experimentally observed tendency to phase separate to Li3PS4 and Li4P2S6 at temperatures greater than 800 K <cit.>. The source of this instability is the competition with α-Li3PS4, its neighboring stable point, which lowers its free energy more with increasing temperature, therefore increasing the convex hull depth (Figure <ref>). We will show in the next section that this arises from the high configurational entropy in α-Li3PS4.
§.§ Li3PS4 polymorphs
γ-Li3PS4 (Pnm2_1) is the stable polymorph at room temperature and has a very low Li conductivity of 3 × 10^-4 mS/cm <cit.>. The reported XRD and ND refinements are in excellent agreement with each other, showing an ordered Li sublattice comprising fully occupied Li1 (4b) and Li2 (2a) sites. Since there is no ambiguity in these refinements, we use this structure to model γ-Li3PS4 <cit.>.
§.§.§ β-Li3PS4 structure
Upon heating, γ transforms to β-Li3PS4 at around 575 K, crystallizing in the orthorhombic Pnma space group, which leads to a lattice volume expansion by ∼3% <cit.>. The zig-zag ordering of PS4 units generates a different Li sublattice with more sites than in γ, leading to the potential for disorder. At around 600 K, XRD refinements have reported Li atoms occupying Li1' (8d), Li2' (4b), and Li3' (4c) sites, with fractional occupancies of 1, 0.7, and 0.3, respectively (XRD refined sites are labeled with apostrophes and ND refined sites without apostrophes for clarity in this discussion)<cit.>. A more recent ND refinement proposes a slightly different model, with reported site splitting of Li1’ (8d) to Li1A (8d) and Li1B (8d), and splitting of Li2' (4b) to Li2 (8d), while retaining its Pnma symmetry <cit.>. The 4 distinct Li sites refined by ND are all partially occupied.
We analyze the geometric discrepancy between XRD and ND refinements of β-Li3PS4 by inspecting the Li coordination environments in the XRD and ND sites. In Figure <ref>, we show (i) the unit cell, (ii) splitting of Li1' (8d), and (iii) splitting of Li2' (4b). The splitting of the Li1' (8d) site (grey in Figure <ref>ii) in fact yields two distinct sites: Li1A (8d) and Li1B (8d) (green and brown in Figure <ref>ii, respectively). Li1A (8d) is essentially identical to Li1' (8d), while Li1B (8d) is its face sharing neighbor 1.7 Å away. The emergence of Li1B as a new Li site can be detected by ND, while in XRD it has not been detected, likely due to the small X-ray scattering factor of Li. Since Li1A and Li1B sites are not related to each other, we will refer to Li1A as Li1 (8d) and Li1B as Li4 (8d) in the following discussion. Li2' (4b) (grey in Figure <ref>iii), with square planar coordination, splits into two neighboring and face-sharing Li2 (8d) sites (orange in Figure <ref>iii), each with 5 fold coordination. XRD was unable to distinguish the two neighboring Li2 (8d) sites, which are only 1.3 Å apart, and instead identified just one Li2' (4b) site.
To assess the accuracy of XRD and ND refinements of β-Li3PS4, we examine for all atomic positions in the DFT relaxed configurations the deviation from their XRD and ND refined sites. This is measured by calculating the normalized root mean squared (NRMS) displacement of relaxed atomic locations from the ND and XRD refined β-Li3PS4 lattices. The atoms of a relaxed structure are mapped back to a refined lattice site to construct the "refined" structure. The atoms of the relaxed and refined structures are then placed on an averaged lattice (in Cartesian coordinates) that minimizes the NRMS displacement, which is defined in equation <ref>, where Δ x_i is the displacement of atom i between the DFT-relaxed structure and ND or XRD-refined model in Cartesian coordinates, N is the number of atoms, and V is the cell volume <cit.>.
NRMS displacement = √(∑_i ^N Δ x_i^2 / N)/(V/N)^1/3
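A minimal implementation of this metric (assuming both structures are already mapped onto the common averaged lattice and expressed as Cartesian coordinate arrays; the toy input is arbitrary) is:

import numpy as np

def nrms_displacement(relaxed_cart, refined_cart, volume):
    """NRMS displacement between relaxed and refined atomic positions (same length units as input)."""
    dx = np.asarray(relaxed_cart) - np.asarray(refined_cart)   # per-atom displacement vectors
    n = len(dx)
    rms = np.sqrt(np.sum(dx**2) / n)                           # root mean squared displacement
    return rms / (volume / n) ** (1.0 / 3.0)                   # normalised by the per-atom length scale

# toy example: 4 atoms, each displaced by 0.1 Angstrom along x, in a 100 A^3 cell
ref = np.zeros((4, 3))
rel = ref + np.array([0.1, 0.0, 0.0])
print(nrms_displacement(rel, ref, volume=100.0))               # ~0.034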
In Figure <ref>, we show violin plots of the distributions of NRMS displacement from the ND-refined structure (blue) and XRD-refined structure (orange). NRMS displacement from the ND structure is significantly smaller compared to the XRD structure, since the 3rd quartile of the ND and the 2nd quartile of the XRD distributions do not overlap (Figure <ref>). The distribution of relaxations from the ND structure also has a smaller range, and thus less probable outliers, suggesting that the ND refinement is more accurate.
To gain insight into the physical nature of ND and XRD refined sites in β-Li3PS4, we examine two low-energy structures that were previously proposed as the ground state in separate first-principles studies <cit.>. These highly similar structures are shown in Figures <ref> and <ref>. One contains fully occupied Li1' (8d) and Li2' (4b) sites, which yields well-ordered linear chains of Li1' and Li2' atoms along [010] and [001], and retains the Pnma symmetry of the underlying lattice—we will refer to this as the XRD ground state (XRD-GS) (Figure <ref>). The other proposed structure is the true, lowest energy ground state in our data set, which is reported to have fully occupied Li1' (8d) and Li2' (4b) sites, but the square planar coordinated Li2' atoms are displaced off-center to a neighboring 5-fold coordination environment, characteristic of the ND refined Li2 (8d) site—we will refer to this as the ND ground state (ND-GS) (Figure <ref>). The Li2 chain of atoms in ND-GS is staggered along [010], which leads to decreased symmetry (P2_12_12_1) compared to XRD-GS (Pnma). The Li site fractional occupancies of ND-GS can be described in the basis of the ND refined sites as x_Li1 = 1, x_Li2 = 0.5, and x_Li3 = x_Li4 = 0.
Although Li2' (4b) is located merely 0.6 Å from a neighboring Li2 (8d) site, the decrease in site energy is substantial, as ND-GS is 3.4 meV/atom lower than XRD-GS. Furthermore, when comparing phonon dispersion spectra, we find that XRD-GS is dynamically unstable with 2 nearly degenerate imaginary optical modes (Figure <ref>), while ND-GS is dynamically stable with no imaginary modes (Figure <ref>), agreeing well with previous reports <cit.>. When visualizing the XRD-GS imaginary optical modes at the Γ wave vector, we observe a collective motion of Li2' atoms (Figure <ref>). This indicates that the XRD refined Li2' (4b) site is a high energy transition state for Li hopping between two neighboring Li2 (8d) sites. These findings highlight the importance of distinguishing fine details of the Li sublattice, as substantial differences in physical behavior can arise when site locations are slightly perturbed.
The thermodynamic disordering behavior of the XRD and ND refined structures at elevated temperature is also compared. We fit separate cluster expansions on each lattice and perform MC simulations to predict the Li site disorder as a function of temperature. In Figure <ref>, the Li site fractional occupancies across temperature are plotted. The XRD structure begins to disorder from XRD-GS at approximately 900 K, and by 1000 K the changes in the Li fractional occupancies are still relatively small, yielding poor agreement with the experimental XRD refinement (Figure <ref>). The ND structure begins to disorder from ND-GS at a lower temperature of about 600 K, and by 1000 K has significant changes in its Li fractional occupancies, highlighted by Li1 (8d) and Li3 (4c) having occupancies of 0.8 and 0.3, respectively. These values show reasonable agreement with the ND refinement at 620 K (0.7 and 0.3) (triangles in Figure <ref>). Our simulations on both the XRD and ND structures underestimate the experimentally reported configurational disorder. However, the ND structure is predicted to have greater disorder and thus better agreement with experiment, suggesting that the ND refinement is more accurate. Specifically, introducing the Li4 (8d) site and increasing the multiplicity of Li2' (4b) to Li2 (8d) generates more configurational states that appear essential towards accurately describing the thermodynamics of this phase.
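As a rough illustration of the canonical MC step used to generate such occupancy curves, the sketch below implements a plain Metropolis Li/vacancy swap over a list of labeled sites; the energy callable is a stand-in for the cluster-expansion Hamiltonian (the actual simulations used the smol package), and the site labels, pass count, and energy function are placeholders, not our production setup.

import numpy as np

def canonical_mc_occupancies(occupancy, site_labels, energy_fn, temperature, n_passes=1000, rng=None):
    """Canonical Metropolis sampling of Li/vacancy disorder.

    occupancy   : (N_sites,) int array, 1 = Li, 0 = vacancy (fixed Li count).
    site_labels : (N_sites,) numpy array of site names, e.g. "Li1", "Li2".
    energy_fn   : callable returning the total energy (eV) of an occupancy
                  vector; stands in for the cluster-expansion Hamiltonian.
    Returns the mean fractional occupancy of each distinct site label.
    """
    rng = np.random.default_rng() if rng is None else rng
    kT = 8.617333e-5 * temperature          # Boltzmann constant in eV/K
    occ = occupancy.copy()
    e_tot = energy_fn(occ)
    sums = {lab: 0.0 for lab in set(site_labels)}
    for _ in range(n_passes):
        for _ in range(len(occ)):           # one pass = N_sites attempted swaps
            i, j = rng.integers(len(occ), size=2)
            if occ[i] == occ[j]:
                continue
            occ[i], occ[j] = occ[j], occ[i]
            e_new = energy_fn(occ)          # full re-evaluation; slow but simple
            if e_new <= e_tot or rng.random() < np.exp(-(e_new - e_tot) / kT):
                e_tot = e_new               # accept the swap
            else:
                occ[i], occ[j] = occ[j], occ[i]  # reject, undo
        for lab in sums:                    # accumulate per-label occupancies
            sums[lab] += occ[site_labels == lab].mean()
    return {lab: sums[lab] / n_passes for lab in sums}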
§.§.§ α-Li3PS4 structure
At high temperature (T > 725 K), β transforms to the orthorhombic Cmcm α-Li3PS4, increasing symmetry (Cmcm is a supergroup of Pnma) and slightly decreasing in density (1.6%) <cit.>. ND refinements report a Li sublattice containing Li1 (16h), Li2 (8e), and Li3 (4c) sites with a high degree of disorder, as indicated by the isotropic Li fractional occupancies of around 0.4 <cit.>. An earlier refinement with XRD was deemed inconclusive, as only 1 of the 3 Li atoms in the formula unit was refined, to a single distinct site, and there were large errors in the atomic displacement parameter (ADP) <cit.>. The ND refinement shows significant improvement by locating 2.9 of the 3 Li and containing lower error in ADP <cit.>. Therefore, we use the ND-refined structure, which contains 3 tetrahedral Li sites over which Li atoms can disorder, to construct our cluster expansion for α-Li3PS4. The disordered unit cell and local Li coordination of α-Li3PS4 are shown in Figures <ref> and <ref>, respectively.
We can observe that α-Li3PS4 contains a well-connected 1D channel of face-sharing Li1-Li2-Li1 sites along [010] (Figure <ref>), which can be associated with fast Li-ion conduction <cit.>. The Li3 sites, which edge-share with Li1, serve to bridge adjacent Li1-Li2-Li1 channels.
From MC simulated annealing, we find the ground state of α-Li3PS4 to be 3.2 meV/atom (26 meV/f.u.) above the ground state of the β polymorph, and 8.0 meV/atom (64 meV/f.u.) above the γ polymorph. The ground state of α-Li3PS4 is shown in Figure <ref>, which contains a slight monoclinic distortion (lattice angle γ = 86.4°). Li atoms only occupy the Li1 (16h) sites and form distorted linear Li chains along [010] and [001] (Figure <ref>). This indicates that Li1 (16h) sites are the most stable, and their face-sharing Li2 (8e) neighbors are higher energy intermediate sites that facilitate rapid Li diffusion. Similarly, the Li3 (4c) sites are higher energy intermediate sites that connect adjacent Li1-Li2-Li1 channels and promote 3D conductivity <cit.>.
MC simulations show that Li starts to occupy Li2 (8e) sites at 200 K, and Li3 (4c) sites at 300 K (Figure <ref>). α-Li3PS4 thus begins to disorder at a much lower temperature compared to β-Li3PS4. By 600 K, Li atoms already occupy a significant fraction of each Li site, whereas β-Li3PS4 only begins to disorder at this temperature. Thus, the α polymorph contains much greater configurational disorder compared to β. This is in qualitative agreement with the experimental ND refinement, which shows very isotropic Li fractional occupancies of around 0.4 for each site at 775 K (triangles in Figure <ref>).
§.§ Li3PS4 phase stability
Using the structural models we validated for the Li3PS4 polymorphs, we assess the stability of each polymorph across temperature by calculating and comparing their free energy. In Figure <ref>, we plot the free energy of α and β relative to γ-Li3PS4. Since γ contains well-ordered Li, we assume it to only create vibrational entropy. At 0 K, the polymorphs ranked in order of decreasing stability are γ, β, and α. The γ-β transition is predicted to occur at 370 K and the β-α transition occurs at 460 K (Figure <ref>). This order of phase transitions matches with experiments, though the predicted transition temperatures are 200-300 K below experimentally observed values. In experiments, it is also commonly observed that α directly transforms to γ without forming β upon cooling <cit.>, which we predict would occur at 420 K. At this temperature, the free energy differences among the polymorphs are very small (< 1 meV/atom), which helps rationalize why a direct transition can occur, especially if the transformation to γ is more kinetically favorable than forming β. We note that the r^2SCAN density functional <cit.> is required to predict the correct order of Li3PS4 polymorph stability, since γ is predicted to be unstable across all temperatures when using PBE <cit.> (SI Figure S1), which was also reported in previous first-principles calculations <cit.>.
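The transition temperatures quoted here follow from the crossings of the relative free-energy curves; a minimal sketch of how such a crossing can be located from tabulated ΔF(T) data (the temperature grid and free energies are assumed inputs, not the values plotted in Figure <ref>) is:

import numpy as np

def crossing_temperature(temps, delta_f):
    """Temperature at which a relative free energy ΔF(T) = F_phase - F_reference
    (tabulated on a common temperature grid) first changes sign, estimated by
    linear interpolation between the bracketing grid points.  Returns None if
    ΔF never changes sign on the grid."""
    temps = np.asarray(temps, dtype=float)
    delta_f = np.asarray(delta_f, dtype=float)
    sign_change = np.where(np.diff(np.sign(delta_f)) != 0)[0]
    if len(sign_change) == 0:
        return None
    i = sign_change[0]
    t0, t1 = temps[i], temps[i + 1]
    f0, f1 = delta_f[i], delta_f[i + 1]
    return t0 - f0 * (t1 - t0) / (f1 - f0)

# e.g. the γ-β transition would be crossing_temperature(T_grid, F_beta - F_gamma).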
When configurational entropy contributions are neglected (dotted lines in Figure <ref>), the free energy of α always lies above β, such that the only accessible transition is γ-β. This is attributed to the highly similar vibrational free energy profiles of α and β. After α begins to disorder at around 200 K, its configurational entropy increases faster than β, which drives the increased stability of α at high temperature. Furthermore, the exclusion of configurational entropy only slightly increases the γ-β transition temperature to 390 K, since β has low configurational entropy at this temperature. The main source of stability for β-Li3PS4 is thus vibrational entropy.
To rationalize the distinctly greater vibrational entropy in β and α compared to γ-Li3PS4, we compare the phonon density of states (DOS) in each phase, which are shown in Figure <ref>. β and α-Li3PS4 contain significantly larger DOS at 1-2 THz and around 6 THz (Figure <ref>). The projected DOS (pDOS) shows that for all phases, the 1-2 THz region is dominated by sulfur (S) modes, which are activated at low temperature around 100 K. From visualizing these modes, we observe that they mainly correspond to librations of the PS4 groups. Furthermore, γ has no vibrations at 6 THz, whereas the high temperature phases contain significant DOS near this frequency. This frequency lies in the region between 5 to 8 THz (240 to 380 K), where β (Figure <ref>) and α (Figure <ref>) have roughly equal projected density of Li and S phonon modes, whereas in γ there is a significantly larger projected density of S modes than Li (Figure <ref>). The activation of larger amplitude Li modes at around room temperature contributes to greater thermodynamic stability, and potentially towards high Li mobility in β and α-Li3PS4. This finding is consistent with prior reports highlighting the relation between fast Li-ion conductivity and vibrational entropy in some superionic conductors <cit.>.
Since the differences in low-temperature vibrational modes are likely dictated by the bonding within the S sublattice, we examine the electronic density of states of each ground state, which are shown in Figure <ref>. For all three polymorphs, the manifold of valence bands below the Fermi level dominantly consists of S 3p states which are spread over an energy range of ∼3 eV. The relatively large band widths indicate that these states are delocalized in character and should represent long-range van der Waals interactions among S atoms in separate PS4 units (Figure <ref>). The lower energy core band manifold consists of mostly P 3p and S 3p states, which we attribute to P-S binding between the PS4 groups. The core band states are spread over a narrower energy range of ∼1 eV, indicating that these states are more localized.
A key difference in electronic structure is observed in the γ polymorph, which has a larger energy gap between the core and valence band states, arising from narrower band widths in the core and valence band manifolds (Figure <ref>). The narrower core band widths can arise from stronger hybridization of S 3p and P 3p states in neighboring PS4 units, leading to more localization. This stronger hybridization between S and P atoms may lead to smaller interaction between S 3p states on neighboring PS4 groups, which contributes to decreased valence band widths. It appears that the uni-directional PS4 arrangement and denser hcp-type anion packing in γ-Li3PS4<cit.> facilitates more isotropic and localized P-S bonding states to inhibit facile S motion. These factors would contribute to greater S sublattice stiffness and reduced density of low-frequency S vibrational modes.
§.§ Li7PS6 polymorphs
Experiments show that orthorhombic LT-Li7PS6 (Pna2_1) is well-ordered and transforms to the higher symmetry cubic HT-Li7PS6 (F-43m) phase at 483 K <cit.>. According to XRD refinements, HT-Li7PS6 contains a disordered Li sublattice with one distinct Li1 (48h) site, which corner-shares with PS4 units and face-shares with its nearest Li1 neighbor <cit.>. No ND refinement has yet been reported on the HT-Li7PS6 phase; however, ND refinements have been reported for a Cl-doped analogue Li6PS5Cl <cit.>. An additional Li2 (48h) site was identified in Li6PS5Cl that edge-shares with PS4 units, and face-shares with its nearest Li1 and Li2 neighbors to form a cage-like Li substructure (Figure <ref>), while the sublattices of the PS4 and isolated S atoms are identical. Since Cl substitutes a fraction of S atoms without causing much change in lattice parameters, we presume that the Li sites in the doped and pristine phases are very similar and comparable <cit.>.
As done with the β-Li3PS4 phase, we compare the NRMS atomic relaxations (Equation <ref>) of the DFT relaxed configurations starting from either the ND or XRD refined atomic positions of HT-Li7PS6, the distributions of which are plotted in Figure <ref>. We observe that there is a much smaller NRMS atomic displacement from the ND-refined lattice (mean of 0.12) compared to the XRD-refined lattice (mean of 0.19), indicating that the ND positions for Li are closer to the energy minimum.
We model Li-vacancy disorder in HT-Li7PS6 by fitting a CE using the ND refinement of Li6PS5Cl containing Li1 (48h) and Li2 (48h) sites, with all Cl atoms replaced by S atoms. Through MC simulated annealing, we identify a ground state ordering, shown in Figure <ref>, which contains 6 Li atoms in the unit cell (out of 28 Li) occupying Li2 sites, as evidenced by their edge-sharing with PS4 (orange in Figure <ref>). The prominence of Li2 as a stable site in the ground state provides further evidence that the structure refined by ND is more accurate and that the Cl doping does not influence the location of Li sites.
We perform MC simulations to predict the Li site occupancies as a function of temperature, which are plotted in Figure <ref>. The fraction of Li occupying Li1 is greater at all simulated temperatures, in reasonable agreement with Li site occupancy of Li6PS5Cl measured by ND at ambient temperature <cit.>. The preference of Li going to Li1 sites could be explained by its corner-sharing with PS4, which can reduce the repulsive interaction with P cations compared to the edge-sharing Li2 sites <cit.>.
The HT-Li7PS6 ground state structure was found to be 10.4 meV/atom more stable than the ordered LT-Li7PS6 structure proposed by XRD (shown in SI Figure S2), suggesting that the XRD refinement for LT-Li7PS6 may not be accurate <cit.>. To seek a more representative LT-Li7PS6 structure, we perturb its XRD refined structure through an AIMD simulation. The structure is heated to 800 K for 2 ps, held for 30 ps, and annealed to 100 K for 20 ps. Samples along the AIMD trajectory are relaxed, from which we identify a significantly more stable structure that is 1.2 meV/atom below the HT-Li7PS6 ground state. This new LT-Li7PS6 ground state (shown in Figure <ref>) has a slight monoclinic distortion (lattice angle β = 91°), resulting from a small relaxation of the PS4 units away from a parallel arrangement, and some Li atoms are shifted to new coordination environments. We also find a large spread of energies among the sampled structures that were relaxed (Figure <ref>), indicating that LT-Li7PS6 is likely configurationally disordered as well. We leave further analysis of the LT-Li7PS6 Li sublattice for future investigation. Our investigation nevertheless confirms that, with our reassignment of the Li sites, LT-Li7PS6 is the ground state at low temperature.
Using our newly proposed ground states, we predict the phase stability of the Li7PS6 polymorphs at finite temperatures. In Figure <ref>, we plot the free energy of HT-Li7PS6 relative to LT. HT-Li7PS6 becomes stable at 270 K, with the majority of its stabilization relative to LT-Li7PS6 arising from configurational entropy contributions (Figure <ref>). Our predicted transition temperature is roughly 200 K below its experimentally observed value of 480 K. The likely cause for this understabilization of LT-Li7PS6 is that we may not have identified its true ground state yet, and that it contains significant configurational entropy contributions that have been neglected from our model because of the lack of a precise Li sublattice.
§.§ Li7P3S11
Li7P3S11 crystallizes in the low symmetry P-1 space group, and is composed of PS4 and P2S7 units. In both XRD and ND refinements, the Li sublattice is ordered with 7 distinct sites <cit.>. However, each refinement reports Li atoms occupying a different set of sites (the structures are shown in SI Figure S4). From our DFT calculations, we find that the XRD refined structure is substantially more stable than the ND refined structure by 9 meV/atom. A more recent first-principles study by Chang and coworkers proposed a disordered Li sublattice with 8 additional Li sites, identified from AIMD simulations <cit.>. The authors enumerated structures based on the disordered Li sublattice and reported a ground state (SI Figure S4c) that is 16 meV/f.u. more stable than the XRD-refined structure, using the PBE functional. This value is qualitatively consistent with our calculations using r^2SCAN, which yield an energy difference of 22 meV/f.u. (1 meV/atom).
We train a CE on the previously reported disordered Li7P3S11 lattice containing 15 distinct Li sites, which are a sum of the sites identified from XRD and AIMD. The unit cell is shown in Figure <ref>, from which we can observe that the possible Li sites include a range of planar and tetrahedral coordination environments with varying degrees of distortion. Through MC simulated annealing, we uncover a new ground state ordering (SI Figure S4d) that is 21 meV/f.u. (1 meV/atom) more stable than the ground state previously proposed by Chang and coworkers <cit.>.
Since crystallographic refinements have not reported the existence of configurational disorder in this phase, it is important to quantify the degree of disorder, and compare this with other superionic conductors <cit.>. To that end, we calculate the configurational entropy as a function of temperature with MC simulations for Li7P3S11, and compare it to that of α-Li3PS4, β-Li3PS4, and HT-Li7PS6, which are plotted in Figure <ref>. Li7P3S11 (red) is predicted to contain significant configurational entropy that is greater than β-Li3PS4 (light blue), and lower but comparable to α-Li3PS4 (dark blue) and HT-Li7PS6 (green). This result corroborates the additional Li sites in the disordered Li sublattice identified from AIMD <cit.>. We remark that all superionic conductors in this phase space contain significant configurational entropy that is of the same order of magnitude, indicating a potential correlation between superionic conductivity and configurational entropy.
§ DISCUSSION
Through applying a range of first-principles techniques to capture electronic, configurational, and vibrational sources of free energy in the pseudo-binary Li2S-P2S5 system, we recover all experimentally observed polymorph phase transitions in Li3PS4 and Li7PS6, and well-established trends such as the metastability of Li7P3S11. An accurate assessment of the configurational entropy required precise information on the possible Li sites in these structures. We find that ND refinements tend to contain more accurate details about Li sites and degree of disorder, compared to XRD refinements. Our first principles calculations show that these details from ND are critical towards predicting physically accurate dynamical stability and thermodynamic behavior.
Vibrational and configurational sources of entropy are shown to be crucial towards describing phase stability trends. Among the Li3PS4 polymorphs, the superionic conductors α and β have distinctly greater vibrational entropy compared to γ, which has low Li conductivity. We attribute this to the softness of the anion sublattice, as α and β-Li3PS4 contain significantly more low-frequency S vibrational modes, mainly corresponding to librations of the PS4 group. The potential electronic origin of the stiffer anion sublattice in γ-Li3PS4 lies in the stronger hybridization of the P 3p and S 3p states near the Fermi level. We postulate that these subtle differences in longer range binding between PS4 units are the reason why a meta-GGA level of theory is required to predict the correct order of Li3PS4 polymorph stability, as the SCAN family of density functionals has been shown to be superior at capturing medium-range van der Waals interactions <cit.>. These findings can potentially motivate new design principles for novel superionic conductors based on features of the phonon and electronic band structure <cit.>.
Configurational sources of entropy are also essential towards describing phase stability trends. The polymorphic phase transitions involving α-Li3PS4 and HT-Li7PS6 can only be predicted when accounting for configurational disorder, which in turn requires accurate assessment of possible sites that Li can access. Furthermore, all superionic conductors in this phase space contain a significant amount of configurational entropy. β-Li3PS4 has the lowest configurational entropy, and coincidentally its bulk ionic conductivity has been reported to be low (8.9 × 10^-3 mS/cm), with only its nanoporous form having high Li conductivity (0.16 mS/cm) <cit.>. The high temperature α polymorph has considerably greater configurational entropy and room temperature Li conductivity of ∼2 mS/cm <cit.>. Meanwhile, the γ polymorph has the lowest ionic conductivity and contains no configurational disorder. This observation suggests an inherent correlation between fast Li mobility and high configurational entropy.
This trend is observed in many other systems as well. We show that HT-Li7PS6 has high configurational entropy, comparable to α-Li3PS4, and it is experimentally shown to have greater Li conductivity than LT-Li7PS6 <cit.>. This trend is not unique to sulfide superionic conductors, as the oxide garnet Li7La3Zr2O12 (LLZO) has a low-temperature ordered tetragonal phase with low Li conductivity, and a high-temperature disordered superionic conductor with increased cubic symmetry <cit.>. We observe that superionic conductors tend to be high temperature polymorphs with increased symmetry arising from the configurational disorder. These phases must be entropically stabilized at high temperature, which lends further support that high entropy is favorable towards achieving a superionic conducting state.
We can rationalize the origin of high configurational entropy by analyzing Li site energies. β-Li3PS4 and its higher symmetry α-Li3PS4 polymorph are ideal systems to compare, as they have the same number of Li atoms and Li sites per unit cell. A first order approximation for the Li site energy is the site's effective cluster interaction (ECI) energy (J_ECI) obtained from the CE using an orthonormal basis, which are plotted in Figure <ref>. It can be shown from the cluster decomposition framework that this is a unique and physical value to describe the energy of Li occupying a particular site <cit.>. This approximation can be justified by the observation that single-site ECI tend to be much larger in magnitude than the multi-site pair and triplet ECI (SI Figure S5); single-site ECI thus carry most of the weight in the total energy. We also calculate a site energy normalized by its multiplicity (Ĵ_ECI), which would provide a better estimate of the energy contribution of the site per unit cell. This is described in Equation <ref>, where M is the multiplicity of a distinct Li site and N is the total number of Li sites per unit cell.
Ĵ_ECI = J_ECI·M/N
In Figure <ref>, we plot the standard deviation of Ĵ_ECI in each Li3PS4 phase, showing that α contains a significantly smaller spread of Ĵ_ECI (8 meV) compared to the β polymorph (41 meV). Thus, in α the Li atoms will have a comparable energetic preference for occupying all sites. Many configurations will then have similar energy, which contributes towards its greater configurational entropy. The larger Li site energy spread in β means that Li atoms will tend to order by occupying the lowest energy sites and thus have smaller configurational entropy.
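A short sketch of the normalized site-energy analysis behind Figure <ref>, taking the point-term ECI and site multiplicities as inputs (the input dictionaries are placeholders for illustration, not our fitted values), is:

import numpy as np

def site_energy_spread(point_ecis, multiplicities):
    """Multiplicity-normalized Li site energies and their spread.

    point_ecis     : dict {site_label: J_ECI in eV} from the point terms of a
                     cluster expansion in an orthonormal basis.
    multiplicities : dict {site_label: site multiplicity M per unit cell}.
    Returns (dict of normalized site energies, their standard deviation) in eV.
    """
    n_total = sum(multiplicities.values())        # total Li sites per unit cell
    j_hat = {site: j * multiplicities[site] / n_total
             for site, j in point_ecis.items()}
    return j_hat, float(np.std(list(j_hat.values())))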
The potential connection between high Li mobility and configurational entropy suggests a rather obvious design strategy of doping superionic conductor phases to increase configurational disorder. Indeed, there have been many examples where cation or anion doping improves Li conductivity and enables room temperature phase stability. These include adding Si into Li3PS4 to form Li_3.25P_0.75Si_0.25S_4 in the β-Li3PS4 structure <cit.>, adding Cl or other halogen atoms (X) to Li7PS6 to form Li6PS5X in the HT-Li7PS6 structure <cit.>, and doping Al or Ga into Li7La3Zr2O12 to stabilize its high-temperature cubic structure <cit.>. We have shown that the disorder arising from only Li and vacancies can generate substantial configurational entropy that can dictate phase stability trends. Introducing doping would generate additional disorder in the non-Li cation or anion sublattices, which should considerably increase configurational entropy and provide greater thermodynamic stability at lower temperatures.
Previous studies have also shown that adding dopant species can alter the Li site energy landscape to facilitate dramatic improvements in ionic conductivity. Zeng and co-workers demonstrated that high-principal element cation doping can boost ionic conductivity by orders of magnitude <cit.>. Through first principles calculations, they showed that distortions to Li environments introduced by dopants can lead to Li site energy levels that are more closely spaced, promoting Li-ion percolation. It is possible that the soft degrees of freedom for libration of the PS4 units as seen in several polymorphs further generates the distribution of temporary site energies which leads to low energy barrier percolation pathways<cit.>. Similarly, Wang and co-workers found that adding Br into Li3YCl6 to form Li_3YBr_1.5Cl_4.5 introduced a larger variety of closely spaced octahedral Li site energy levels, leading to a lower order-disorder transition temperature and increased Li conductivity <cit.>. These previous studies highlight that engineering a more uniform Li site energy landscape will facilitate more facile Li-ion migration. We can synthesize this with our finding that smaller variance in Li site energies necessarily leads to greater configurational disorder as well, which illustrates why the phenomena of superionic conductivity and high configurational entropy should be intrinsically linked. This rationalizes why introducing dopants has been, and should continue to be, an essential design principle for discovering superionic conductors with improved Li conductivity and thermodynamic accessibility.
Accurately modeling configurational disorder in each phase could only be achieved after clarifying the details of Li sublattices. We demonstrate that ND refinements of Li sublattices in α-Li3PS4, β-Li3PS4, and the Li6PS5Cl analogue of HT-Li7PS6 contain critical details such as site splitting and additional sites that XRD could not detect. These additional sites likely lead to more low-energy configurations that are vital for describing thermodynamic behavior. The deficiencies of XRD refinements can be attributed to Li having poor XRD sensitivity due to its small X-ray scattering factor, while the negative neutron scattering length of Li leads to greater sensitivity in ND. Despite its known limitations, XRD often yields reasonable results in many Li-containing materials, such as Li transition metal oxide cathodes, and remains a standard technique in characterizing Li battery materials. We speculate that the spurious XRD refinements highlighted in this study stem from very high Li mobility, which would smear the detected Li electron density and thus further deteriorate sensitivity. The close agreement between ND and XRD refinements of γ-Li3PS4 can then be explained by its low Li conductivity <cit.>. Our discovery of configurational disorder in LT-Li7PS6 highlights that there may still be additional details about the Li substructures that are yet to be uncovered, which should motivate further experimental and computational studies to refine the Li atomic arrangements.
Although we have predicted the phase stability trends and rationalized them on the basis of configurational and vibrational contributions, our predicted phase transition temperatures tend to underestimate experimentally observed values by about 200 K. The phase stability trends in this system are described on a rather fine energy scale on the order of 10 meV/atom. To highlight the sensitivity of the energy scale, subtle changes such as hypothetically shifting the free energy curve of β-Li3PS4 up by 3 meV/atom can already increase the γ-β transition temperature to its experimentally observed window. These small energy differences are easily within the bounds of error in our computational techniques. Specifically, it is known that semi-local density functionals, such as the GGA and r^2SCAN functionals used in this study, struggle to capture long-range dipole-induced dipole interactions <cit.>, which are likely to be prominent within the S sublattice in these materials. Furthermore, there is remnant self-interaction error in density functional approximations <cit.>, which can be mitigated by using more computationally expensive hybrid functional techniques <cit.> or many-body treatments of electron correlation <cit.>.
The error from CE configurational energies is compounded onto the DFT error since the CEs are trained on DFT data. On the basis of cross validation (CV) root mean squared error (RMSE), CE energy error ranges from 1 to 5 meV/atom, depending on the phase (SI Figure S6). Furthermore, anharmonic corrections to phonon calculations may yield key differences in the band dispersion and resulting vibrational free energy, as previously demonstrated in the sodium thiophosphate (Na3PS4) analogue <cit.>. The facile and long-range nature of Li hopping modes are a potential source of anharmonicity in superionic conductors. Finally, we have treated the configurational and vibrational entropy contributions as independent, as is common in first-principles alloy theory <cit.>. A more accurate, but significantly more computationally intensive approach, would be to also include the configurational-dependence of the vibrational entropy, as can be formally done with the CE approach <cit.>.
§ CONCLUSION
A phase diagram of the pseudo-binary Li2S-P2S5 system has been constructed from first-principles calculations. Well-established experimental trends, such as the phase transitions among Li3PS4 and Li7PS6 polymorphs, and the metastability of Li7P3S11 are recovered. The superionic conductors α-Li3PS4, β-Li3PS4, HT-Li7PS6, and Li7P3S11 are all predicted to be metastable at 300 K (E_hull = 4, 1, 12, and 4 meV/atom, respectively). We find that accounting for both vibrational modes and Li configurational degrees of freedom are essential for describing phase stability trends. Physically accurate evaluation of configurational entropy could only be made after clarifying the details of the Li sublattices in the superionic conductors. We demonstrate that these phases all contain significant configurational entropy, which suggests a correlation between high Li configurational entropy and fast Li conduction. Engineering a more uniform Li site energy landscape through doping should thus be an essential design principle for discovering novel superionic conductors with improved thermodynamic stability and Li conductivity at ambient temperature.
§ METHODS
All electronic structure calculations were performed using the Vienna ab-initio simulation package (VASP) <cit.>. For the ground state structures of each phase, ionic relaxations were performed with 1e-05 eV convergence in the total energy and 1e-02 eV/Å in the forces, initially using the generalized gradient approximation (GGA) functional as parameterized by Perdew, Burke, and Ernzerhof (PBE) <cit.> and projector augmented wave (PAW) potentials <cit.>. The GGA-converged structure was further relaxed with the meta-GGA r^2SCAN functional <cit.>, with a k-point spacing dependent on the band gap of the PBE calculation, a scheme proposed by Kingsbury and co-workers <cit.>. The final reported formation energies were obtained from a static calculation with denser k-point spacing of 0.2 Å^-1. Applying increased meta-GGA level of theory was essential for capturing physical polymorph phase stability, as γ-Li3PS4 and β-Li3PS4 had nearly identical electronic formation energies using PBE (SI Table SI). The relaxed structure, total energy, and calculation details for each phase's ground state are provided as Pymatgen ComputedStructureEntry JSON files in the attached folder <cit.>.
CE construction and MC sampling were performed with the smol Python package <cit.>. The primitive structures used to construct the CE for each phase are described in SI Table SII-SV. CEs were trained on superstructures relaxed using the PBE functional only, to limit the computational cost. It has been previously shown that similar schemes of mixing levels of theory can yield physically accurate phase diagrams <cit.>. We simultaneously parameterize the CE with an additional electrostatic energy term, which was calculated from the bare Coulomb interaction between idealized Li^+, P^5+, and S^2- point charges with the Ewald summation method. CE fitting was performed in a piece-wise manner, where the initial fit only trained the point correlation functions and effective dielectric constant (ϵ), using L2 norm penalized linear regression. The residual of the initial fit was used to train the pairs and higher-order effective cluster interactions (ECI) with penalization of the L1 norm. We observed that this method yields improved fit stability and a more physical ϵ, which is attributed to decreased regularization of ϵ <cit.>. MC sampling was performed in supercells for each phase in the canonical ensemble, with decreasing temperatures starting at 1000 K. Supercells were constructed to contain at least 200 Li sites and have similar lattice parameters. At least 40000 MC passes were performed at each temperature. To calculate configurational free energy, the average internal energy (⟨ E ⟩) at each temperature was integrated over inverse thermal energy (β = 1/kT) (Equation <ref>).
β F_config(T) = β_0 F_config(T_0) + ∫_{β_0}^{β} ⟨E⟩ dβ'
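A minimal sketch of this integration (assuming ⟨E⟩ has been tabulated from MC on a descending temperature grid whose first entry is the reference T_0, where F_config is taken as known) might look like:

import numpy as np

def free_energy_from_mc(temps, e_avg, f_ref):
    """Thermodynamic integration of the configurational free energy.

    temps : (M,) temperatures in K, monotonically decreasing from the
            reference T0 (e.g. 1000 K) at which F_config = f_ref is known.
    e_avg : (M,) canonical-ensemble average energies <E> (eV) at those temperatures.
    f_ref : configurational free energy F_config(T0) in eV.
    Returns arrays of F_config(T) in eV and S_config(T) in eV/K.
    """
    k_b = 8.617333e-5                                  # Boltzmann constant, eV/K
    temps = np.asarray(temps, dtype=float)
    e_avg = np.asarray(e_avg, dtype=float)
    beta = 1.0 / (k_b * temps)
    # cumulative trapezoidal integral of <E> d(beta) from beta0 up to each beta
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (e_avg[1:] + e_avg[:-1]) * np.diff(beta)))
    )
    beta_f = beta[0] * f_ref + integral
    f_config = beta_f / beta
    s_config = (e_avg - f_config) / temps              # S = (<E> - F) / T
    return f_config, s_config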
New ground states were found from simulated annealing, using a similar procedure of canonical MC sampling at decreasing temperature, but with unit cells and smaller supercells.
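The piece-wise fitting strategy described above can be mimicked schematically with standard regularized regressions; the sketch below uses scikit-learn's Ridge and Lasso as stand-ins for the fits performed in smol, and the split of the correlation matrix into point and multi-site blocks, as well as the regularization strengths, are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import Ridge, Lasso

def piecewise_ce_fit(X_point, X_multi, energies, alpha_l2=1e-4, alpha_l1=1e-5):
    """Two-stage cluster-expansion fit: point correlation functions (with the
    electrostatic/dielectric term assumed included as a column of X_point)
    fitted first with an L2 penalty, then pair and higher-order terms fitted
    to the residual with an L1 penalty.

    X_point  : (n_structures, n_point_terms) correlation matrix.
    X_multi  : (n_structures, n_multi_terms) pair/triplet correlation matrix.
    energies : (n_structures,) DFT energies in eV per primitive cell.
    Returns the concatenated ECI vector (point terms first).
    """
    stage1 = Ridge(alpha=alpha_l2, fit_intercept=False).fit(X_point, energies)
    residual = energies - stage1.predict(X_point)
    stage2 = Lasso(alpha=alpha_l1, fit_intercept=False, max_iter=100000).fit(X_multi, residual)
    return np.concatenate([stage1.coef_, stage2.coef_])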
Harmonic phonon calculations were performed on the ground state of each phase, with the frozen phonon method using Phonopy and VASP <cit.>. Structures were relaxed with PBE to a stricter convergence criteria of 1e-7 eV in energy and 1e-3 eV/Å in the forces. Atomic displacements were generated on supercells, which were created such that each lattice parameter is greater than 12 Å and nearly equal to each other. For P2S5 only, we calculate the phonon properties using density functional perturbation theory (DFPT) as implemented in VASP <cit.>, because we observed that the frozen phonon method yielded many imaginary modes, which we attribute to a strongly anharmonic potential energy surface. Non-analytical correction to modes near the Γ wave vector was performed by incorporating the dielectric properties, to account for longitudinal optical and transverse optical (LO-TO) mode splitting in polar ionic materials in the long wavelength limit <cit.>. Dielectric permittivity and Born effective charge tensors were computed with DFPT <cit.>, using a denser reciprocal space discretization of 0.125 Å^-1 to ensure convergence of dielectric properties <cit.>. The vibrational free energies and phonon total density of states for each phase (if not already shown in the previous sections) are plotted in SI Figures S8 and S9, respectively.
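For reference, a minimal sketch of the harmonic vibrational free energy that Phonopy evaluates internally, taking a flat list of mode frequencies in THz from a Brillouin-zone mesh (a uniform mesh is assumed, so the result should be divided by the number of q-points to obtain a per-cell value), is:

import numpy as np

H_EV_S = 4.135667696e-15   # Planck constant in eV·s
K_B = 8.617333e-5          # Boltzmann constant in eV/K

def harmonic_f_vib(frequencies_thz, temperature):
    """Harmonic vibrational free energy (eV) from phonon frequencies in THz;
    imaginary (negative) frequencies are discarded."""
    freqs = np.asarray(frequencies_thz, dtype=float).ravel()
    energies = H_EV_S * freqs[freqs > 0.0] * 1e12      # mode energies hν in eV
    if temperature == 0.0:
        return 0.5 * np.sum(energies)                  # zero-point energy only
    x = energies / (K_B * temperature)
    return np.sum(0.5 * energies + K_B * temperature * np.log(1.0 - np.exp(-x)))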
Using the electronic, configurational, and vibrational free energies, the formation free energies and resulting phase diagrams were computed with Pymatgen <cit.>. The formation free energies of phases relative to the Li2S and P2S5 end points are shown in SI Figure S7.
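A schematic of the hull construction with Pymatgen is given below; it assumes elemental reference energies for Li, P, and S are supplied alongside the compound energies (the text itself works on the pseudo-binary Li2S-P2S5 section, which can equivalently be handled with CompoundPhaseDiagram), and the input dictionary is a placeholder, not our computed data. At finite temperature one simply substitutes formation free energies for the 0 K energies.

from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

def energies_above_hull(energies):
    """energies: dict mapping formula strings (e.g. "Li3PS4", "Li2S", plus the
    elemental references "Li", "P", "S") to total (free) energies in eV per
    formula unit.  Returns {reduced formula: energy above the hull in eV/atom}."""
    entries = [PDEntry(Composition(formula), energy) for formula, energy in energies.items()]
    diagram = PhaseDiagram(entries)
    return {entry.composition.reduced_formula: diagram.get_e_above_hull(entry)
            for entry in entries}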
§ ACKNOWLEDGEMENTS
The authors would like to thank Prof. Kristin Persson, Prof. Geoffrey Hautier, and Sunny Gupta for useful insights on phonon calculations. This work was supported by the Assistant Secretary of Energy Efficiency and Renewable Energy, Vehicle Technologies Office of the US Department of Energy (DOE), under contract no. DE-AC02-05CH11231 under the Advanced Battery Materials Research (BMR) Program. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under contract no. DE-AC0205CH11231, and the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by the National Science Foundation grant number ACI1053575.
|
http://arxiv.org/abs/2307.02755v1
|
20230706032649
|
NiCrAl piston-cylinder cell for magnetic susceptibility measurements under high pressures in pulsed high magnetic fields
|
[
"Katsuki Nihongi",
"Takanori Kida",
"Yasuo Narumi",
"Nobuyuki Kurita",
"Hidekazu Tanaka",
"Yoshiya Uwatoko",
"Koichi Kindo",
"Masayuki Hagiwara"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
NiCrAl piston-cylinder cell for magnetic susceptibility measurements under high pressures in pulsed high magnetic fields
Center for Advanced High Magnetic Field Science (AHMF), Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
Center for Advanced High Magnetic Field Science (AHMF), Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
Center for Advanced High Magnetic Field Science (AHMF), Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
Department of Physics, Tokyo Institute of Technology, Meguro-ku, Tokyo 152-8551, Japan
Innovator and Inventor Development Platform (IIDP) Tokyo Institute of Technology Nagatsuda, Midori-ku, Yokohama 226-8502, Japan
The Institute for Solid State Physics (ISSP), The University of Tokyo, Kashiwa, Chiba 277-8581, Japan
The Institute for Solid State Physics (ISSP), The University of Tokyo, Kashiwa, Chiba 277-8581, Japan
Center for Advanced High Magnetic Field Science (AHMF), Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
[email protected]
We developed a metallic pressure cell made of nickel-chromium-aluminum (NiCrAl) for use with a non-destructive pulse magnet and a magnetic susceptibility measurement apparatus with a proximity detector oscillator (PDO) in pulsed magnetic fields of up to 51 T under pressures of up to 2.1 GPa. Both the sample and sensor coil of the PDO were placed in the cell so that the magnetic signal from NiCrAl would not overlay the intrinsic magnetic susceptibility of the sample. A systematic investigation of the Joule heating originating from metallic parts of the pressure cell revealed that the temperature at the sample position remains at almost 1.4 K until approximately 80 % of the maximum applied magnetic field (H_ max) in the field-ascending process (e.g., 40 T for H_ max of 51 T). The effectiveness of our apparatus was demonstrated by investigating the pressure dependence of the magnetization process of the triangular-lattice antiferromagnet Ba_3CoSb_2O_9.
Masayuki Hagiwara
August 1, 2023
§ INTRODUCTION
Extreme conditions, such as high pressure, high magnetic field, and low temperature, are occasionally required to search for new properties and phenomena in condensed-matter materials. For instance, the ground states of geometrically frustrated magnets (GFMs) are infinitely degenerate at low temperatures, and exotic physical phenomena, such as a quantum spin-liquid state and quantum phase transitions, have been reported under extreme conditions<cit.>. In GFMs, a high magnetic field lifts the degeneracy and sometimes induces exotic magnetic phases. High pressure alters the magnetic anisotropy and exchange interactions between magnetic ions in a magnetic material by shrinking its crystal lattice. Recently, the triangular-lattice antiferromagnet Cs_2CuCl_4, one of the GFMs, was reported to exhibit multiple magnetic-field-induced phase transitions under high pressure at low temperatures<cit.>. Therefore, experimental techniques that can be used under these extreme conditions are desirable to clarify the physical properties of condensed-matter materials.
The development of measurement techniques under multiple extreme conditions has been undertaken at pulsed high magnetic field facilities. Thus far, the magnetization curves of several magnetic materials measured by a conventional induction method using pick-up coils were reported under pressures of up to 0.95 GPa in pulsed magnetic fields of up to 50 T<cit.>. In these studies, a non-destructive pulse magnet and a self-clamped piston-cylinder cell (PCC) made of beryllium-copper (CuBe) or nickel-chromium-aluminum (NiCrAl) were utilized. The magnetization signal was detected by winding pick-up coils with approximately 100 turns around the exterior of the PCC (Fig.<ref>(c)). Therefore, the measurement signals were degraded by the low sample filling rate in the pick-up coils and the noise induced by the eddy current in the metallic parts of the PCC caused by pulsed magnetic fields. Moreover, the eddy current causes Joule heating, resulting in a temperature rise of the sample. Hamamoto et al. reported the effect of pressure on the metamagnetic transition in CeRh_2Si_2 above 6 K in pulsed high magnetic fields using a CuBe PCC<cit.>. The metamagnetic transition field of CeRh_2Si_2 was reported to be almost independent of temperature, at least below 15 K, but the temperature change of the sample during the magnetic-field sweep was unknown. In magnetic materials such as GFMs with a low Néel temperature T_ N, the magnetic properties are often sensitive to temperature changes at low temperatures and the measurements to determine these properties need to be taken below the liquid-helium temperature (∼ 4.2 K). However, it is difficult to use the aforementioned apparatus to study GFMs.
To suppress the Joule heating, the cell body of the PCC was made of NiCrAl alloy with a lower conductivity than the CuBe alloy. In addition, the tensile strength of the NiCrAl alloy (∼ 2.37 GPa at room temperature (RT)) is higher than that of the CuBe alloy (∼ 1.35 GPa at RT)<cit.>. However, the magnetic susceptibility of the NiCrAl alloy was approximately ten times larger than that of the CuBe alloy<cit.>. Therefore, the practical use of a NiCrAl PCC is limited to materials with large magnetization magnitudes. To overcome these problems, we developed magnetometry based on a radio frequency (RF) technique using a proximity detector oscillator (PDO) <cit.>.
The PDO is an inductance (L)-capacitance (C) self-resonating LC tank circuit based on the widely available proximity detector chip used in modern metal detectors. This device can detect the magnetic susceptibility and/or electrical conductivity of a sample in pulsed high magnetic fields<cit.>. In this technique, the inductance change of a small sensor coil with tens of turns in the LC tank circuit is measured when a magnetic field is applied. The resonance frequency of the LC tank circuit at zero field is f_0 = 1/(2π√(LC)). When a sample is placed in the sensor coil, L changes depending on the magnetic susceptibility and/or electrical conductivity of the sample in the magnetic field. Hereafter, we call this technique the LC method. The LC method detects the change in the resonance frequency (Δ f) corresponding to the change in L. When the sample is a magnetic insulator, Δ f is proportional to the change in the dynamic magnetic susceptibility (χ = Δ M/ Δ H), as follows:
Δf/f_0 = -ΔL/(2L) ∝ -(1/2)(V_s/V_c) 4πχ,
where V_s is the volume of the sample inside the sensor coil, and V_c is the inside volume of the sensor coil. According to Eq. <ref>, the absolute value of Δ f increases as the sample filling rate of the sensor coil (V_s/V_c) increases. The sensor coil typically consists of only 5∼30 turns with a diameter as small as 300 μm. Therefore, an effective approach is to place the small sensor coil, including the sample, inside the small interior space of a high-pressure cell, because the sensor coil does not detect the magnetization of the pressure cell.
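A minimal numerical sketch of Eq. <ref>, useful for estimating the expected shift for a given filling factor and dimensionless (CGS) volume susceptibility, is given below; the numbers in the usage comment are hypothetical, not measured values.

import math

def pdo_frequency_shift(f0_hz, chi_cgs, filling_factor):
    """Relative and absolute PDO frequency shift from Eq. <ref>:
    Δf/f0 ≈ -(1/2)(V_s/V_c) 4π χ, with χ the dimensionless volume
    susceptibility in CGS units and filling_factor = V_s/V_c."""
    relative_shift = -0.5 * filling_factor * 4.0 * math.pi * chi_cgs
    return relative_shift, relative_shift * f0_hz

# e.g. pdo_frequency_shift(37e6, chi_cgs=1e-3, filling_factor=0.9) returns the
# expected shift for a hypothetical susceptibility of 1e-3 (CGS) at f0 = 37 MHz.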
Magnetic susceptibility measurements, conducted under high pressure by utilizing the LC method in static magnetic fields, have been reported <cit.>. However, such measurements in pulsed magnetic fields were rarely reported. Recently, Sun et al. developed a diamond anvil cell (DAC) fabricated mainly of insulating composites that minimize Joule heating in pulsed high magnetic fields. They performed magnetic susceptibility measurements of the quantum antiferromagnet [Ni(HF_2)(pyz)_2]SbF_6 in pulsed magnetic fields of up to 65 T under pressure of up to 5.4 GPa by the LC method<cit.>. Because of the small sample space in this pressure cell (less than 0.01 mm^3), the sensor coil was limited to a diameter of 150 μm and a maximum of four turns, and the sample size was too small, complicating attempts to increase the sensitivity of the measurement by increasing the number of turns.
In this study, we designed a NiCrAl PCC that suppresses the effect of Joule heating on a sample in pulsed high magnetic fields and established a magnetic susceptibility measurement system based on the LC method for use under multiple extreme conditions. Although the PCC generally generates lower pressures than a DAC, the sensitivity of the measurements can be increased by adjusting the number of turns of the coil because of the larger interior space in the PCC. To demonstrate the effectiveness of this apparatus for the study of GFMs, we examined the magnetization processes of the triangular-lattice antiferromagnet Ba_3CoSb_2O_9, a GFM with T_ N = 3.8 K, at 1.4 K. The magnetic susceptibility was measured under high pressure in pulsed high magnetic fields.
§ PRESSURE CELL DESIGN AND SETUP
Figure <ref>(a) shows a schematic view of the NiCrAl PCC for the magnetic susceptibility measurements in pulsed high magnetic fields. The cylinder of the PCC, pressure-clamp bolts, plugs, and piston backups were made of NiCrAl alloy. The pressure in the sample space was determined from the pressure dependence of the superconducting transition temperature of Sn <cit.>. The pressure cell was inserted into a SQUID magnetometer (Quantum Design, MPMS-XL 7), and the change in the superconducting transition temperature of the Sn manometer was investigated under high pressure. The outer diameter of the cylinder was 8.6 mm, allowing compatibility with the SQUID magnetometer with an inner bore diameter of 9 mm. Moreover, this size was also suitable for insertion into a ^4He cryostat with an inner bore diameter of 10 mm in a liquid-helium bath. The length of the cylinder was 65 mm; therefore, the length of the sample space was 10 mm under maximum pressure.
A cross-sectional view of the sample space in the PCC is shown in Fig.<ref>(b). The pressure medium was Daphne 7373 (Idemitsu Kosan Co., Ltd.). The sample space is filled with Daphne 7373 sealed by NiCrAl plugs with O-rings, Teflon rings, and Cu rings. Cu wires (∼ 100 μm) pass through the stepped hole of the lower plug filled with STYCAST 2850FT to prevent the pressure medium from leaking. At RT, the pressure medium remained in the liquid state up to a pressure of approximately 2 GPa. For this pressure medium, the pressure difference between 4.2 and 300 K is reported to be approximately 0.15 GPa, irrespective of the initial pressures at 300 K<cit.>. The sample is usually molded to a height of 5 mm and a diameter of 1.4 mm or less. A Teflon tube with inner and outer diameters of 1.6 and 1.8 mm, respectively, and a length of approximately 10 mm covers the sample and the sensor coil to prevent direct contact between the sample and the inner wall of the PCC. The Sn manometer is inserted in the Teflon tube. High pressure was applied to the pressure cell through the piston that was clamped using a pressure clamp bolt at RT. In our preliminary experiments, a NiCrAl PCC with inner and outer diameters of 2.0 and 6.0 mm, respectively, generated a pressure of 0.8 GPa for a maximum applied force of nearly 300 kgf. The advantage of this arrangement is that the applied force can be increased by increasing the thickness of the PCC cylinder. In practice, setting the inner diameter to 2.0 mm and expanding the outer diameter to 8.6 mm enabled a maximum applied force of approximately 1000 kgf. Consequently, the NiCrAl PCC has achieved a maximum pressure of P = 2.10 ± 0.02 GPa.
Figure <ref> shows a block diagram of the magnetic susceptibility measurement apparatus for pulsed magnetic fields under high pressure using the PDO. Pulsed magnetic fields were generated using a non-destructive pulse magnet and a capacitor bank installed at the AHMF at Osaka University. The pulse magnet with a bore diameter of 17∼18 mm is immersed in liquid nitrogen to lower the electrical resistance and cool down the magnet after the high-field generation. The pulse magnet was capable of generating pulsed magnetic fields of up to 51 T with a pulse duration of 35 milliseconds (ms). The glass Dewar container consisted of a liquid-helium bath containing the PCC with the sample, a vacuum insulation space, and a liquid nitrogen bath. The sample space can be cooled to a minimum of 1.4 K by pumping on the liquid-^4He bath.
The design of the PDO circuit surrounding the metal shield box, shown in Fig.<ref>, was based on designs in previous reports <cit.>. To obtain an intense PDO signal, the sensor coil (L_ s) of 40-μm-diameter Cu wire was wound directly around the sample to obtain V_ s/V_ c≈ 1 in Eq. 1, and the number of turns was adjusted accordingly. In this study, the sensor coil was wound to ∼25 turns for the small sample (typical size is ∼ 1×1× 5 mm^3) that can be inserted into the PCC. The sensor coil placed in the helium bath was connected to the PDO circuit in the metal shield box at RT with a coaxial cable (Lake Shore Cryotronics Inc., Ultra-Miniature Coaxial Cable type C) of approximately 1 m. The resonance frequency of the entire PDO circuit, including the sensor coil and coaxial cable, depends on the effective inductance (L_ eff) composed of L_ s, L_1, and L_2; the mutual inductance L_ m among the coils; and the connecting coaxial cable (L_ coax). The total effective inductance L_ eff is given by,
L_eff = L_1 ( 1 - L_m^2 / ( L_1 (L_2 + L_s + L_coax) ) ).
In this setup, the resonant frequency in zero field (f_0) was 35∼42 MHz. The output signals (f (μ_0H) = f_0+Δ f) measured in pulsed magnetic fields were amplified, sent to two-stage frequency mixing (f_1, f_2), and filtered to remove high-frequency components. The frequency of the output signal (∼42 MHz) loaded into the digitizer is down-converted to 1.2 MHz. The signal was stored in the digitizer at a rate of 50 MS/s (MS: mega-samples), with one wave consisting of approximately 300 data points, which was sufficient to construct the correct waveform. The average frequency at each point of the discrete magnetic field was obtained from 3∼5 successive waves. Consequently, the actual sampling rate corresponded to approximately 240∼400 kS/s (kS: kilo-samples).
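A small numerical sketch of Eq. <ref> and the resulting resonance frequency is given below; the component values would come from the actual circuit and are left as function arguments, and the down-conversion helper simply assumes that the filters keep the difference component at each mixing stage.

import math

def effective_inductance(L1, L2, Ls, Lcoax, Lm):
    """Effective inductance of the PDO tank circuit (all inductances in H)."""
    return L1 * (1.0 - Lm**2 / (L1 * (L2 + Ls + Lcoax)))

def resonance_frequency(L_eff, C):
    """Zero-field resonance frequency f0 = 1/(2π√(L_eff C)) in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_eff * C))

def downconverted_frequency(f_signal, f1, f2):
    """Frequency after two-stage mixing with local oscillators f1 and f2,
    assuming only the difference component survives each stage's filter."""
    return abs(abs(f_signal - f1) - f2)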
§ EFFECT OF JOULE HEATING
To evaluate the amount of heat transferred from the heated pressure cell to a sample in the presence of a high magnetic field, we investigated the temperature change in the sample space in pulsed magnetic fields utilizing a commercially available RuO_2-tip resistor (KOA Co. Ltd, typical resistance is 560 Ω at RT) as a thermometer. The magnetoresistance of this RuO_2-tip resistor was calibrated in pulsed magnetic fields below 10 K, and the tip resistor was placed in the sample space filled with Daphne 7373 or on the outer wall of the PCC. The PCC was inserted into the glass Dewar container filled with liquid ^4He (∼1.4 K) as shown in Fig.<ref>.
Figure <ref>(a) shows the temperature changes from the initial temperature T_0 = 1.4 K on the outer wall of the PCC in pulsed magnetic fields as a function of time and the profile of this magnetic field, which reached a maximum of 51.0 T with a duration of 35 ms. The temperature on the outer wall of the PCC rapidly increased as soon as the pulsed magnetic field was generated and exceeded the maximum calibration temperature of 10 K at approximately 20 ms. The thermal equilibrium state between 6 and 15 ms in Fig.<ref>(a) may be a temporary suppression of the temperature increase owing to the endothermic effect of the evaporation of liquid ^4He by Joule heating. Figure <ref>(b) shows the temperature changes from 1.4 K and 4.2 K at the sample position inside the PCC in pulsed magnetic fields as a function of time. At the maximum field of 51.0 T, the temperature at the sample position remained at almost 1.4 K until nearly 6.5 ms (approximately 40 T in the field-ascending process). After approximately 6.5 ms, the temperature increased slowly to reach approximately 8 K at 40 ms (approximately zero T). Since the sample is covered with a Teflon tube (the thermal conductivity of Teflon at 2 K is of the order of 10^-4 (J/cm·s·K) <cit.>), and the remaining space is filled with Daphne 7373, the Joule heating from the metal parts of the PCC (the thermal conductivity of NiCrAl at 2 K is of the order of 10^-3 (J/cm·s·K)) is transmitted to the sample position with some delay. Therefore, regardless of the maximum magnetic field, the temperature hardly increased until approximately 6.5 ms, after which it increased slowly. At 40 ms, the temperatures at the sample position were 8, 7, and 6 K for H_ max = 51.0, 41.6, and 27.1 T, respectively. This is because the sweep rate of pulsed magnetic fields (dH/dt) increases with the maximum field, and the Joule heating becomes larger accordingly. At the initial temperature T_0 = 4.2 K, the temperature at the sample position gradually increased until about 2.5 ms (approximately 20 T in the field-ascending process), whereupon it increased rapidly. In pulsed magnetic fields of up to 51.0 T, the period of time after which the temperature at the sample position started to increase was longer at 1.4 K than at 4.2 K. This may be owing to the high thermal conductivity of superfluid helium below 2.17 K that surrounds the PCC immersed in liquid ^4He.
§ STUDY OF A TRIANGULAR-LATTICE ANTIFERROMAGNET
We investigated the magnetic susceptibility of Ba_3CoSb_2O_9, one of the triangular-lattice antiferromagnets (TLAs), using the apparatus developed in this study. The Co^2+ ions with the effective spin S = 1/2 form an equilateral triangular lattice in the ab plane, with both intra- and inter-layer antiferromagnetic exchange interactions <cit.>. Below T_ N = 3.8 K, the magnetic structure at zero field shows a 120^∘ spin structure in the ab plane. For H ∥ ab plane, as shown in Fig.<ref>(a), successive quantum phase transitions occur from the Y coplanar state to the up-up-down (uud) state, and from the uud state to the V state, followed by the V^' state<cit.>. In this experiment, a plate-shaped single-crystal sample of Ba_3CoSb_2O_9 was placed inside a sensor coil with 25 turns, which was wound directly around the sample in the direction perpendicular to the c axis of Ba_3CoSb_2O_9 (inset of Fig.<ref>(b)). The value of f_0 of the PDO was approximately 37 MHz at 4.2 K.
Figure <ref> (b) shows the changes in the resonance frequencies versus the applied magnetic field (Δ f-H) for H ∥ ab plane at 1.4 K and 10 K under ambient pressure without the PCC. The Δ f-H curves are shown for both the field-ascending and field-descending processes. The value of Δ f includes, as a background, the changes in the magnetoresistance of the sensor coil and of the coaxial cable in the magnetic field. The Δ f-H curve at 1.4 K indicates distinct frequency shifts corresponding to the changes in the magnetic susceptibility at H_ c1 = 9.4 T, H_ c2 = 15.7 T, H_ c3 = 22.7 T, and H_ sat = 31.8 T when compared to the Δ f-H curve at 10 K above T_ N.
To obtain the intrinsic magnetic susceptibility of Ba_3CoSb_2O_9, we subtracted the fitting function determined from Δ f at 10 K, for which the difference from the background data is much greater than that at T_ N, from Δ f at 1.4 K, and then adjusted the data such that the value of the subtracted Δ f_ sub above H_ sat is constant at zero. The comparison between the Δ f_ sub-H curve and the field derivative of the magnetization (dM/dH) obtained using the conventional induction method is shown in Fig.<ref>(c). The Δ f_ sub-H curve agrees very well with dM/dH obtained by the induction method<cit.>. The dip between H_ c1 and H_ c2 corresponds to the uud phase, which exhibits a magnetization plateau at one-third of the saturation magnetization in the magnetization curve. The cusps at H_ c3 and H_ sat are associated with the magnetic transition from the V to the V^' phase and the saturation field.
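A schematic of this background subtraction (fit a smooth polynomial to the 10 K trace, subtract it from the 1.4 K trace, and shift the result so that it averages to zero above H_sat) is sketched below; the polynomial order and the field grid are placeholders.

import numpy as np

def subtract_background(h, df_low_t, df_high_t, h_sat, poly_order=5):
    """Background-subtracted frequency shift Δf_sub(H).

    h         : (M,) magnetic field values (T), common to both traces.
    df_low_t  : (M,) Δf at the measurement temperature (e.g. 1.4 K), in Hz.
    df_high_t : (M,) Δf above T_N (e.g. 10 K), used as the background, in Hz.
    h_sat     : saturation field (T); Δf_sub is shifted to average zero above it.
    """
    h = np.asarray(h, dtype=float)
    coeffs = np.polyfit(h, np.asarray(df_high_t, dtype=float), poly_order)
    df_sub = np.asarray(df_low_t, dtype=float) - np.polyval(coeffs, h)
    above_sat = h > h_sat
    if np.any(above_sat):
        df_sub -= df_sub[above_sat].mean()
    return df_sub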
Figure <ref>(a) shows the Δ f_ sub-H curves of Ba_3CoSb_2O_9 for H ∥ ab plane at 1.4 K in pulsed magnetic fields of up to 51 T under pressures of up to 1.97 GPa. The Δ f_ sub-H curve at ambient pressure in the PCC agrees remarkably well with that measured without the PCC, as shown in Figs. <ref>(a) and (b), but the noise in the former case exceeds that in the latter. This was probably caused by the poor connection between the sensor coil and the Cu wires passing through the stepped hole of the lower plug. Since the pulsed magnetic field with a maximum of 51 T reaches approximately 40 T at 6.5 ms after the start of field generation, Δ f_ sub up to H_ sat is not affected by the increase in the sample temperature as a result of Joule heating.
With increasing pressure up to 1.97 GPa, the peak at H_ c2 shifted to a higher magnetic field, whereas the peaks at H_ c1 and H_ sat stayed almost in place. The position of the peak at H_ c3 does not change with pressure, but this peak became obscured by the background and was too weak to detect above 1.58 GPa.
Based on the pressure dependence of H_ sat, the intra-layer antiferromagnetic exchange interactions did not change significantly. Therefore, the expansion of the uud phase may be accompanied by an increase in the effects of thermal and/or quantum fluctuations caused by the relative decrease of the interplanar antiferromagnetic exchange interactions, which enhances the two-dimensionality of Ba_3CoSb_2O_9. Another possibility may be a tilting of the sample away from the ab plane toward the c axis relative to the magnetic field, caused by the application of pressure <cit.>.
Detailed clarification of the pressure effect on the magnetism of Ba_3CoSb_2O_9 for H ∥ ab plane would require extending the pressure region beyond 2.1 GPa. The PCC in this study was designed to be used in a pulse magnet with a bore diameter of 17∼18 mm. We plan to develop a new PCC with a maximum pressure of 4 GPa by decreasing the inner diameter of the PCC utilized in this study. However, this would shorten the time of heat transfer from the inner wall of the pressure cell to the sample position, causing the temperature in the sample space to increase at lower magnetic fields than in the present study. If we use a pulse magnet with a duration of approximately 200 ms, as planned, the magnetic-field sweep rate in the field-ascending process would be lowered to approximately 1/5 of that of the pulse magnet used in this study. This long duration might suppress the increase of the sample temperature in the PCC, and thus magnetic susceptibility measurements under pressures higher than 2.1 GPa could be conducted in high magnetic fields.
§ SUMMARY
In summary, we developed an apparatus for magnetic-susceptibility measurements in pulsed magnetic fields of up to 51 T under pressures of up to 2.1 GPa. The temperature at the sample position in our PCC changed only slightly up to approximately 40 T in the field-ascending process in pulsed high magnetic fields with a maximum of 51 T at 1.4 K. We performed magnetic susceptibility measurements of the triangular-lattice antiferromagnet Ba_3CoSb_2O_9 in pulsed high magnetic fields under high pressures by the LC method using the PDO technique. We succeeded in observing changes in the resonance frequency that correspond to the field derivative of the magnetization up to fields above the saturation field.
We would like to thank D. Yamamoto for useful discussions. This study was supported by the Sasakawa Scientific Research Grant from the Japan Science Society and JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2125. This work was supported by JSPS KAKENHI Grant Numbers JP17H06137, JP17K18758, JP21H01035 and 22K03511.
|
http://arxiv.org/abs/2307.13650v1
|
20230705085645
|
RamanSPy: An open-source Python package for integrative Raman spectroscopy data analysis
|
[
"Dimitar Georgiev",
"Simon Vilms Pedersen",
"Ruoxiao Xie",
"Álvaro Fernández-Galiana",
"Molly M. Stevens",
"Mauricio Barahona"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cs.MS",
"physics.data-an"
] |
1]Dimitar Georgiev
2,3]Simon Vilms Pedersen
2]Ruoxiao Xie
2]Álvaro Fernández-Galiana
[2]Molly M. [email protected]
[4]Mauricio [email protected]
[1]Department of Computing, Imperial College London,
United Kingdom
[2]Department of Materials, Department of Bioengineering & Institute of Biomedical Engineering, Imperial College London,
United Kingdom
[3]Present address: University of Southern Denmark,
Odense,
Denmark
[4]Department of Mathematics, Imperial College London,
United Kingdom
Raman spectroscopy is a non-destructive and label-free chemical analysis technique, which plays a key role in the analysis and discovery cycle of various branches of science. Nonetheless, progress in Raman spectroscopic analysis is still impeded by the lack of software, methodological and data standardisation, and the ensuing fragmentation and lack of reproducibility of analysis workflows. To address these issues, we introduce RamanSPy, an open-source Python package for Raman spectroscopic research and analysis. RamanSPy provides a comprehensive library of ready-to-use tools for spectroscopic analysis, which streamlines day-to-day tasks, integrative analyses, as well as novel research and algorithmic development. RamanSPy is modular and open source, not tied to a particular technology or data format, and can be readily interfaced with the burgeoning ecosystem for data science, statistical analysis and machine learning in Python.
RamanSPy: An open-source Python package for integrative Raman spectroscopy data analysis
================================================================================
Raman spectroscopy (RS) is a powerful sensing modality based on inelastic light scattering, which provides qualitative and quantitative chemical analysis with high sensitivity and specificity <cit.>. RS yields a characterisation of the vibrational profile of molecules, which can help elucidate the composition of chemical compounds, biological specimens and materials <cit.>. In contrast to most conventional technologies for (bio)chemical characterisation (e.g., staining, different omics, fluorescence microscopy and mass spectrometry), RS is both label-free and non-destructive, thereby allowing the acquisition of rich biological and chemical information without compromising the structural and functional integrity of the probed samples. This advantage has enabled a broad range of applications of RS in biomedical and pharmaceutical research <cit.>, including in the imaging of cells and tissues <cit.>, the chemical analysis of drug compounds <cit.>, and the detection of disease <cit.>.
An area of topical interest is the frontier of Raman spectroscopy, chemometrics and artificial intelligence (AI), with its promise of more autonomous, flexible and data-driven RS analytics <cit.>. There has been a recent surge in the adoption of AI methods in Raman-based research <cit.>, with applications to RS now spanning domains as broad as the identification of pathogens and other microbes <cit.>; the characterisation of chemicals, including minerals <cit.>, pesticides <cit.> and other analytes <cit.>; the development of novel diagnostic platforms <cit.>; as well as the application of techniques from computer vision for denoising and super-resolution in Raman imaging <cit.>.
As new hardware, software and data acquisition RS technologies continue to emerge <cit.>, there is a pressing need for an integrated RS data analysis environment, which facilitates the development of pipelines, methods and applications, and bolsters the use of RS in biomedical research. Yet, the full deployment of RS and its capabilities is still hindered by practical factors stemming from the restrictive, functionally disparate, and highly encapsulated nature of current commercial software for RS data analysis.
RS data analysis often operates within proprietary software environments and data formats, which have induced methodological inconsistencies and reduced cross-platform and benchmarking efforts, with growing concerns around reproducibility. These restrictions have also
hampered the adoption of new AI technologies into the field <cit.>. As a consequence, researchers increasingly resort to developing in-house scripts for RS analysis in Python <cit.>, further adding to methodological fragmentation and lack of standardisation <cit.>.
In response to these challenges, we have developed RamanSPy - a modular, open-source framework for integrated Raman spectroscopy analytics in Python. RamanSPy is designed to systematise day-to-day workflows, enhance algorithmic development and validation, and accelerate the adoption of novel AI technologies into the RS field.
Firstly, RamanSPy serves as a platform for general-purpose RS analytics supporting the RS data life cycle by providing a suite of ready-to-use modules for data loading, preprocessing, analysis and visualisation. By design, these functionalities are not tied to any specific technology or data type, thereby allowing integrative and transferable cross-platform analyses.
Secondly, RamanSPy addresses challenges in data preprocessing by facilitating the compilation of reproducible pipelines to streamline and automatise preprocessing protocols.
Thirdly, RamanSPy helps bridge the gap between RS data and state-of-the-art AI technologies within the extensive machine learning (ML) ecosystem in Python. Complemented by direct access to Raman datasets, preprocessing protocols and performance metrics, this provides the foundation for AI model development and benchmarking.
The codebase of RamanSPy is hosted at <https://github.com/barahona-research-group/RamanSPy> with extended documentation (<https://ramanspy.readthedocs.io>), which includes tutorials and example applications, and details about the real-world research applications presented in this paper.
§ RESULTS
RamanSPy as a platform for general Raman spectroscopy analytics
RamanSPy is based on a modular, object-oriented programming (OOP) infrastructure, which streamlines the RS data analysis life cycle (Fig. <ref>a) and allows users to compile diverse analysis workflows with a few lines of reusable, user-friendly code (Fig. <ref>b). The framework adopts a scalable array-based data representation, which accommodates different spectroscopic modalities, including single-point spectra, Raman imaging data, and volumetric scans. Experimental data can be loaded through custom loaders built into RamanSPy or through standard tools available in Python. The data representation functions as a common data container that defines the interface between RS data management and manipulation within RamanSPy, allowing us to unify data standards across setups and vendors, independent of instrumental origin and acquisition modality.
RamanSPy also provides an extensive toolbox for preprocessing, analysis and visualisation. The preprocessing suite includes techniques for denoising, baseline correction, cosmic spike removal, normalisation and background subtraction, among others. Likewise, the analysis toolbox includes modules for decomposition (useful for dimensionality reduction), clustering and spectral unmixing. RamanSPy also includes a set of data visualisation tools.
All these modules are organised into an extensible class structure, which standardises their application across projects and datasets to facilitate transferable analysis workflows.
We showcase the core features of RamanSPy by
analysing volumetric Raman spectroscopic data from a human leukaemia monocytic (THP-1) cell <cit.> (Fig. <ref>). The aim is to investigate the cell phenotype in a label-free manner using RS and methods from chemometrics. We load the data using built-in tools, and perform a spectral preprocessing protocol comprising spectral cropping to the fingerprint region (700–1800 cm^-1), cosmic spike removal, denoising, baseline correction and normalisation (see SI).
Using the visualisation tools in the package, we inspect data quality (Fig. <ref>b) and perform initial exploratory analysis by examining, e.g., data slices across wavenumber bands (Fig. <ref>c). The analysis proceeds to spectral unmixing based on: (i) N-FINDR <cit.> for endmember detection, and (ii) fully constrained least squares (FCLS) <cit.> for component quantification. This process is exploited to demix signal contributions from different cellular components and study their morphological organisation within the THP-1 cell. Following the peak assignment in <cit.>, we distinguish endmember components related to lipids (band 1008 cm^-1), nucleic acid (band 789 cm^-1), cytoplasm (bands 1066, 1134, 1303, 1443 and 1747 cm^-1), and the background (Fig. <ref>e). Finally, we produce fractional abundance reconstructions based on the extracted endmembers, which we can examine on a single-layer level (Fig. <ref>f) and across the entire volume (Fig. <ref>g) to localise cellular organelles within the cell.
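A condensed sketch of this workflow is given below. It follows the RamanSPy API described in the Methods section; the cropping, despiking and MinMax-normalisation class names, as well as the constructor arguments, are assumptions for illustration and may differ from the actual package interface:
[language=Python, breaklines]
import ramanspy

volume = ramanspy.load.witec("thp1_scan_001.mat")   # path is illustrative

pipeline = ramanspy.preprocessing.Pipeline([
    ramanspy.preprocessing.misc.Cropper(region=(700, 1800)),   # assumed class name
    ramanspy.preprocessing.despike.WhitakerHayes(),            # assumed class name
    ramanspy.preprocessing.denoise.SavGol(window_length=7, polyorder=3),
    ramanspy.preprocessing.baseline.ASLS(),
    ramanspy.preprocessing.normalise.MinMax(),                 # assumed class name
])
volume = pipeline.apply(volume)

# N-FINDR endmember detection followed by FCLS abundance estimation.
unmixer = ramanspy.analysis.unmix.NFINDR(n_endmembers=5)
abundance_fractions, endmembers = unmixer.apply(volume)
ramanspy.plot.spectra(endmembers)
ramanspy.plot.show()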
RamanSPy enables automated pipelining of spectral preprocessing protocols
Experimental RS data is susceptible to non-specific signal artefacts (e.g., cosmic rays, autofluorescence background, variability in instrumentation), which can severely affect downstream analyses. Preprocessing is therefore a critical step in any spectroscopic analysis workflow <cit.>.
Yet, due to a lack of standardisation and frameworks for general-purpose pipelining <cit.>, researchers tend to utilise variable preprocessing protocols, often dispersed across different software systems, thus affecting reproducibility and validation <cit.>.
To facilitate the creation of reproducible protocols, RamanSPy incorporates a pipelining infrastructure, which systematises the process of creating, customising and executing preprocessing pipelines (Fig. <ref>a). Users can use a specialised class, which defines a generic, multi-layered preprocessing procedure, to assemble pipelines from selected built-in preprocessing modules or other in-house methods.
To reduce overhead, the constructed pipelines are designed to function exactly as any single method, i.e., they are fully compatible with the rest of the modules and data structures in the package. Furthermore, pipelines can be easily saved, reused and shared to foster the development of a repository of preprocessing protocols. As a seed to this repository, RamanSPy provides a library of assembled preprocessing protocols (custom pre-defined, or adapted from the literature <cit.>), which users can access and exploit.
To illustrate the pipelining functionalities, we use RamanSPy to construct three preprocessing protocols
by compiling selected methods in the desired order of execution, and applying them out-of-the-box to data loaded into the platform (Fig. <ref>c-e). We use them to preprocess Raman spectroscopic data from <cit.> (Fig. <ref>b).
Note how the three pipelines yield substantially different results, reinforcing the importance of consistency in the selection of preprocessing protocols. Pipeline II was deemed the most robust, and consequently added to the protocols library in RamanSPy as the default.
RamanSPy facilitates AI integration and validation of next-generation Raman data analytics
To help accelerate the adoption of AI technologies for RS analysis, RamanSPy is endowed with a permeable architecture, which streamlines the interface between Raman spectroscopic data and the burgeoning ML ecosystem in Python. This is complemented by tools for benchmarking, such as datasets and performance metrics, which support the evaluation of new models and algorithms. We show below two examples of RamanSPy's capabilities for ML integration and benchmarking.
First, RamanSPy allows the seamless integration of standard Python AI/ML methods (e.g., from scikit-learn <cit.>, PyTorch <cit.> and tensorflow <cit.>) as tools for RS analysis (Fig. <ref>a).
As an illustration, we use RamanSPy to construct a deep learning denoising procedure based on the one-dimensional ResUNet model - a fully convolutional UNet neural network with residual connections <cit.>. To do this, we simply wrap the pre-trained neural network (trained on spectra from MDA-MB-231 breast cancer cells, available at <https://github.com/conor-horgan/DeepeR>) within RamanSPy as a custom denoising method. Once wrapped, the denoiser is automatically compatible with the rest of RamanSPy and can be readily employed for different applications. For instance, we replicate the results in <cit.>, and show in Fig. <ref>b-c that the application of this deep-learning denoiser to the low signal-to-noise ratio (SNR) test set from <cit.> consistently outperforms the commonly-used Savitzky-Golay filter <cit.>, as quantified by various metrics also coded within RamanSPy (e.g., mean squared error (MSE), spectral angle distance (SAD) <cit.> and spectral information divergence (SID) <cit.>).
Applying this pipeline to new data only involves changing the data source.
Taking advantage of this transferability, we test the denoiser on unseen volumetric Raman data from another cell line (THP-1 <cit.>), with added Gaussian noise (see SI).
In this case, Fig. <ref>d-e shows improved performance especially according to the MSE metric, which is dependent on normalisation, but with lower significance according to scale-invariant and information-theoretic metrics, also available in RamanSPy. This example emphasises the importance of incorporating robust validation criteria within data analysis workflows.
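As a sketch of how such a comparison is scored, the built-in metrics can be applied directly to pairs of spectra (`denoised` and `target` are illustrative names for a denoiser output and its ground-truth counterpart):
[language=Python, breaklines]
import ramanspy

mse = ramanspy.metrics.MSE(denoised, target)
sad = ramanspy.metrics.SAD(denoised, target)
sid = ramanspy.metrics.SID(denoised, target)
print(f"MSE: {mse:.4f}  SAD: {sad:.4f}  SID: {sid:.4f}")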
Secondly, the data management backbone of RamanSPy ensures a direct data flow to the rest of the Python ecosystem, i.e., data can be loaded, preprocessed, and analysed in RamanSPy and then exported to conduct further modelling and analysis elsewhere (Fig. <ref>a). As an example application, we perform AI-based bacteria identification using Raman measurements <cit.> from 30 bacterial and yeast isolates (Fig. <ref>b). After loading and exploring the spectra with RamanSPy, we interface the data with the lazypredict Python package <cit.> and benchmark 28 different ML classification models (including logistic regression, support vector machines and decision trees) on the task of predicting the species from the spectrum. The models were trained on a high-SNR dataset (100 spectra per isolate) and tested on an unseen high-SNR testing set of the same size. Our benchmarking analysis in Fig. <ref>c finds logistic regression as the best-performing model, achieving a classification accuracy of 79.63% on the species-level classification task (Fig. <ref>d), and 94.63% for antibiotic treatment classification (Fig. <ref>e).
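A sketch of this species-identification benchmark is shown below (the dataset-loader call follows the Methods section; the "test" split argument, the path keyword and the use of the spectral_data attribute as the feature matrix are assumptions made for illustration):
[language=Python, breaklines]
import ramanspy
from lazypredict.Supervised import LazyClassifier

# High-SNR reference and test sets (30 isolates, 100 spectra per isolate).
X_train, y_train = ramanspy.datasets.bacteria(dataset="train", folder="<PATH>")
X_test, y_test = ramanspy.datasets.bacteria(dataset="test", folder="<PATH>")

clf = LazyClassifier(verbose=0, ignore_warnings=True)
models, predictions = clf.fit(X_train.spectral_data, X_test.spectral_data, y_train, y_test)
print(models)   # table of classifiers ranked by accuracy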
To further assist validation against previous results, RamanSPy provides access to a library of curated datasets, which can be integrated into analysis and benchmarking workflows. This lays the foundation for a common repository of RS data and reduces barriers to data access, especially for ML teams with limited access to RS instruments <cit.>. The dataset library in RamanSPy already includes data loaders for Raman data from bacterial species <cit.>, cell lines <cit.>, COVID-19 samples <cit.>, multi-instrument Surface Enhanced Raman Spectroscopy (SERS) measurements of adenine samples <cit.>, wheat lines <cit.>, minerals <cit.>, and will continue to be expanded.
§ DISCUSSION
In this paper, we have introduced RamanSPy - a computational framework for integrative Raman spectroscopic data analysis. RamanSPy offers a comprehensive collection of tools for spectroscopic analysis designed to systematise the RS data analysis life cycle, reducing typical overheads of analysis workflows and improving methodological standardisation.
The package also lays the foundations of a common repository of standardised methods, protocols and datasets, which users can readily access and exploit within the framework to conduct different benchmarking studies.
Furthermore, RamanSPy is fully compatible
with frameworks for data science and machine learning in Python, thereby facilitating the adoption and validation of advanced AI technologies for next-generation RS analysis. Lastly, we remark that, while our focus here has been on Raman spectroscopy, many of the tools in RamanSPy are of broad applicability to other vibrational spectroscopy techniques, including infrared (IR) spectroscopy.
§ METHODS
§.§ Installation
RamanSPy has been deposited in the Python Package Index (<https://pypi.org/project/ramanspy>). This means it can be directly installed via the common package installer pip for Python:
[language=bash]
pip install ramanspy
To access the functionalities of the package after installation, users only need to import RamanSPy in their Python scripts. One can import the whole package:
[language=Python, breaklines]
import ramanspy
# or import ramanspy as rp
or individual modules or methods:
[language=Python, breaklines]
# individual modules
from ramanspy import load, preprocessing
# individual methods
from ramanspy.analysis.unmix import NFINDR
§.§ Core infrastructure
Data management
Data in RamanSPy is represented by a set of custom data container classes based on scalable, computationally efficient array programming <cit.>, which correspond to different spectroscopic modalities. This includes the generic SpectralContainer class, as well as the more specialised Spectrum, SpectralImage and SpectralVolume classes representing single-point spectra (1D), imaging data (3D) and volumetric data (4D), respectively. These classes define data-specific information and behaviour in the background to allow a smooth, user-friendly experience, regardless of the data of interest.
The containers can be initialised by providing the corresponding intensity data, the spectral axis (in cm^-1) and other relevant (meta) data, which will become properties of the constructed object. For instance:
[language=Python, breaklines]
raman_spectrum = ramanspy.Spectrum(intensity_data, spectral_axis, *args, **kwargs)
raman_image = ramanspy.SpectralImage(intensity_data, spectral_axis, *args, **kwargs)
Once created, data containers can be manipulated, visualised, saved and loaded as needed using the built-in tools in RamanSPy.
Note that for the most part, users would not need to manually populate these containers. Instead, they can take advantage of the data loading functionalities that RamanSPy provides.
Data loading
To support data loading, RamanSPy offers easy-to-use data loaders compatible with experimental Raman spectroscopic data from a range of instrumental vendors in the area. These loaders - available within ramanspy.load - automatically parse relevant data files and return the appropriate spectral container. As an example, users can load MATLAB files exported from WITec's ProjectFOUR/FIVE software using the following command:
[language=Python, breaklines]
raman_object = ramanspy.load.witec(<PATH>)
A full list of the data loaders built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/loading.html>.
Raman data can also be loaded via established data-loading tools in Python. For instance, one can use pandas' csv loader to load a spectrum from a .csv file with two columns storing the intensity data and the spectral axis by using:
[language=Python, breaklines]
import pandas as pd
data = pd.read_csv(csv_filename)
raman_spectrum = ramanspy.Spectrum(data["<intensity_column>"], data["<axis_column>"])
Spectral preprocessing
Preprocessing logic in RamanSPy is defined by the PreprocessingStep class, which implements most of the necessary preprocessing infrastructure in the background to ensure a smooth, data-agnostic experience via a single point of contact, the apply() method.
Yet, as with data loading, for the most part, users are not expected to use this class to manually implement and optimise such preprocessing methods themselves. Instead, the package provides a comprehensive toolbox of ready-to-use preprocessing methods, which users can access, customise and employ to compile a wide variety of preprocessing procedures. These preprocessing procedures are given as predefined classes within ramanspy.preprocessing which extend the PreprocessingStep class. To use these built-in methods, users need to create an instance of the selected technique. For instance:
[language=Python, breaklines]
denoiser = ramanspy.preprocessing.denoise.SavGol(*args, **kwargs)
baseline_corrector = ramanspy.preprocessing.baseline.ASLS(*args, **kwargs)
normaliser = ramanspy.preprocessing.normalise.MaxIntensity(*args, **kwargs)
Note that RamanSPy offers full control over relevant parameters, which can be supplied during initialisation via the *args and **kwargs arguments.
As the methods inherit all operational logic defined within the parent PreprocessingStep class, they can be directly accessed and used on any data loaded in the framework through their apply() method:
[language=Python, breaklines]
preprocessed_objects = denoiser.apply(<spectral object or collection of spectral objects>)
preprocessed_objects = baseline_corrector.apply(<spectral object or collection of spectral objects>)
A full list of the methods for spectral preprocessing built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/preprocessing.html>.
If needed, users can also incorporate any in-house method into RamanSPy by manually creating instances of the PreprocessingStep class which wrap the given method. This can be done as follows:
[language=Python, breaklines]
def preprocessing_func(intensity_data, spectral_axis, *args, **kwargs):
# Preprocess intensity_data and spectral_axis
...
return updated_intensity_data, updated_spectral_axis
# wrapping the function together with the relevant *args and **kwargs
custom_preprocessing_method = ramanspy.preprocessing.PreprocessingStep(preprocessing_func, *args, **kwargs)
Then, the custom preprocessing method is fully compatible with the rest of RamanSPy's functionalities and out-of-the-box applicable to any data integrated within the package via its apply() method:
[language=Python, breaklines]
custom_preprocessing_method.apply(<spectral object or collection of spectral objects>)
Note that this class structure implies that these instances can then be saved (e.g. as pickle files) and, therefore, reused and shared as required afterwards.
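For example, a configured method (or pipeline) could be persisted with Python's standard pickle module (a minimal sketch, assuming the wrapped function is defined at module level so that it is picklable; the file name is illustrative):
[language=Python, breaklines]
import pickle

# Save the configured preprocessing method to disk.
with open("custom_method.pkl", "wb") as f:
    pickle.dump(custom_preprocessing_method, f)

# Reload it later (or on another machine) and apply it as usual.
with open("custom_method.pkl", "rb") as f:
    reloaded_method = pickle.load(f)
preprocessed_objects = reloaded_method.apply(raman_spectrum)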
Spectral analysis
As with preprocessing classes, users can access any built-in analysis method (available within the ramanspy.analysis sub-module) by creating an object instance of the corresponding class (again - with full control over relevant parameters) as follows:
[language=Python, breaklines]
nmf = ramanspy.analysis.decompose.NMF(*args, **kwargs)
kmeans = ramanspy.analysis.cluster.KMeans(*args, **kwargs)
unmixer = ramanspy.analysis.unmix.NFINDR(*args, **kwargs)
Once created, instances can be similarly accessed via their apply() method on any data loaded in RamanSPy.
[language=Python, breaklines]
cluster_maps, cluster_centres = kmeans.apply(<spectral object or collection of spectral objects>)
abundance_fractions, endmembers = unmixer.apply(<spectral object or collection of spectral objects>)
A full list of the methods for spectral analysis built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/analysis.html>.
Visualisation
The package also provides various visualisation tools available within the ramanspy.plot sub-module. As an example, one can plot spectra using the spectra function:
[language=Python, breaklines]
ramanspy.plot.spectra(<spectra or collection of spectra>)
ramanspy.plot.show() # or plt.show() after import matplotlib.pyplot as plt
Note that these functions are highly customisable. This can be done by providing relevant parameters to control the plot generation, as well as through matplotlib's customisation workflow.
[language=Python, breaklines]
import matplotlib.pyplot as plt
plt.figure(figsize = (5, 5))
ax = ramanspy.plot.spectra(<spectra or collection of spectra>, title="<str>", label="<str or list[str]>")
ax.set_ylabel("<str>") # adding a label to the y-axis
plt.show() # or ramanspy.plot.show()
A full list of the methods for data visualisation built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/plot.html>.
§.§ Preprocessing pipelines
Pipelining behaviour is defined by the Pipeline class in RamanSPy, which ensures that pipelines are accessible, simple to use and fully compatible with the rest of the package.
Creating a custom preprocessing pipeline
To assemble a preprocessing pipeline, one simply needs to stack relevant methods (built-in or custom) into the intended order of execution. For instance:
[language=Python, breaklines]
preprocessing_pipeline = ramanspy.preprocessing.Pipeline([
ramanspy.preprocessing.denoise.SavGol(*args, **kwargs),
ramanspy.preprocessing.baseline.ASLS(*args, **kwargs),
ramanspy.preprocessing.normalise.MaxIntensity(*args, **kwargs),
custom_preprocessing_method(*args, **kwargs) # custom in-house method
])
Constructed pipelines can then be applied exactly as single methods via their apply() method to any data loaded within RamanSPy.
[language=Python, breaklines]
preprocessed_objects = preprocessing_pipeline.apply(<spectral object or collection of spectral objects>)
As pipelines in RamanSPy are objects, they can also be directly saved in a convenient file format, such as pickle files. As such, they can then be reloaded, reused and shared as needed.
Access a predefined preprocessing pipeline
RamanSPy also provides a collection of built-in preprocessing pipelines.
To access them, one can select the desired protocol from ramanspy.preprocessing.protocols as follows:
[language=Python, breaklines]
preprocessing_pipeline = ramanspy.preprocessing.protocols.PROTOCOL_X
A pre-defined Pipeline instance will be returned, which can similarly be employed directly through its apply() method.
A full list of the protocols for spectral preprocessing built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/preprocessing.html#established-protocols>.
§.§ AI integration
Integrate AI methods into RamanSPy
To integrate new techniques for spectral preprocessing and analysis, users can take advantage of the extensible architecture of RamanSPy and wrap models and algorithms into custom classes. For instance, one can create a new denoiser method based on a PyTorch model for denoising by simply creating a function, which defines how the model can be used to preprocess a generic intensity data array, and then wrapping the method within a PreprocessingStep instance.
[language=Python, breaklines]
import numpy as np
import torch

def nn_preprocessing(intensity_data, wavenumber_axis):
    # Flatten any spatial dimensions so that the pre-trained `model` receives a batch of spectra.
    flat = intensity_data.reshape(-1, intensity_data.shape[-1])
    output = model(torch.Tensor(flat).unsqueeze(1)).cpu().detach().numpy()
    # Restore the original shape of the data.
    return np.squeeze(output).reshape(intensity_data.shape), wavenumber_axis

nn_denoiser = ramanspy.preprocessing.PreprocessingStep(nn_preprocessing)
Integrated methods are automatically rendered fully compatible with the rest of RamanSPy's functionalities in the background, so one can simply use the apply() method of the constructed denoiser to preprocess any data loaded within the framework, just as with any built-in preprocessing class.
Export data from RamanSPy to AI frameworks
The data management core of RamanSPy allows a direct interface with the entire Python ecosystem, including frameworks for statistical modelling, machine learning and deep learning. To do that, users can simply feed relevant data from RamanSPy to functions and tools they want to use elsewhere. For instance, one can pass the intensity data stored in a spectral container to a specific model from the scikit-learn <cit.> framework for statistical and ML modelling directly via its fit() method:
[language=Python, breaklines]
model.fit(spectral_container.spectral_data)
§.§ Datasets
To access the Raman spectroscopic datasets available in RamanSPy, users can employ custom data-loading methods built into the package under ramanspy.datasets. These would automatically parse the relevant data into the corresponding spectral container. For instance, one can load the bacteria data from <cit.> using the following function:
[language=Python, breaklines]
data_container, labels = ramanspy.datasets.bacteria(dataset="train", <PATH>)
Note that, depending on where each dataset was deposited and the license it was deposited under, some of these methods will automatically download the given dataset, whereas others may require the manual download of the data. Users are pointed to the documentation of each method for instructions on how to properly load each dataset.
A full list of the datasets built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/datasets.html>.
§.§ Metrics
Users can likewise readily access relevant spectroscopic metrics, such as MSE, SAD and SID, from ramanspy.metrics. These can be used to measure the similarity between spectra by using the respective method:
[language=Python, breaklines]
ramanspy.metrics.SID(spectrum_I, spectrum_II)
A full list of the metrics built into RamanSPy is available as part of the documentation of the package at <https://ramanspy.readthedocs.io/en/latest/metrics.html>.
§ DECLARATIONS
§.§ Data availability
All data used in this article are previously published open-access data that have been deposited by the respective authors online. Instructions on how to access, download and load the datasets provided in RamanSPy are available in the documentation at <https://ramanspy.readthedocs.io/en/latest/datasets.html>.
§.§ Code availability
The codebase of RamanSPy is open-source and hosted on GitHub at <https://github.com/barahona-research-group/RamanSPy>. The package can be installed via pip using 'pip install ramanspy'. Documentation, including detailed tutorials and examples, is available at <https://ramanspy.readthedocs.io>. The scripts used to produce the analysis results presented in this paper are also provided as executable Jupyter Notebook examples at <https://github.com/barahona-research-group/RamanSPy/tree/3dd2c1e09420c5ac473a72ebd6ed06a91c30a85c/paper_reproducibility> and as part of the documentation of RamanSPy at <https://ramanspy.readthedocs.io/en/latest/auto_examples/index.html>.
§.§ Acknowledgments
D.G. is supported by UK Research and Innovation [UKRI Centre for Doctoral Training in AI for Healthcare grant number EP/S023283/1].
S.V.P. gratefully acknowledges support from the Independent Research Fund Denmark (0170-00011B).
R.X. and M.M.S. acknowledge support from the Engineering and Physical Sciences Research Council (EP/P00114/1 and EP/T020792/1).
A.F.G. acknowledges support from the Schmidt Science Fellows, in partnership with the Rhodes Trust.
M.M.S. acknowledges support from the Royal Academy of Engineering Chair in Emerging Technologies award (CiET2021\\94).
M.B. acknowledges support by the EPSRC under grant EP/N014529/1, funding the EPSRC Centre for Mathematics of Precision Healthcare at Imperial College London, and under grant EP/T027258/1.
The authors thank Dr Akemi Nogiwa Valdez for proofreading and data management support.
Figures were created with BioRender (<www.biorender.com>).
§ SUPPLEMENTARY INFORMATION
Several of our examples are based on data from <cit.> which provided volumetric RS scans across 4 distinct THP-1 cell lines. Here, we only used the first scan (scan '001').
§.§ Cell phenotyping via spectral unmixing.
The raw THP-1 data from <cit.>
used for the spectral unmixing procedure in Fig. <ref> was re-exported as MATLAB files from the WITec Project FIVE software. The MATLAB files were then loaded into RamanSPy, followed by spectral preprocessing with a protocol consisting of: (1) spectral cropping to the 700-1800 cm^-1 region; (2) cosmic-ray removal with the algorithm in <cit.>; (3) denoising with a Savitzky-Golay filter (polynomial order 3, kernel size 7) <cit.>; (4) baseline correction with asymmetric least squares <cit.>; and (5) global MinMax normalisation to the interval [0,1].
After preprocessing, we performed spectral unmixing in RamanSPy using N-FINDR <cit.> (number of endmembers set to 5) and FCLS <cit.>. We concluded the analysis by visualising the results corresponding to the top 4 endmembers.
§.§ Preparing THP-1 data for deep learning denoising.
The denoising analysis on the data in Fig. <ref>d-e was performed on the middle depth layer (fifth layer out of 10) of the THP-1 volumetric scan from <cit.>. This layer consisted of a 40 × 40 image scan, i.e., 1600 spectra. To be consistent with the original paper <cit.>, we conducted exactly the same preprocessing protocol described there. Namely, we utilised the WITec Project FIVE software to crop the data to the region 500-1800cm^-1, followed by baseline correction using the ‘shape’ method with α=500.
To assess the performance of the deep learning denoiser, we created `low-SNR spectra' by adding Gaussian noise to the original spectra.
Each spectrum was MinMax-normalised to the range 0–1 and Gaussian noise with a standard deviation σ=0.15 was added. This resulted in spectra of similar noise levels to those in <cit.>.
These noisy samples were used as the input to the model and the uncontaminated data was taken as ground-truth targets.
We then MinMax-normalised each spectrum (both inputs and targets) and compared the performance of the neural network denoiser against six Savitzky-Golay filters <cit.>. To make all models comparable, and to correct for potential artefacts of how the model was trained originally in <cit.>, all denoising metrics were computed after MinMax-normalising the denoised outputs of each denoiser to the range 0–1 again.
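A numpy sketch of this noise-injection and normalisation procedure is given below (`clean_spectra` stands for the preprocessed layer of shape (n_spectra, n_bands) and `denoiser` for the model under test; both names are placeholders):
[language=Python, breaklines]
import numpy as np

def minmax(spectra):
    # Normalise each spectrum (row) to the range 0-1.
    mins = spectra.min(axis=-1, keepdims=True)
    maxs = spectra.max(axis=-1, keepdims=True)
    return (spectra - mins) / (maxs - mins)

rng = np.random.default_rng(0)
targets = minmax(clean_spectra)                           # ground-truth spectra
inputs = targets + rng.normal(0.0, 0.15, targets.shape)   # low-SNR inputs, sigma = 0.15

# Metrics are computed after re-normalising each denoised output to 0-1 again.
outputs = minmax(denoiser(inputs))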
|
http://arxiv.org/abs/2307.02865v1
|
20230706090458
|
PLIERS: a Popularity-Based Recommender System for Content Dissemination in Online Social Networks
|
[
"Valerio Arnaboldi",
"Mattia Giovanni Campana",
"Franca Delmastro",
"Elena Pagani"
] |
cs.IR
|
[
"cs.IR",
"cs.LG"
] |
PLIERS: a Popularity-Based Recommender System for Content Dissemination in Online Social Networks
Valerio Arnaboldi^1, Mattia G. Campana^1,2, Franca Delmastro^1,
and Elena Pagani^2,1
^1IIT-CNR - Via G. Moruzzi 1, 56124, Pisa, ITALY
^2Computer Science Department, University of Milano, Milano, ITALY
{v.arnaboldi, m.campana, f.delmastro}@iit.cnr.it, [email protected]
August 1, 2023
=================================================================================================================================================================================================================================================================================
§ INTRODUCTION
In this paper, we present PLIERS (PopuLarity-based ItEm Recommender System), a novel tag-based recommender system (TBRS) <cit.> based on folksonomies <cit.>.
It relies on the assumption that a user is mainly interested in items and tags with popularity similar to that of the items she already owns, and that the similarity between items/tags can also highlight a semantic relationship between them.
To evaluate PLIERS, we performed a set of experiments on real OSN datasets, demonstrating that it outperforms state-of-the-art solutions (described in Section <ref>) in terms of personalization, relevance, and novelty of recommendations, by better describing human behavior in selecting new, interesting content.
§ NOTATION AND RELATED WORK
Formally, a folksonomy can be represented with three node sets: users U = {u_1, … , u_n}, items I = {i_1, …, i_m} and tags T = {t_1, …, t_k}.
Each binary relation between them can be described using adjacency matrices, A^UI, A^IT, A^UT respectively for user-item, item-tag and user-tag relations. If the user u_l has collected the item i_s, we set a^UI_l,s = 1, a^UI_l,s = 0 otherwise. Similarly, a^IT_s,q = 1 if i_s is tagged with t_q and a^IT_s,q = 0 otherwise. Furthermore, a^UT_l,q = 1 if u_l owns items tagged with t_q, and a^UT_l,q = 0 otherwise. The three matrices can be represented as a tripartite graph G^T=(U,I,T,E) where U, I, and T are set of nodes representing users, items, and tags respectively, and E is the set of edges between nodes corresponding to the elements equal to 1 in the matrices.
A bipartite graph G^B=(U,V,E) may be used instead of a tripartite graph, with U the set of users, and V the set of either items or tags. In the following, we will consider bipartite user-item graphs with n users and m items where an edge between the user u_l and the item i_s indicates that u_l owns i_s.
ProbS <cit.> assigns a generic resource to each item i_s held by a target user u_t. The resource is evenly split amongst the users directly connected to the item. Subsequently, each user evenly splits the portion of the resource received amongst the items connected to her. The final score f^P_j of each item i_j is given by the sum of the portions of resources that are assigned to it after the two steps, or, more formally:
f^P_j = ∑_l = 1^n∑_s = 1^ma_l,ja_l,sa_t,s/k(u_l)k(i_s) j = 1, 2, …, m
where k(u_l) = ∑_j = 1^m a_l,j is the number of items collected by the user u_l and k(i_s) = ∑_j=1^n a_s,j is the number of users interested in the item i_s. The set of f^P_j values determines a ranking of contents concerning the interests of u_t.
ProbS tends to recommend items with the highest popularity.
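A minimal numpy sketch of this two-step resource spreading for a single target user is given below (variable names are illustrative; A is the binary user-item adjacency matrix):
[language=Python, breaklines]
import numpy as np

def probs_scores(A, t):
    # A: (n_users, n_items) binary adjacency matrix; t: index of the target user.
    k_item = np.maximum(A.sum(axis=0), 1)      # item degrees k(i_s)
    k_user = np.maximum(A.sum(axis=1), 1)      # user degrees k(u_l)
    item_resource = A[t] / k_item              # step 1: each item of u_t splits its resource
    user_resource = A @ item_resource          # ... evenly among its connected users
    return A.T @ (user_resource / k_user)      # step 2: users split it among their items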
HeatS <cit.> uses rules opposite to those of ProbS. Each resource is first split amongst the items related to each user, and then amongst the users connected to each item. The score of the item i_j for the target user u_t is:
f^H_j = 1/k(i_j)∑_l = 1^n∑_s = 1^ma_l,ja_l,sa_t,s/k(u_l) j = 1, 2, …, m
HeatS tends to recommend non-popular items.
Hybrid (ProbS + HeatS) <cit.> calculates a linear combination of ProbS and HeatS
using a hybridization parameter λ∈ [0,1] such that setting λ = 0 yields pure HeatS, while λ = 1 yields pure ProbS. The value of λ may be difficult to select in real situations.
PD and BHC <cit.> try to correct ProbS and HeatS. Preferential Diffusion (PD) divides the ProbS scores by the degree of the recommended item, with an exponent ϵ used as a parameter to control the normalization. Biased Heat Conduction (BHC) multiplies the HeatS score of each recommended item by its popularity, using an exponent γ similar to ϵ. An optimal tuning of the parameters could be difficult to achieve in practice.
§ PLIERS
PLIERS is inspired by ProbS and shares with it the same two steps. In addition, PLIERS normalizes the value obtained by ProbS when comparing an item i_j with one of the items of the target user, i_s, by multiplying the score by the cardinality of the intersection between the set of users connected to i_j and the set of users connected to i_s, divided by k(i_j) (i.e., the popularity of i_j). In this way, items with popularity similar to the popularity of the items of the target user, and which possibly share the same set of users, are preferred.
The score of the item i_j is then:
f^PL_j = ∑_l = 1^n∑_s = 1^ma_l,j a_l,s a_t,s/k(u_l) k(i_s) | U_s ∩ U_j |/k(i_j) j = 1,…,m
where U_j is the set of users connected to the item i_j and k(i_j) is the popularity degree of the item i_j. The normalization introduced in PLIERS favours items whose popularity (i.e. number of connected users) is similar to that of the items already owned by the target user. All the procedures above can be equally applied to user-tag graphs, leading to the same considerations.
§ EXPERIMENTAL RESULTS
We compared PLIERS with reference TBRSs: HeatS, ProbS, Hybrid with λ=0.5, PD with ϵ=-0.85, and BHC with γ=0.8, as in <cit.>. We used three benchmark datasets containing user-tag bipartite graphs. We assessed the accuracy of the obtained recommendations by calculating the level of personalization in terms of the popularity of the recommended tags and the appropriateness of the recommendations with respect to the users' interests. We also performed a link prediction task on the datasets <cit.>. It consists of randomly removing a few links from the graph and calculating the degree to which the recommendations coincide with the removed links. A good recommender system should be able to approximate the original graph, although removing links changes the structure of the graph and a complete reconstruction is not possible, particularly with sparse graphs.
Datasets Description.
We used three bipartite user-tag graphs obtained from Twitter <cit.>, MovieLens and Delicious <cit.>. The graphs extracted from these datasets are very large (i.e., 1.6M users and 30.2M tags for Twitter, 1.9K users and 40.9K tags for Delicious, and 8.7K users and 39.2K tags for MovieLens). Due to memory constraints, we sampled portions of these graphs with a maximum size of 5,000 users. Table <ref> summarizes the characteristics of the obtained samples, where U, T, and L are respectively the number of users, tags, and links. k(T) is the average tag degree in the graph and p(T_U) is the average popularity of the tags for the average user. From Table <ref>, we can note that tags in Twitter are connected, on average, to fewer users than in the other datasets (i.e., k(T) is lower). This could lead to less accurate results in terms of link prediction.
Metrics.
We defined an index V (variance), to calculate the average difference in terms of popularity between the recommended tags and those already owned by the users:
V = 1/n∑_l=1^n1/r_l∑_q=1^r_l√((k(t_q) - p(T_u_l))^2)
where n is the number of users in the network, r_l is the number of recommended tags for user u_l and p(T_u_l) = 1/z∑_j=1^z k(t_j) is the mean popularity of the tags originally linked to the user u_l with z the number of those tags. The overlap O measures the percentage of users connected to both the recommended tag and one of the tags of the target user, averaged for all the tags of the user and then for all the users. It gives us an idea of the potential interest for the users in the recommended tags. It is defined as:
O = 1/n∑_l=1^n1/r_l∑_q=1^r_l1/z∑_k=1^z J(U_i_q, U_i_k)
where U_i_q is the set of users connected to the item i_q and J(S_1, S_2) is the Jaccard index, which measures the percentage of overlap between two generic sets S_1 and S_2. A good system should provide both a low V and a high O.
For link prediction, we used three standard metrics. The recall (R) index measures the number of recovered links within the first L recommendations for each user divided by L. The precision (P) measures the number of recovered links within the first L recommendations divided by the total number of recovered links, for each user. The novelty (N) index measures the capacity of a recommender system to generate novel and unexpected results, generally related to items with low popularity, quantified by measuring the average popularity of the first L recommended items. A good system should have high P and R, and low N.
Results and Discussion.
Table <ref> shows the values of V and O for the different datasets and TBRSs. We highlight in bold the values better than those achieved by PLIERS. We note that PLIERS always yields the best trade-off. As far as V is concerned, PLIERS obtains values very close to the best results for two traces, and it always outperforms both ProbS and Hybrid. It yields the best O, or a value very close to the best for Twitter. With Delicious, HeatS, PD, and BHC perform better than PLIERS in terms of V. Yet, with this trace, PLIERS supplies an overlap that largely outperforms those of the solutions yielding a better V. These results indicate that PLIERS is able to recommend tags whose popularity is comparable with that of the tags already owned by the users, and whose relevance is higher than (or similar to) that of the other solutions.
Figure <ref> depicts the results of the link prediction task. As in <cit.>, we removed 10% of the links. From the figure, we note that PLIERS again supplies the best trade-off. Its R and P are always very similar to the results of ProbS and Hybrid. In the case of Twitter, PLIERS' P and R are worse than those of ProbS and Hybrid, but in this case tags are connected, on average, to fewer users than in the other graphs, and the removal of random links has a higher impact on the graph structure, which negatively affects the recommendations. In this case, recommending tags with high popularity (as done by ProbS and Hybrid) is probably more effective. However, the level of personalization is clearly worse than the one obtained by PLIERS, as shown by the V index. For the N index, PLIERS is always better than ProbS and Hybrid, and reaches a value that is closer to the value of p(T_U). Hence, PLIERS is able to recommend tags of popularity comparable to that of the tags of the target user.
§ CONCLUSIONS
In this work, we proposed a new tag-based recommender system called PLIERS that recommends tags or items with popularity as similar as possible to that of those already owned by the users. We compared PLIERS with other reference systems in the literature. The results indicate that PLIERS recommends tags with popularity closer to that of the tags owned by the users than the other solutions do. In the case of link prediction, PLIERS performs very well, with results comparable to those of the other existing recommender systems in terms of precision and recall, while providing better novelty in the recommendations.
§ ACKNOWLEDGMENT
This work was partially funded by Registro.it within the Collective Awareness Participatory Platform research project (CAPP) and by EIT Digital within GameBus project.
|
http://arxiv.org/abs/2307.00724v2
|
20230703030944
|
LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion
|
[
"Weiyi Xiong",
"Jianan Liu",
"Tao Huang",
"Qing-Long Han",
"Yuxuan Xia",
"Bing Zhu"
] |
cs.CV
|
[
"cs.CV"
] |
LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion
Weiyi Xiong1,
Jianan Liu1,
Tao Huang, Senior Member, IEEE,
Qing-Long Han, Fellow, IEEE,
Yuxuan Xia, and
Bing Zhu2, Member, IEEE
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
W. Xiong and B. Zhu are with the School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, P.R. China. Email:
[email protected] (W. Xiong);
[email protected] (B. Zhu).
J. Liu is with Vitalent Consulting, Gothenburg, Sweden. Email: [email protected].
T. Huang is with the College of Science and Engineering, James Cook University, Smithfield QLD 4878, Australia. Email: [email protected].
Q.-L. Han is with the School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC 3122, Australia. Email: [email protected].
Y. Xia is with the Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden. Email: [email protected].
1Both authors contribute equally to the work and are co-first authors.
2Corresponding author.
August 1, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
As an emerging technology and a relatively affordable device, the 4D imaging radar has already been confirmed effective in performing 3D object detection in autonomous driving. Nevertheless, the sparsity and noisiness of 4D radar point clouds hinder further performance improvement, and in-depth studies about its fusion with other modalities are lacking.
On the other hand, most camera-based perception methods transform the extracted image perspective-view features into the bird's-eye view geometrically via the “depth-based splatting" proposed in Lift-Splat-Shoot (LSS), and some researchers exploit other modalities such as LiDARs or ordinary automotive radars for enhancement. Recently, a few works have applied the “sampling" strategy for image view transformation, showing that it outperforms “splatting" even without image depth prediction. However, the potential of “sampling" has not been fully exploited.
In this paper, we investigate the “sampling" view transformation strategy for camera and 4D imaging radar fusion-based 3D object detection.
In the proposed model, LXL,
predicted image depth distribution maps and radar 3D occupancy grids are utilized to aid image view transformation, called “radar occupancy-assisted depth-based sampling".
Experiments on VoD and TJ4DRadSet datasets show that the proposed method outperforms existing 3D object detection methods by a significant margin without bells and whistles. Ablation studies demonstrate that our method performs the best among different enhancement settings.
4D imaging radar, camera, multi-modal fusion, 3D object detection, deep learning, autonomous driving
§ INTRODUCTION
Perception plays a pivotal role in autonomous driving since subsequent procedures, such as trajectory prediction, motion planning and control, rely heavily on accurately perceiving the environment. Key tasks in this domain encompass segmentation <cit.><cit.>, object detection <cit.>, and tracking <cit.>, with 3D object detection being the most widely researched area.
The approach to performing 3D object detection in autonomous driving varies based on the type of sensor employed. LiDARs, cameras, and radars are commonly utilized sensors characterized by distinct data structures and properties. LiDAR data is in the form of point clouds, providing precise 3D geometric information regarding an object's shape, size, and position. Meanwhile, camera images offer dense and regular data, supplying rich semantic information.
However, the high cost of LiDARs prohibits their widespread adoption in household vehicles, and cameras are susceptible to challenging lighting and weather conditions.
In contrast, radars are cost-effective and resilient to external factors, making them vital for robust detection in current advanced driver assistance systems (ADAS) and autonomous driving <cit.>. Moreover, radars hold promise for future applications in cooperative perception.
However, conventional automotive radars, when used alone, lack height information and generate sparse point clouds, posing challenges for 3D object detection. The emergence of 4D imaging radars has led to the generation of higher-resolution 3D point clouds <cit.>. Although there is still a notable disparity in density and quality compared to LiDAR point clouds, several studies <cit.><cit.><cit.> have explored 4D radar-based detection and demonstrated its feasibility.
In 3D object detection, researchers increasingly turn to multi-modal fusion techniques to overcome the limitations associated with single-modal data to improve the overall performance. One prominent approach involves independently extracting bird's-eye-view (BEV) features from different sensor modalities and integrating them into a unified feature map.
The utilization of BEV representation offers numerous advantages. Firstly, it enables more efficient processing compared to point-based or voxel-based methods. Additionally, leveraging mature 2D detection techniques can facilitate learning processes. Furthermore, occlusion, a common challenge in other representations like the range-view, is mitigated in BEV. Notably, using BEV representation simplifies and enhances the effectiveness of multi-modal fusion strategies.
Despite the benefits of using BEV representation for multi-modal fusion in 3D object detection, transforming images from a perspective view (PV) to BEV is intricate. Current approaches can be categorized into geometry-based <cit.><cit.><cit.> and network-based methods <cit.><cit.>. Geometry-based approaches, relying on explicit utilization of calibration matrices, offer a more straightforward learning process than network-based approaches. One widely employed geometry-based method is “depth-based splatting". Initially introduced in Lift-Splat-Shoot (LSS) <cit.>, this method lifts image pixels into 3D space guided by predicted pixel depth distributions.
Several enhancements have been proposed to improve its performance. For example, BEVDepth <cit.> generates a “ground-truth" depth map from LiDAR points to supervise image depth prediction, whereas CRN <cit.> employs a 2D radar occupancy map to assist view transformation. Another approach, called “sampling", has demonstrated superior performance even without explicit depth prediction, as exemplified by Simple-BEV <cit.>.
However, unlike the common practice of “splatting", few studies have explored the combination of “sampling" with predicted depths. Moreover, the potential of “sampling" in conjunction with other modalities remains largely unexplored, indicating untapped opportunities for further improvement in this area.
Despite the growing interest in multi-modal fusion techniques for 3D object detection, the specific integration of 4D imaging radar and cameras has received limited attention in the existing literature.
Existing methods designed for LiDAR-camera fusion, such as the popular “splatting" approach, are applicable to 4D imaging radar and camera fusion, but their enhancements, such as BEVDepth <cit.>, may fail due to the distinct characteristics of radar point clouds. Specifically, when point clouds from 4D radars rather than LiDARs are available, the depth maps generated in BEVDepth may suffer from the sparsity and imprecision of radar points.
Additionally, methods devised specifically for radars, such as the technique presented in CRN <cit.>, may introduce computational complexity and hinder the real-time inference capabilities of the model.
Therefore, there is a clear need to address this research gap by developing novel fusion methods tailored for 4D imaging radar and camera fusion.
In this study, we aim to enhance the existing “sampling" method by leveraging the unique advantages of 4D imaging radar.
By conducting extensive ablation studies, we show how 4D imaging radar can assist in image view transformation and demonstrate its impact on the overall 3D object detection performance.
The contributions of this work are threefold:
* Our proposed approach, LXL, is designed to perform 4D imaging radar and camera fusion-based 3D object detection.
This is an early attempt in this field and serves as a new benchmark for subsequent studies.
* A “radar occupancy-assisted depth-based sampling" feature lifting strategy is proposed in our view transformation module. It utilizes bi-linear sampling to get image features for pre-defined voxels, followed by two parallel operations: one combines image 3D features with the information from predicted image depth distribution maps, and the other exploits estimated radar 3D occupancy grids.
This design enhances the underdeveloped “sampling" strategy by introducing predicted depth distribution maps and radar 3D occupancy grids as assistance, leading to more precise feature lifting results.
* Experiments show that LXL outperforms state-of-the-art models on the View-of-Delft (VoD) <cit.> and TJ4DRadSet <cit.> datasets by 6.7% and 2.5%, respectively, demonstrating the effectiveness of LXL.
In addition, comparisons of different feature lifting and radar assistance strategies are made through ablation studies, showing the superiority of the proposed view transformation module.
The rest of the paper is organized as follows. Section <ref> reviews recent works on camera-based, camera and ordinary automotive radar fusion-based, and 4D imaging radar-based 3D object detection methods. Section <ref> details our proposed model, with focus on the view transformation module. Experimental settings and performances of our model and the corresponding analysis are provided in section <ref>. Finally, we summarize the work in this paper and point out the future research direction in Section <ref>.
§ RELATED WORK
§.§ 3D Object Detection with Cameras
The camera-based 3D object detection work can be mainly categorized into three types.
The first type involves directly estimating 3D bounding boxes based on image PV features <cit.>. However, due to the inherent lack of depth information in images, the performances of these methods are limited.
The second type focuses on directly transforming PV features into BEV and predicting bounding boxes on the top-down view <cit.><cit.>. This approach requires additional information, such as pixel heights or depths, to achieve accurate view transformation. A classical algorithm, inverse perspective mapping (IPM) <cit.>, projects PV features onto the BEV with the assumption that all pixels lie on the ground. However, its performance is limited as the assumption is not always true.
Other methods <cit.><cit.> utilize transformers to learn view transformation and reduce the impact of inaccurate depth estimation.
The third type involves lifting pixels into a point cloud or voxels <cit.><cit.><cit.><cit.><cit.> and applying networks designed for LiDAR-based 3D object detection. These approaches assume or estimate pixel depth or depth distributions to guide the 2D-to-3D projection. Pseudo-LiDAR <cit.> is a pioneering work that transforms images into point clouds based on regressed depth. However, these networks cannot be trained end-to-end, limiting their performance. CaDDN <cit.>, BEVDet <cit.>, and BEVDepth <cit.> employ a technique where they discretize the depth space into bins and treat the depth estimation task as a depth bin classification problem. Subsequently, image features are lifted into voxels based on the estimated depth distribution. In contrast, M^2BEV <cit.> adopts a different strategy by assuming a uniform depth distribution instead of predicting depth probabilities. This approach mitigates the computational burden while allowing for effectively lifting image features into voxels.
It is important to note that most of the methods belonging to the third type also detect objects on the BEV. Projecting image features from PV to BEV alleviates the occlusion problem and facilitates multi-modal fusion. As a result, these types of methods have become more prevalent in recent years.
§.§ 3D Object Detection with Camera and Ordinary Automotive Radar Fusion
Ordinary automotive radars cannot measure height information, posing a challenge for models to estimate 3D bounding boxes accurately from 2D radar points solely. Consequently, researchers have explored the fusion of 3D radar and camera data for improved 3D object detection, with earlier works including <cit.><cit.><cit.>.
Recent advancements in this field have introduced novel approaches to fuse camera and radar data. For instance, Simple-BEV <cit.> lifts image pixels to 3D voxels and concatenates them with radar BEV features before reducing the height dimension. To enhance the model's resilience against modal failure, CramNet <cit.> transforms the image foreground into a point cloud and utilizes it along with radar points for subsequent bounding box prediction. Given the relatively low reliability of image depth estimation, ray-constrained cross-attention is proposed in CramNet to refine the 3D location of pixels. Another work, RCBEV <cit.>, employs a spatial-temporal encoder to extract features from accumulated radar sweeps and introduces a two-stage multi-modal fusion strategy.
In contrast to many previous approaches focusing on feature-level fusion, RADIANT <cit.> adopts a result-level fusion strategy, merging depth predictions from radar and camera heads to achieve lower localization errors.
Attention mechanisms and transformers have also been leveraged to enhance camera and radar fusion performance. MVFusion <cit.> and CRN <cit.> utilize cross-attention to fuse camera and radar features. Additionally, they employ one modal as a guide to process the other modal. Specifically, MVFusion <cit.> derives a Semantic Indicator from images to extract image-guided radar features, while CRN <cit.> projects radar points onto images and generates occupancy maps within the view frustum, facilitating the depth-based PV to BEV transformation of image features. CRAFT <cit.> primarily focuses on image detection and utilizes radar measurements to refine image proposals through the Spatio-Contextual Fusion Transformer. TransCAR <cit.> incorporates a transformer decoder where vision-updated queries interact with radar features.
§.§ 3D Object Detection with 4D Imaging Radars
With advancements in 4D imaging radars, which can generate 3D point clouds, there is a possibility of regressing 3D bounding boxes using radar modality alone. However, despite the availability of a few datasets providing 4D radar point clouds <cit.><cit.><cit.> and even 4D radar tensors <cit.><cit.>, the research in this area remains limited. Most of the existing works in this field incorporate modules from LiDAR-based models, such as using the backbone of SECOND <cit.> or the detection head of CenterPoint <cit.>. Nevertheless, 3D point clouds generated by 4D imaging radars are typically sparser and noisier than LiDAR point clouds, often leading to lower model performance <cit.>.
For example, <cit.> applies PointPillars <cit.> to perform 3D object detection using 4D imaging radars and achieves reasonable results. RPFA-Net <cit.> modifies the pillarization operation of PointPillars <cit.> by replacing the PointNet <cit.> with a self-attention mechanism to extract global features, thereby enhancing the model's orientation estimation capability. Using a spatial-temporal feature extractor on multi-frame radar point clouds and employing an anchor-based detection head, RadarMFNet <cit.> achieves more accurate detection than single-frame methods.
To leverage the data from other sensors such as LiDARs and cameras, researchers have started to explore the fusion of these modalities with 4D imaging radars to achieve improved results. For instance, InterFusion <cit.> and M^2-Fusion <cit.> employ attention mechanisms <cit.> to fuse pillarized LiDAR and radar features. <cit.> utilizes self-supervised model adaptation (SSMA) blocks <cit.> for a pixel-level fusion of image, radar BEV, and radar front view (FV) features. Similarly, RCFusion <cit.> incorporates an interactive attention module to fuse radar and image BEV features.
As 4D radar and camera fusion-based 3D object detection remains an area requiring further investigation, this paper aims to inspire subsequent researchers to explore this domain.
§ PROPOSED METHOD
§.§ Overall Architecture
The overall architecture of our model is depicted in Fig. <ref>. The model comprises four main components: the radar branch, image branch, fusion module, and detection head. Each component plays a crucial role in the 3D object detection process:
1. The radar branch is responsible for processing radar point clouds as an input. It extracts radar BEV features, which capture essential information from the radar modality. Additionally, the radar branch generates 3D radar occupancy grids, representing the radar points' occupancy status within the scene.
2. The image branch focuses on extracting multi-scale image PV features. These features encode relevant visual information from the image modality. We employ predicted image depth distribution maps and 3D radar occupancy grids to assist in transforming the image PV features into the BEV domain. By aligning the image features with the radar BEV representation, effective fusion with the radar features is enabled.
3. The fusion module is a key component in integrating the BEV features from both radar and image branches. It combines the complementary information each modality provides, allowing for enhanced object detection performance. The fusion process leverages the BEV features to generate a unified representation that captures the combined strengths of radar and image data.
4. The detection head is responsible for bounding box regression and classification for each potential object in the scene. It utilizes the fused features to estimate the 3D position, dimensions, orientation, and category of the objects. By leveraging the comprehensive information from both radar and image modalities, the detection head produces accurate predictions.
Further details regarding the proposed method are elaborated in the subsequent subsections.
§.§ Radar Branch
In the radar branch, the input radar point cloud is first voxelized into pillars, similar to the voxelization process employed in PointPillars <cit.>. Subsequently, the pillar representation is fed into the radar backbone and neck modules to extract relevant features. The radar backbone and neck are constructed following the widely referenced network SECOND <cit.>. The radar backbone extracts multi-level BEV features from the voxelized pillars. These features capture important spatial and contextual information inherent in the radar modality. The radar neck module then combines these multi-level features into a unified single-scale representation, facilitating subsequent fusion and analysis.
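For illustration, the pillarization step can be sketched as follows. This is a minimal example rather than the actual implementation: the function name, the per-pillar mean pooling (PointPillars instead learns per-point features with a small PointNet before max pooling) and the grid sizes are assumptions, chosen here only to match the BEV resolution used later in the paper.

```python
import torch

def pillarize(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6),
              pillar_size=0.16, bev_hw=(320, 320)):
    """Scatter radar points into BEV pillars by per-pillar mean pooling.

    points: (N, C) tensor whose first two channels are (x, y) in metres.
    Returns a (C, H, W) pseudo-image to be fed into the SECOND-style backbone.
    Out-of-range points are simply clamped to the border cells in this sketch.
    """
    H, W = bev_hw
    xi = ((points[:, 0] - x_range[0]) / pillar_size).long().clamp(0, W - 1)
    yi = ((points[:, 1] - y_range[0]) / pillar_size).long().clamp(0, H - 1)
    flat = yi * W + xi                                       # pillar index of every point
    count = torch.zeros(H * W).scatter_add_(0, flat, torch.ones(len(points)))
    bev = torch.zeros(points.shape[1], H * W)
    bev.scatter_add_(1, flat.expand(points.shape[1], -1), points.t())
    return (bev / count.clamp(min=1)).view(points.shape[1], H, W)
```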
The obtained radar BEV feature maps serve two primary purposes within our model. Firstly, they are forwarded to the fusion module, where they are integrated with the image BEV features for effective object detection. Secondly, the radar BEV feature maps are utilized to predict radar 3D occupancy grids. The specific details and motivations behind these components are discussed further in Section <ref>.
To generate radar 3D occupancy grids, we employ an occupancy net in our proposed method. The radar BEV feature map, denoted as 𝐅_BEV^P with a shape (X, Y, C_P), is fed into the occupancy net. Here, X and Y represent the dimensions of the feature map, and C_P corresponds to the number of channels. In our framework, the height of the 3D occupancy grid, denoted as Z, is predefined. The occupancy net can be formulated as follows:
𝐎_3D^P=𝚂𝚒𝚐𝚖𝚘𝚒𝚍(𝙲𝚘𝚗𝚟_C_P→ Z(𝐅_BEV^P)),
where 𝐎_3D^P∈ℝ^X× Y× Z is the predicted 3D occupancy grid and 𝙲𝚘𝚗𝚟_a→ b represents a 1×1 convolution layer with a input channels and b output channels.
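A minimal sketch of this occupancy net is shown below. The module name and the tensor layout are illustrative assumptions; only the 1×1 convolution from C_P to Z channels followed by a sigmoid is taken from the equation above.

```python
import torch
import torch.nn as nn

class RadarOccupancyNet(nn.Module):
    """Predicts a Z-bin occupancy probability for every BEV cell from radar BEV features."""
    def __init__(self, c_radar: int, z_bins: int):
        super().__init__()
        self.conv = nn.Conv2d(c_radar, z_bins, kernel_size=1)    # C_P -> Z channels

    def forward(self, f_bev_radar):                   # (B, C_P, X, Y)
        occ = torch.sigmoid(self.conv(f_bev_radar))   # (B, Z, X, Y)
        return occ.permute(0, 2, 3, 1)                # (B, X, Y, Z), i.e. the occupancy grid per sample
```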
To transform the image view from PV to BEV, we leverage the radar 3D occupancy grids as an assistance. The specific details and the process of this transformation will be elaborated upon in Section <ref>.
§.§ Image Branch
The image branch consists of several key modules: the image backbone, neck, depth net, and view transformation module.
The image backbone extracts multi-level image PV features. The image neck is then employed to further enhance these features by mixing them across scales. In our model, we adopt the same architecture as YOLOX <cit.> for this purpose, utilizing CSPNet <cit.> and PAN <cit.>.
The depth net is implemented as a 1×1 convolutional layer for each multi-level image PV feature. Similar to many existing methods <cit.><cit.><cit.><cit.>, we discretize the depth space into multiple bins and treat the depth estimation task as a depth bin classification task. Consequently, the depth net outputs a depth probability distribution for each pixel.
Given the image PV feature map of the i-th level, denoted as 𝐅_i,PV^I ∈ℝ^H_i× W_i× C_I, the depth distribution map 𝐃_i^I ∈ℝ^H_i× W_i× D can be obtained as
𝐃_i^I=𝚂𝚘𝚏𝚝𝚖𝚊𝚡(𝙲𝚘𝚗𝚟_C_I→ D(𝐅_i,PV^I)),i=1,2,⋯,N_lvl,
where D represents the pre-defined number of depth bins, N_lvl denotes the number of levels and 𝚂𝚘𝚏𝚝𝚖𝚊𝚡(·) is applied along the depth dimension.
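The per-level depth net can be sketched in the same spirit; again the class name and layout are assumptions, while the 1×1 convolution from C_I to D channels and the softmax along the depth dimension follow the equation above.

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Per-level 1x1 convolutional head producing a depth-bin distribution for each pixel."""
    def __init__(self, c_img: int, depth_bins: int):
        super().__init__()
        self.conv = nn.Conv2d(c_img, depth_bins, kernel_size=1)  # C_I -> D channels

    def forward(self, f_pv):                              # (B, C_I, H_i, W_i)
        return torch.softmax(self.conv(f_pv), dim=1)      # (B, D, H_i, W_i)
```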
The final module in the image branch is the view transformation module. Its primary objective is to lift the image PV features into a 3D space and compress the height dimension. The detailed workings of this module will be elaborated upon in Section <ref>.
§.§ View Transformation
The process of view transformation, which incorporates predicted multi-scale depth distribution maps and radar 3D occupancy grids, is illustrated in Fig. <ref>. This method is referred to as “radar occupancy-assisted depth-based sampling" in this paper.
Feature Lifting:
There are two primary strategies for geometrically lifting image features into 3D space.
The first strategy is “sampling", where pre-defined 3D voxels are projected onto the image plane, and the features of nearby pixels in the projected region are combined to form the voxel feature.
Representative models utilizing this strategy include <cit.><cit.><cit.>.
The second strategy, “splatting", involves transforming each image pixel into points or frustum voxels along a straight line in 3D space based on the calibration matrix. The features of these points or frustum voxels are determined by their corresponding pixel features.
Subsequently, the points are voxelized, or the frustum voxels are transformed into cubic voxels. Prominent models that employ the “splatting" technique for view transformation include <cit.><cit.><cit.>.
Simple-BEV <cit.> has demonstrated that the “sampling" approach outperforms “splatting", and our experiments in Section <ref> corroborate this conclusion.
Therefore, we have chosen the “sampling" strategy for view transformation in our model.
Specifically, given the 3-dimensional coordinates of the pre-defined 3D voxels, denoted as 𝐕^P ∈ℝ^X× Y× Z×3 in the radar coordinate system, the radar-to-image coordinate transformation matrix 𝐓_r2c∈ℝ^3×3, and the camera intrinsic matrix 𝐈∈ℝ^3×4, we first project the voxel centers onto the image plane using the following equation:
𝐕^I_i,j,k=𝐈·𝐓̅_r2c·𝐕̅^P_i,j,k,
where 𝐕̅^P_i,j,k=[𝐕^P_i,j,k,1]∈ℝ^4 is the extended coordinates and
𝐓̅_r2c=[ 𝐓_r2c 0; 0 1 ]∈ℝ^4×4
is the extended coordinate transformation matrix. 𝐕^I_i,j,k=[ud,vd,d] is the projected coordinate in the image coordinate system, where (u,v) and d denote the pixel index and the image depth, respectively.
Subsequently, the feature of each pre-defined voxel can be obtained through bi-linear sampling on each multi-level image PV feature map. Specifically, we select the 2×2 pixels closest to (u,v) and compute the weighted sum of their features, which is then assigned to the corresponding voxel as its feature.
This step is accomplished using the “𝚝𝚘𝚛𝚌𝚑.𝚗𝚗.𝚏𝚞𝚗𝚌𝚝𝚒𝚘𝚗𝚊𝚕.𝚐𝚛𝚒𝚍_𝚜𝚊𝚖𝚙𝚕𝚎" operation, resulting in image 3D voxel features 𝐅^I_3D∈ℝ^N_lvl× X× Y× Z× C_I.
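The projection and bi-linear sampling described above can be sketched as follows for a single camera and a single feature level. The helper name, the packing of intrinsics and extrinsics into one 3×4 projection matrix per sample, and the absence of a mask for voxels behind the camera are simplifying assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def sample_image_features(f_pv, voxel_xyz, proj):
    """Bi-linearly sample PV features at the image locations of pre-defined voxel centres.

    f_pv:      (B, C_I, H, W) image PV features of one level.
    voxel_xyz: (X, Y, Z, 3) voxel centres in the radar coordinate system.
    proj:      (B, 3, 4) combined projection matrix (intrinsics times extrinsics).
    Returns (B, C_I, X, Y, Z); voxels projecting outside the image receive zeros.
    """
    B, _, H, W = f_pv.shape
    X, Y, Z, _ = voxel_xyz.shape
    homo = torch.cat([voxel_xyz.reshape(-1, 3), voxel_xyz.new_ones(X * Y * Z, 1)], dim=1)  # (N, 4)
    cam = torch.einsum('bij,nj->bni', proj, homo)        # (B, N, 3) = [u*d, v*d, d]
    d = cam[..., 2].clamp(min=1e-5)
    u, v = cam[..., 0] / d, cam[..., 1] / d
    grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1)  # normalise to [-1, 1]
    feat = F.grid_sample(f_pv, grid.view(B, -1, 1, 2), mode='bilinear',
                         padding_mode='zeros', align_corners=True)          # (B, C_I, N, 1)
    return feat.view(B, -1, X, Y, Z)
```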
Depth-Based Sampling: However, the aforementioned operations do not consider the predicted image depths, which may lead to sub-optimal feature lifting. While LSS <cit.> employs the outer product for “depth-based splatting", this approach is not directly applicable in our “sampling" case due to the different coordinate systems of the predicted depth distribution maps and image 3D features.
To address this issue, we leverage tri-linear sampling, the 3D extension of bi-linear sampling, on the predicted multi-scale image depth distribution maps in the image coordinate system, denoted as 𝐃_i^I, to obtain depth probabilities 𝐃^I_3D∈ℝ^N_lvl× X× Y× Z for the pre-defined voxels in the radar coordinate system. Subsequently, the image 3D voxel features are multiplied by the sampled depth probabilities using the following equation:
𝐅'^I_3D=𝐅^I_3D⊙𝐃^I_3D.
Here, ⊙ represents element-wise multiplication with broadcasting, and 𝐅'^I_3D∈ℝ^N_lvl× X× Y× Z× C_I denotes the result of the “depth-assisted image feature lifting" process.
Radar Occupancy-Assisted Sampling: The model faces challenges in learning accurate depth prediction without direct supervision, as the image depth information is often ambiguous <cit.>.
To address this issue, one possible approach is to “generate" depth supervision using radar points. This method involves projecting radar points onto images and assigning their depths as the ground-truth depths of the nearest pixels. However, due to the sparsity of the radar point cloud, only a few pixels have ground-truth depth information, and the accuracy of the ground-truth depths is limited because of the noise inherent in radar measurements.
Another approach leverages the radar modality in a different manner by adding an additional branch for lifting the image PV features and fusing with the aforementioned lifted features 𝐅'^I_3D. The latest work of this approach, CRN <cit.>, projects the 2D radar points onto the image plane and applies convolutional operations after pillarization. The resulting convolution output, referred to as the radar occupancy map, is in the image coordinate system, and aids in the view transformation process.
However, the coordinate transformation and pillarization procedures are time-consuming. In addition, when combining CRN with our “sampling" strategy, the radar occupancy map must be re-sampled to the radar coordinate system, which further increases the complexity. Thus, our proposed method generates radar occupancy grids in the radar coordinate system directly, as explained in Section <ref>.
It is worth noting that in our model, radar 3D occupancy grids are predicted instead of radar 2D occupancy maps, as 4D radar is capable of capturing height information. Moreover, since the required occupancy grids and radar BEV features share the same BEV resolution, they are generated from the radar BEV features for simplicity.
The radar 3D occupancy grids are then multiplied by 𝐅^I_3D to obtain the radar-assisted image 3D features 𝐅”^I_3D∈ℝ^N_lvl× X× Y× Z× C_I using the following equation:
𝐅”^I_3D=𝐅^I_3D⊙𝐎^P_3D.
Height Compression: The resulting radar-assisted image 3D features, denoted as 𝐅”^I_3D, and the depth-assisted image 3D features, denoted as 𝐅'^I_3D, are concatenated along the channel dimension and summed along the level dimension. Subsequently, the tensor is reshaped from X× Y× Z× 2C_I to X× Y× (Z·2C_I), enabling the application of convolutional layers to facilitate spatial interaction. The process can be mathematically expressed as
𝐅^I_BEV=𝙲𝚘𝚗𝚟𝚜(𝚁𝚎𝚜𝚑𝚊𝚙𝚎(𝙲𝚘𝚗𝚌𝚊𝚝(𝐅'^I_3D, 𝐅”^I_3D))),
where 𝐅^I_BEV∈ℝ^X× Y× C_I represents the final image BEV features, which are the output obtained from the view transformation module utilizing the radar occupancy-assisted depth-based sampling method.
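Putting the pieces together, the depth- and occupancy-assisted lifting followed by height compression can be sketched as below. The function name, the tensor layouts and the example convolution stack are assumptions; the two element-wise products, the channel-wise concatenation, the sum over levels and the mapping from Z·2C_I to C_I channels follow the equations above.

```python
import torch
import torch.nn as nn

def lift_and_compress(f3d, depth_prob, occ, convs):
    """Combine sampled image voxel features with depth and occupancy cues, then flatten height.

    f3d:        (B, N_lvl, C_I, X, Y, Z) image voxel features from bi-linear sampling.
    depth_prob: (B, N_lvl, 1,   X, Y, Z) tri-linearly sampled depth probabilities.
    occ:        (B, 1,     1,   X, Y, Z) predicted radar 3D occupancy grid.
    convs:      2D conv stack mapping Z * 2 * C_I input channels to C_I output channels.
    Returns (B, C_I, X, Y) image BEV features.
    """
    f_depth = f3d * depth_prob                                # depth-assisted features
    f_occ = f3d * occ                                         # occupancy-assisted features
    fused = torch.cat([f_depth, f_occ], dim=2).sum(dim=1)     # (B, 2*C_I, X, Y, Z)
    B, C2, X, Y, Z = fused.shape
    fused = fused.permute(0, 1, 4, 2, 3).reshape(B, C2 * Z, X, Y)   # height into channels
    return convs(fused)

# e.g. convs = nn.Sequential(nn.Conv2d(Z * 2 * C_I, C_I, 3, padding=1), nn.ReLU())
```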
§.§ Multi-modal Fusion and Detection Head
After acquiring the radar and image BEV features, the fusion module integrates their information and produces fused BEV feature maps. In our approach, the radar BEV features, denoted as 𝐅_BEV^P, and the image BEV features, denoted as 𝐅^I_BEV, have the same resolution, allowing for concatenation and fusion through convolutional operations. The resulting fused BEV features are subsequently fed into the detection head to predict 3D bounding boxes. In this work, we adopt the methodology of CenterPoint <cit.> to generate category-wise heatmaps and perform object detection. It is important to note that our fusion strategy and detection head are not limited to specific methods. For instance, our model can also incorporate attention-based fusion techniques and employ an anchor-based detection head.
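A minimal sketch of this concatenation-based fusion is given below; the layer choices (3×3 convolution, batch normalization, ReLU) are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Fuses same-resolution radar and image BEV features by concatenation and convolution."""
    def __init__(self, c_radar: int, c_img: int, c_out: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_radar + c_img, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_radar_bev, f_img_bev):        # both (B, C, X, Y), same resolution
        fused = self.conv(torch.cat([f_radar_bev, f_img_bev], dim=1))
        return fused                                  # passed on to the CenterPoint-style head
```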
§ EXPERIMENTS AND ANALYSIS
§.§ Dataset and Evaluation Metrics
Dataset: In this study, we utilize two datasets, View-of-Delft (VoD) <cit.> and TJ4DRadSet <cit.>, to evaluate the performance of our proposed model. These datasets are designed for autonomous driving and encompass data from various sensors, including LiDAR points, 4D radar points, and camera images. Each object in the datasets is annotated with its corresponding category, a 3D bounding box, and a tracking ID. Moreover, the datasets provide coordinate transformation matrices between different sensors.
The VoD dataset encompasses the three object categories used in our experiments: car, pedestrian, and cyclist. The TJ4DRadSet additionally includes a truck class. It also presents a more diverse range of driving scenarios than VoD. Notably, it exhibits significant variations in lighting conditions throughout the dataset, as well as different road types such as crossroads and elevated roads. Consequently, the 3D object detection task becomes considerably more challenging when working with the TJ4DRadSet dataset.
For both datasets, we adopt the official data splits provided. Specifically, the VoD dataset comprises 5139 frames for training and 1296 frames for validation. Since the official test server for the VoD dataset is not yet released, evaluations and analyses are performed solely on the validation set. In the case of the TJ4DRadSet, the training set consists of 5717 frames, while the test set encompasses 2040 frames.
Evaluation Metrics: Our proposed model is evaluated using specific metrics for each dataset.
For the VoD dataset, there are two official evaluation metrics: AP in the entire annotated area (EAA AP) and AP in the driving corridor (RoI AP). The driving corridor, considered as a region of interest (RoI), is located close to the ego-vehicle and is defined as a specific area, D_RoI={(x,y,z)|-4m<x<4m,z<25m}, within the camera coordinate system. The Intersection over Union (IoU) thresholds used in the calculation of AP are 0.5, 0.25, and 0.25 for cars, pedestrians, and cyclists, respectively. These thresholds specify the minimum overlap required between a predicted bounding box and a ground-truth box for the prediction to be counted as a true positive (TP).
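The driving-corridor membership test behind the RoI AP amounts to a simple coordinate filter; the sketch below is illustrative (the argument layout is an assumption) and is not part of the official evaluation code.

```python
def in_driving_corridor(centers_cam):
    """Boolean mask for box centres (N, 3), in camera coordinates and metres, inside D_RoI."""
    x, z = centers_cam[:, 0], centers_cam[:, 2]
    return (x > -4.0) & (x < 4.0) & (z < 25.0)
```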
In the case of the TJ4DRadSet dataset, evaluation metrics include 3D AP and BEV AP for different object classes within a range of 70 meters. The IoU thresholds for cars, pedestrians, and cyclists follow the same values as those used in the VoD dataset. Additionally, for the truck class, the IoU threshold is set to 0.5.
§.§ Implementation Details
The model implementation is based on MMDetection3D <cit.>, an open-source framework designed for 3D object detection tasks.
Hyper-parameter Settings: The hyper-parameters are determined following the official guidelines of the VoD dataset. The point cloud range (PCR) is set to a specific range, D_PCR={(x,y,z)|0<x<51.2m, -25.6m<y<25.6m, -3m<z<2m}, in the radar coordinate system. The pillar size in the voxelization process of radar points is defined as 0.16m×0.16m. The stride of the radar feature extractor, which consists of the backbone and neck, is adjusted to 2 to achieve a final BEV resolution of 160×160.
For the detection head, we utilize the CenterPoint <cit.> framework. During training, the minimum Gaussian radius for generating ground-truth heatmaps is set to 2. During inference, the top 1000 detections are considered, and a post-processing step with non-maximum suppression (NMS) is applied. The distance thresholds for NMS are set to 4m, 0.3m, and 0.85m for cars, pedestrians, and cyclists, respectively.
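The class-wise, distance-threshold suppression mentioned above can be sketched as a greedy procedure; this formulation is an assumption for illustration and is not necessarily the exact post-processing routine used by the CenterPoint head.

```python
import torch

def distance_nms(centers, scores, dist_thresh, topk=1000):
    """Greedily keep detections whose BEV centres are at least dist_thresh metres apart."""
    order = scores.argsort(descending=True)[:topk]
    keep = []
    for idx in order.tolist():
        if all(torch.norm(centers[idx] - centers[k]) >= dist_thresh for k in keep):
            keep.append(idx)
    return keep

# e.g. keep_cars = distance_nms(car_centers, car_scores, dist_thresh=4.0)
```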
Regarding the TJ4DRadSet dataset, the PCR is set to D_PCR={(x,y,z)|0<x<69.12m, -39.68m<y<39.68m, -4m<z<2m}, and the other hyper-parameters remain consistent with those used in the VoD dataset.
Training Details: During training, both images and radar points are normalized with the mean and standard deviation values of the corresponding data in the whole training set before being fed into the model. Radar points and ground-truth bounding boxes outside the image view are filtered out to ensure data consistency. Random horizontal flipping is applied as a data augmentation technique for both input data and BEV features. The model is trained for 80 epochs using the AdamW optimizer and StepLR scheduler. The batch size is set to 6, and the initial learning rate is set to 1e-3. It is important to note that the image backbone and neck are loaded from a pre-trained model, and their parameters are frozen to prevent overfitting.
§.§ Results and Analysis
Results on VoD: The experimental results on the VoD <cit.> validation set are presented in Table <ref>. We compare our proposed LXL model and its single-modal baseline LXL-R, which removes the image branch, occupancy net, and fusion module, with other models.
The RoI AP for cars and cyclists is relatively high for LXL-R, indicating that 4D radar alone is effective in perceiving the environment at close range.
However, the RoI AP for pedestrians is limited for two main reasons. Firstly, pedestrians are small in the BEV representation, often occupying only a single grid cell or even a fraction of one, making it challenging for the network to accurately regress bounding boxes.
Additionally, millimeter waves have weak reflections on non-metallic objects, resulting in sparse and less accurate measurements from pedestrians.
Another observation is that the radar-modal-only model performs poorly in terms of EAA AP for all categories, highlighting the challenges of detecting far-away objects due to the sparsity and noise in radar points.
Upon fusing camera images with radar data, the detection results of different models are improved, particularly in the EAA metric. Compared to RCFusion <cit.>, the latest benchmark on 3D object detection with 4D imaging radar and camera fusion, our LXL model achieves higher detection accuracy across almost all categories and evaluation regions.
Notably, the most significant performance gains are observed in the EAA AP for pedestrians and cyclists and the RoI AP for pedestrians.
These improvements suggest that dense images with rich semantic information can compensate for the sparsity and noise in radar points, enhancing radar perception for objects that are porous, non-metallic, or located at a distance.
Additionally, the precise image view transformation achieved through the use of image depth distribution maps and radar 3D occupancy grids amplifies the effectiveness of fusion with images. These results and analyses underscore the superiority of our “radar occupancy-assisted depth-based sampling" view transformation strategy.
Fig. <ref> showcases the visualization results of our LXL model, demonstrating its accurate detection of various object classes.
Notably, in some cases, LXL even detects true objects that are not labeled (e.g., the bottom-right cyclist in the second row of the image).
Moreover, when the radar point cloud is sparse, LXL has the ability to leverage camera information to detect objects. It is also able to utilize radar measurements to detect objects occluded in the camera view. Therefore, our model effectively leverages the advantages of both modalities to reduce missed detections and improve detection accuracy.
Results on TJ4DRadSet: To evaluate the generalization ability of our proposed model, we conduct additional experiments on the TJ4DRadSet <cit.> dataset. Table <ref> presents the performance of different methods on the test set of TJ4DRadSet, and Fig. <ref> provides visualizations of the detection results in various scenarios. These results demonstrate the effectiveness of our model in fusing radar and camera information for 3D object detection, even under challenging lighting conditions such as darkness or excessive illumination.
To further investigate the influence of lighting conditions and object distances on our LXL model, we analyze the detection results on TJ4DRadSet.
Specifically, we divide the test set into three subsets based on the brightness of the scenarios: dark, standard, and over-illuminated (referred to as “Shiny" in Table <ref>). These subsets account for approximately 15%, 60%, and 25% of the entire test set, respectively.
We report the detection accuracy on these subsets in Table <ref>. To mitigate the influence of road conditions across subsets, we also include the performance of LXL-R, which is less affected by lighting conditions, in the table.
By comparing the results of LXL with LXL-R on the same subset, we observe that image information is beneficial in normal lighting conditions, as expected.
Interestingly, even in dark scenarios, fusion with images brings some performance gain because the headlights and taillights of vehicles provide valuable cues for object classification and localization. However, in cases of excessive illumination, the performance deteriorates due to unclear images under such conditions.
To address this issue, a simple rule-based approach could be employed, such as switching to the radar-only LXL-R when an illumination-based image-quality measure falls below a certain threshold.
As there are few studies about camera and 4D radar fusion-based 3D object detection, we aim to improve the overall performance here, and robustness against image quality degradation is not the primary focus of this work. Improving model robustness will be a subject of our future research.
Furthermore, we evaluate our model on objects at different distances from the ego-vehicle and present the results in Table <ref>. The LXL model exhibits higher detection accuracy than LXL-R for objects at almost all distances, and the performance decreases as the distance increases due to the sparsity of radar points. Moreover, as the semantic information from images aids in identifying objects at a distance, there are fewer missed detections and more TPs for far-away objects. In contrast, the radar-only modality demonstrates some ability to detect objects at medium range, and the introduction of images primarily improves bounding box regression accuracy. Since the number of TPs significantly impacts the AP, long-range objects benefit more from multi-modal fusion than medium-range objects.
§.§ Ablation Study
In this subsection, we perform several experiments to validate the effectiveness of key design choices in our model on the VoD <cit.> dataset. Specifically, we focus on two aspects: the image feature lifting strategy and the utilization of radar in the image branch.
We investigate the commonly used geometrical feature lifting strategies, “sampling" and “splatting", as described in Section <ref>. For the “splatting" process, we follow the implementation approach employed in LSS <cit.>.
Table <ref> presents the results of these experiments. While “sampling" exhibits slightly lower performance in terms of RoI AP compared to “splatting", it significantly outperforms “splatting" in terms of EAA APs. This finding suggests that while “splatting" may have a slight advantage at short distances, its performance deteriorates significantly as the distance increases, leading to lower performance compared to “sampling" in a broader range.
This observation can be attributed to the characteristic of splatted point clouds becoming sparser with increasing distance. After pillarization, a considerable number of far-away BEV grids may remain empty, as illustrated in Fig. <ref>. In contrast, “sampling" ensures that each 3D voxel is associated with a sampled image feature, as long as the corresponding grid falls within the camera view. Consequently, “sampling" proves to be more effective in capturing information across a wide range.
Regarding radar assistance in the image branch, we compare our “radar occupancy-assisted sampling" method with two alternative approaches. One alternative, referred to as “Depth Supervise" in Table <ref>, is similar to the approach used in BEVDepth <cit.>. It leverages radar points to generate supervision signals for image depth distribution prediction. Specifically, the radar points are first transformed into the image coordinate system. Subsequently, for each projected radar point, we identify the nearest pixel and assign the radar depth as the ground-truth depth for that pixel.
In cases where multiple radar points correspond to a single pixel, we compute the average depth to determine its ground-truth value.
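This “Depth Supervise" baseline can be sketched as below; the function name and the 3×4 projection matrix convention are assumptions, while the nearest-pixel assignment and the averaging of points falling on the same pixel follow the description above.

```python
import torch

def radar_depth_map(radar_xyz, proj, img_hw):
    """Builds a sparse 'ground-truth' depth map by projecting radar points onto the image.

    radar_xyz: (N, 3) radar points in the radar coordinate system.
    proj:      (3, 4) radar-to-image projection matrix.
    img_hw:    (H, W) image size; pixels hit by several points receive their average depth.
    """
    H, W = img_hw
    homo = torch.cat([radar_xyz, radar_xyz.new_ones(len(radar_xyz), 1)], dim=1)  # (N, 4)
    cam = homo @ proj.t()                          # (N, 3) = [u*d, v*d, d]
    d = cam[:, 2]
    valid = d > 1e-3                               # keep only points in front of the camera
    u = (cam[valid, 0] / d[valid]).round().long()
    v = (cam[valid, 1] / d[valid]).round().long()
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    flat = v[inside] * W + u[inside]
    depths = d[valid][inside]
    depth_sum = torch.zeros(H * W).scatter_add_(0, flat, depths)
    count = torch.zeros(H * W).scatter_add_(0, flat, torch.ones_like(depths))
    return (depth_sum / count.clamp(min=1)).view(H, W)   # zero where no radar point projects
```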
However, we find that reducing the depth loss during training using this method is challenging. This difficulty arises due to the inherent noisiness and sparsity of radar points.
The noise in the radar measurements leads to inaccuracies in the derived ground-truth depths, while the sparsity of radar points poses challenges for the convergence of the depth estimation network.
Consequently, this alternative method only yields a slight improvement in detection accuracy compared to the approach without radar assistance.
Another alternative approach involves generating radar 3D occupancy grids by following the method in CRN <cit.>, namely "3D Occupancy Grids (CRN)" in Table <ref>, which differs from our method. The raw radar points are projected onto the image plane and voxelized to match the shape of the image depth distribution maps. Subsequently, sparse convolutions are employed to generate radar 3D occupancy grids in the image coordinate system, and tri-linear sampling is applied for image-to-radar coordinate transformation. It is important to note two differences between the aforementioned method and the original approach in CRN <cit.>. Firstly, since the radar points come from 4D radars and contain height information, 3D occupancy grids are generated instead of 2D occupancy maps. Secondly, the feature lifting method here is “sampling" rather than “splatting", so the occupancy grids need to be resampled to transform back to the radar coordinate system before being multiplied with the 3D image features. Nonetheless, the underlying idea is the same as CRN.
Compared to the “Depth Supervise" approach discussed earlier, the performance of the “3D Occupancy Grids (CRN)" method is even more significantly affected by the sparsity of radar points, due to the larger number of empty grids in 3D space. Furthermore, this method requires a time-consuming projection and voxelization process, whereas our approach only relies on a simple occupancy net to predict radar 3D occupancy grids in the radar coordinate system directly. Consequently, our “radar occupancy-assisted sampling" strategy offers performance and inference speed advantages.
§ CONCLUSION
In this paper, a new camera and 4D imaging radar fusion model, namely LXL, is proposed for 3D object detection. It is shown that LXL outperforms existing works by a large margin, mainly because its elaborate “radar occupancy-assisted depth-based sampling" view transformation strategy can effectively transform image PV features into BEV with the aid of predicted image depth distribution maps and radar 3D occupancy grids. This design demonstrates that there is considerable room for improving the “sampling" strategy, as a small enhancement can significantly boost the view transformation.
The proposed LXL provides a framework capable of inspiring subsequent research in camera and 4D imaging radar fusion-based 3D object detection.
Future work will focus on strengthening the robustness of LXL by applying an attention-based transformer to achieve adaptive interaction between the two modalities.
IEEEtran
60
RadarInsSeg
J. Liu, W. Xiong, L. Bai, Y. Xia, T. Huang, W. Ouyang, and B. Zhu, “Deep instance segmentation with automotive radar detection points,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 1, pp. 84-94, 2023.
RadarInsSeg2
W. Xiong, J. Liu, Y. Xia, T. Huang, B. Zhu, and W. Xiang, “Contrastive learning for automotive mmWave radar detection points based instance segmentation,” in Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), 2022, pp. 1255-1261.
RaLiBEV
Y. Yang, J. Liu, T. Huang, Q. L. Han, G. Ma, and B. Zhu, “RaLiBEV: Radar and LiDAR BEV fusion learning for anchor box free object detection systems,” 2022, arXiv:2211.06108.
GNN-PMB
J. Liu, L. Bai, Y. Xia, T. Huang, B. Zhu, and Q. L. Han, “GNN-PMB: A simple but effective online 3D multi-object tracker without bells and whistles," IEEE Transactions on Intelligent Vehicles, vol. 8, no. 2, pp. 1176-1189, 2023.
automotive_radar_survey
S. Sun, A. P. Petropulu, and H. V. Poor, “MIMO radar for advanced driver-assistance systems and autonomous driving: Advantages and challenges,” IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 98–117, 2020.
4D_radar_overview
Z. Han, J. Wang, Z. Xu, S. Yang, L. He, S. Xu, and J. Wang, “4D millimeter-wave radar in autonomous driving: A survey,” 2023, arXiv:2306.04242.
VoD
A. Palffy, E. Pool, S. Baratam, J. F. Kooij, and D. M. Gavrila, “Multi-class road user detection with 3+1D radar in the View-of-Delft dataset,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4961–4968, 2022.
RPFA-Net
B. Xu, X. Zhang, L. Wang, X. Hu, Z. Li, S. Pan, J. Li, and Y. Deng, “RPFA-Net: A 4D radar pillar feature attention network for 3D object detection,” in Proceedings of the IEEE International Intelligent Transportation Systems Conference (ITSC), 2021, pp. 3061–3066.
RadarMFNet
B. Tan, Z. Ma, X. Zhu, S. Li, L. Zheng, S. Chen, L. Huang, and J. Bai, “3D object detection for multi-frame 4D automotive millimeter-wave radar point cloud,” IEEE Sensors Journal, 2022, doi: 10.1109/JSEN.2022.3219643.
CaDDN
C. Reading, A. Harakeh, J. Chae, and S. L. Waslander, “Categorical depth distribution network for monocular 3D object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 8555–8564.
BEVDet
J. Huang, G. Huang, Z. Zhu, and D. Du, “BEVDet: High-performance multi-camera 3D object detection in bird-eye-view,” 2023, arXiv:2112.11790.
M2BEV
E. Xie, Z. Yu, D. Zhou, J. Philion, A. Anandkumar, S. Fidler, P. Luo, and J. M. Alvarez, “M2BEV: Multi-camera joint 3D detection and segmentation with unified birds-eye view representation,” 2022, arXiv:2204.05088.
BEVFormer
Z. Li, W. Wang, H. Li, E. Xie, C. Sima, T. Lu, Y. Qiao, and J. Dai, “BEVFormer: Learning bird’s-eye-view representation from multi-camera images via spatio-temporal transformers,” in Proceedings of the 17th European Conference on Computer Vision (ECCV). Springer, 2022, pp. 1–18.
PolarFormer
Y. Jiang, L. Zhang, Z. Miao, X. Zhu, J. Gao, W. Hu, and Y.-G. Jiang, “PolarFormer: Multi-camera 3D object detection with polar transformers,” 2022, arXiv:2206.15398.
LSS
J. Philion and S. Fidler, “Lift, Splat, Shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3D,” in Proceedings of the 16th European Conference on Computer Vision (ECCV). Springer, 2020, pp. 194–210.
BEVDepth
Y. Li, Z. Ge, G. Yu, J. Yang, Z. Wang, Y. Shi, J. Sun, and Z. Li, “BEVDepth: Acquisition of reliable depth for multi-view 3D object detection,” 2022, arXiv:2206.10092.
CRN
Y. Kim, S. Kim, J. Shin, J. W. Choi, and D. Kum, “CRN: Camera radar net for accurate, robust, efficient 3D perception,” in Proceedings of the International Conference on Learning Representations (ICLR), Workshop on Scene Representations for Autonomous Driving, 2023.
Simple-BEV
A. W. Harley, Z. Fang, J. Li, R. Ambrus, and K. Fragkiadaki, “Simple-BEV: What really matters for multi-sensor BEV perception?” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2023.
TJ4DRadSet
L. Zheng, Z. Ma, X. Zhu, B. Tan, S. Li, K. Long, W. Sun, S. Chen, L. Zhang, M. Wan, et al., “TJ4DRadSet: A 4D radar dataset for autonomous driving,” in Proceedings of the IEEE 25th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2022, pp. 493–498.
M3D-RPN
G. Brazil and X. Liu, “M3D-RPN: Monocular 3D region proposal network for object detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9287–9296.
IPM
H. A. Mallot, H. H. Bülthoff, J. Little, and S. Bohrer, “Inverse perspective mapping simplifies optical flow computation and obstacle detection,” Biological Cybernetics, vol. 64, no. 3, pp. 177–185, 1991.
Pseudo-LiDAR
Y. Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Q. Weinberger, “Pseudo-LiDAR from visual depth estimation: Bridging the gap in 3D object detection for autonomous driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 8445–8453.
Cam3DRadFusion1
T.-Y. Lim, A. Ansari, B. Major, D. Fontijne, M. Hamilton, R. Gowaikar, and S. Subramanian, “Radar and camera early fusion for vehicle detection in advanced driver assistance systems,” in Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), Machine Learning for Autonomous Driving Workshop, vol. 2, 2019, p. 7.
Cam3DRadFusion2
R. Nabati and H. Qi, “Radar-camera sensor fusion for joint object detection and distance estimation in autonomous vehicles,” 2020, arXiv:2009.08428.
CenterFusion
R. Nabati and H. Qi, “CenterFusion: Center-based radar and camera fusion for 3D object detection,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1527–1536.
CramNet
J.-J. Hwang, H. Kretzschmar, J. Manela, S. Rafferty, N. Armstrong-Crews, T. Chen, and D. Anguelov, “CramNet: Camera-radar fusion with ray-constrained cross-attention for robust 3D object detection,” in Proceedings of the 17th European Conference on Computer Vision (ECCV). Springer, 2022, pp. 388–405.
RCBEV
T. Zhou, J. Chen, Y. Shi, K. Jiang, M. Yang, and D. Yang, “Bridging the view disparity between radar and camera features for multi-modal fusion 3D object detection,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 2, pp. 1523–1535, 2023.
RADIANT
Y. Long, A. Kumar, D. Morris, X. Liu, M. Castro, and P. Chakravarty, “RADIANT: Radar-image association network for 3D object detection,” in Proceedings of the 37th AAAI Conference on Artificial Intelligence, 2023.
MVFusion
Z. Wu, G. Chen, Y. Gan, L. Wang, and J. Pu, “MVFusion: Multi-view 3D object detection with semantic-aligned radar and camera fusion,” 2023, arXiv:2302.10511.
CRAFT
Y. Kim, S. Kim, J. W. Choi, and D. Kum, “CRAFT: Camera-radar 3D object detection with spatio-contextual fusion transformer,” 2022, arXiv:2209.06535.
TransCAR
P. Su, M. Daniel, and R. Hayder, “TransCAR: Transformer-based camera-and-radar fusion for 3D object detection,” 2023, arXiv:2305.00397.
Astyx
M. Meyer and G. Kuschk, “Automotive radar dataset for deep learning based 3D object detection,” in Proceedings of the IEEE 16th European Radar Conference (EuRAD), 2019, pp. 129–132.
RADIal
J. Rebut, A. Ouaknine, W. Malik, and P. Pérez, “Raw high-definition radar for multi-task learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17021–17030.
K-radar
D.-H. Paek, S.-H. Kong, and K. T. Wijaya, “K-radar: 4D radar object detection for autonomous driving in various weather conditions,” in Proceedings of the 36th Conference on Neural Information Processing Systems (NeuIPS), Datasets and Benchmarks Track, 2022.
SECOND
Y. Yan, Y. Mao, and B. Li, “SECOND: Sparsely embedded convolutional detection,” Sensors, vol. 18, no. 10, p. 3337, 2018.
CenterPoint
T. Yin, X. Zhou, and P. Krahenbuhl, “Center-based 3D object detection and tracking,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 11784–11793.
PointPillars
A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, “PointPillars: Fast encoders for object detection from point clouds,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12697–12705.
PointNet
C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 652–660.
InterFusion
L. Wang, X. Zhang, B. Xv, J. Zhang, R. Fu, X. Wang, L. Zhu, H. Ren, P. Lu, J. Li, and H. Liu, “InterFusion: Interaction-based 4D radar and LiDAR fusion for 3D object detection,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 12247–12253.
M2-Fusion
L. Wang, X. Zhang, J. Li, B. Xv, R. Fu, H. Chen, L. Yang, D. Jin, and L. Zhao, “Multi-modal and multi-scale fusion 3D object detection of 4D radar and LiDAR for autonomous driving,” IEEE Transactions on Vehicular Technology, pp. 1–15, 2022.
self-attentions
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, 2017, pp. 30.
Cam4DRadFusion
H. Cui, J. Wu, J. Zhang, G. Chowdhary, and W. R. Norris, “3D detection and tracking for on-road vehicles with a monovision camera and dual low-cost 4D mmwave radars,” in Proceedings of the IEEE International Intelligent Transportation Systems Conference (ITSC). IEEE, 2021, pp. 2931–2937.
SSMA
A. Valada, R. Mohan, and W. Burgard, “Self-supervised model adaptation for multi-modal semantic segmentation,” International Journal of Computer Vision, vol. 128, no. 5, pp. 1239–1285, 2020.
RCFusion
L. Zheng, S. Li, B. Tan, L. Yang, S. Chen, L. Huang, J. Bai, X. Zhu, and Z. Ma, “RCFusion: Fusing 4D radar and camera with bird’s-eye view features for 3D object detection,” IEEE Transactions on Instrumentation and Measurement, 2023, doi: 10.1109/TIM.2023.3280525.
YOLOX
Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, “YOLOX: Exceeding YOLO series in 2021,” 2021, arXiv:2107.08430.
CSPNet
C.-Y. Wang, H.-Y. M. Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, “CSPNet: A new backbone that can enhance learning capability of CNN,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) workshops, 2020, pp. 390–391.
PAN
S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8759–8768.
OFT
T. Roddick, A. Kendall, and R. Cipolla, “Orthographic feature transform for monocular 3D object detection,” in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, September 2019, pp. 59.1–59.13.
mmdet3d
MMDetection3D Contributors, “MMDetection3D: OpenMMLab next generation platform for general 3D object detection,” https://github.com/open-mmlab/mmdetection3d, 2020.
|
http://arxiv.org/abs/2307.01162v1
|
20230703171229
|
An $\ell^2$ bound on the influence of edges in first-passage percolation on $\mathbb{Z}^d$
|
[
"Barbara Dembin",
"Dor Elboim",
"Ron Peled"
] |
math.PR
|
[
"math.PR"
] |
We study the probability that a geodesic passes through a prescribed edge in first-passage percolation on ^d, for general d≥ 2. Our main result is a non-trivial power-law upper bound for the ℓ^2 norm of these probabilities, under regularity conditions on the weight distribution. This addresses a problem raised by Benjamini–Kalai–Schramm (2003). We also demonstrate our methods by deriving a mild strengthening of a lower bound on transversal fluctuations due to Licea–Newman–Piza (1996).
§ INTRODUCTION
First-passage percolation is a model for a random metric space, formed by a random perturbation of an underlying base space. Since its introduction by Hammersley–Welsh in 1965 <cit.>, it has been studied extensively in the probability and statistical physics literature. We refer to <cit.> for general background and to <cit.> for more recent results.
We study first-passage percolation on the hypercubic lattice (^d,E(ℤ^d)), d≥ 2, in an independent and identically distributed (IID) random environment. The model is specified by a weight distribution G, which is a probability measure on the non-negative reals. It is defined by assigning each edge e∈ E(ℤ^d) a random passage time t_e with distribution G, independently between edges. Then, each finite path p in ℤ ^d is assigned the random passage time
T(p):=∑ _e∈ p t_e,
yielding a random metric T on ^d by setting the passage time between u,v∈ℤ ^d to
T(u,v):=inf_p T(p),
where the infimum ranges over all finite paths connecting u and v. Any path achieving the infimum is termed a geodesic between u and v. A unique geodesic exists when G is atomless and will be denoted γ(u,v). The focus of first-passage percolation is the study of the large-scale properties of the random metric T and its geodesics.
The passage time of the geodesic between given endpoints is naturally a function of the weights assigned to all edges. To what extent is this passage time influenced by the weight assigned to a specific edge? This notion is formalized here by the probability that the geodesic passes through that edge.
It is clear that the influence of edges near the endpoints cannot be uniformly small, but it is not clear whether the influence diminishes uniformly for edges far from the endpoints. This issue was highlighted by Benjamini–Kalai–Schramm <cit.> in their seminal study of the variance of the passage time, where the following problem, later termed the BKS midpoint problem, was posed: Consider the geodesic between 0 and v. Does the probability that it passes at distance 1 from v/2 tend to zero as ‖v‖→∞?
On the square lattice (d=2), the BKS midpoint problem was resolved positively by Damron–Hanson <cit.> under the assumption that the limit shape boundary is differentiable and then resolved unconditionally by Ahlberg–Hoffman <cit.>. Recently, assuming that the limit shape has more than 32 extreme points, the authors <cit.> provided a quantitative version, showing that the probability that the geodesic between two given points passes through a given edge is smaller than a power of the distance between the points and the edge.
The BKS midpoint problem remains open in dimensions d≥ 3.
In the same paper <cit.>, Benjamini–Kalai–Schramm also raised the following “averaged” version of the midpoint problem[The phrasing in <cit.> differs slightly to fit the setup of Bernoulli weights used there.]:
show that there exist C,c>0 such that for each v∈ℤ^d∖{0},
ℙ(e∈γ(0,v))≤ C‖v‖^{-c} for all but at most C‖v‖/log‖v‖ edges e∈ E(ℤ^d).
A positive answer to this problem would have simplified the proof of the main result of <cit.>. As such an answer was lacking, the authors of <cit.> resorted to an averaging trick to circumvent the difficulty. The problem and its variants, along with circumventing solutions, also arose in later adaptations of the BKS method <cit.>. Problem (<ref>) was further highlighted in the book <cit.> where it is pointed out that the only known upper bound on the ℓ^2-norm of the influences is the trivial
∑_e∈ E(ℤ^d)ℙ(e∈γ(0,v))^2≤ C‖v‖,
which follows from the facts that each of these probabilities is at most 1 and that the expected number of edges in γ(0,v) is of the order of ‖v‖.
A power-law improvement to this estimate directly implies a positive answer to Problem (<ref>). The main result of this paper, Theorem <ref> below, provides such an improvement for a large class of weight distributions, in all dimensions d≥ 2.
We proceed to state our main result, which is proved under the following assumption on the weight distribution G. We assume that for some b>a>0 and α>0,
G is supported on the interval [a,b] and is absolutely continuous
with a density ρ satisfying ρ (x)≥α for almost all x∈ [a,b].
For definiteness, we let ‖v‖ denote the ℓ^2 norm of v.
Suppose that G satisfies (<ref>) and the dimension d≥ 2. Then, there exists C>0, depending only on d and G, such that for all v∈ℤ^d with ‖v‖≥ 2 we have
∑_e∈ E(ℤ^d)ℙ( e∈γ(0,v) )^2 ≤ C(log‖v‖)^d ‖v‖^{2/(d+1)}.
In particular, for any a>0, the number of edges e for which ℙ( e∈γ(0,v) ) ≥‖v‖^{-a} is at most C(log‖v‖)^d ‖v‖^{2/(d+1)+2a}.
§.§ Relation with transversal fluctuations lower bound
To gain a better understanding of the bound (<ref>) it is instructive to have the following picture in mind. It is expected that the transversal fluctuations of the geodesic γ(0,v) are of order ‖v‖^ξ for a dimension-dependent exponent ξ and that the geodesic is “roughly equally likely to be anywhere in this range”. Thus, edges which are at distance of order at most ‖v‖^ξ from the line segment connecting 0 and v should have probability of order ‖v‖^{-ξ(d-1)} to be visited by the geodesic while the probability to visit other edges should exhibit rapid decay in their (rescaled) distance from the line segment[While expected, this is not proved - such an estimate would include, as a special case, a quantitative version of the BKS midpoint problem with optimal exponent.]. Consequently, one expects that
∑_e∈ E(ℤ^d)ℙ( e∈γ(0,v) )^2 ≤ C‖v‖^{1+ξ(d-1)}·‖v‖^{-2ξ(d-1)}=C‖v‖^{1-ξ(d-1)}.
The bound (<ref>) is of this type with the estimate (up to a poly-logarithmic factor)
ξ≥ 1/(d+1).
The lower bound (<ref>) is not expected to be optimal: it is predicted that ξ=2/3 for d=2 and that ξ≥1/2 for all d≥ 3 (see the discussion in <cit.>). Still, a version of it is the best currently known lower bound on transversal fluctuations for a point-to-point geodesic in a fixed direction, established in the work of Licea–Newman–Piza <cit.>. However, as we elaborate next, the version of (<ref>) proved in <cit.> is too weak to imply (<ref>), whence the need for Theorem <ref>.
The lower bound on transversal fluctuations stated in <cit.> is
ξ^(0)≥ 1/(d+1)
where
ξ^(0)=sup{ a≥ 0 : lim_{n→∞}sup_{v∈ℤ^d, ‖v‖≥ n}ℙ( γ(0,v)⊂cyl(0,v,‖v‖^a) ) <1 },
and cyl(0,v,r) denotes the cylinder of radius r>0 around the line connecting 0 and v∈ℤ^d, defined as
cyl(0,v,r):={ w∈ℝ^d: ∃λ∈ℝ so that ‖w-λ v‖≤ r }.
The definition implies that for each positive a<ξ^(0), with uniformly positive probability over v∈ℤ^d∖{0}, the geodesic γ(0,v) contains at least one point outside of cyl(0,v,‖v‖^a). In comparison, the predicted bound (<ref>) yields a stronger conclusion, that for some c>0, the expected number of vertices of the geodesic γ(0,v) outside of cyl(0,v,c‖v‖^ξ) is at least linear in ‖v‖. This stronger conclusion, with 1/(d+1) replacing ξ and a poly-logarithmic correction, is also implied by our bound (<ref>). As it turns out, our methods also allow to derive this conclusion without the poly-logarithmic correction. As the proof provides a good illustration of our methods in a simpler context, we state this as a second result, which may be viewed as a (mildly) stronger version of the bound (<ref>) of <cit.>.
Suppose G satisfies (<ref>) and the dimension d≥ 2. Then, there exists c>0 and n_0≥ 1, depending only on d and G, such that for all v∈ℤ^d with ‖v‖≥ n_0,
ℙ( | γ(0,v)∖cyl(0,v,c‖v‖^{1/(d+1)})| ≥ c‖v‖) ≥ 1/5,
where | γ(0,v)∖cyl(0,v,r) | denotes the number of vertices of the geodesic γ (0,v) lying outside the cylinder cyl(0,v,r).
Licea–Newman–Piza <cit.> use martingale methods to prove their results (following Newman–Piza <cit.> and Aizenman and Wehr <cit.>) while our proofs rely on perturbing the weights using Lemma <ref> below. The approaches are different but share a common essence, related to the inequality χ≥ (1-(d-1)ξ)/2 derived by Wehr–Aizenman <cit.> (χ is the fluctuation exponent of the passage time). After inspecting the proof of the bound (<ref>) in <cit.> we think it may be adapted to also yield a version of Theorem <ref> (possibly under weaker assumptions). However, it is not clear to us whether the method there can also be adapted to yield a version of Theorem <ref>.
§.§ Remarks and open questions
* It would be interesting to remove the polylogarithmic factor in (<ref>). Besides the improved bound in Theorem <ref>, such an estimate would also directly imply Theorem <ref>.
* Our proofs continue to apply under somewhat weaker assumptions than (<ref>). First, the same proof applies when the assumption that the density ρ is bounded from below is replaced by the assumption that the distribution G is the image of the standard Gaussian distribution under an increasing Lipschitz function from ℝ to [a,b]. Second, with minor modifications to the proof, the left boundary of the support can be chosen to be a=0. We also tend to think, but have not established, that a variant of the proof will hold for general absolutely-continuous distributions with sufficiently light tail but this will require an improvement to our Lemma <ref>.
* Of course, it would be significant to resolve the BKS midpoint problem in dimensions d≥ 3 or to improve the lower bound (<ref>) on the transversal fluctuation exponent.
§ TOOLS
Throughout the proofs the constants C and c may depend on G and d and are regarded as generic constants in the sense that their value may change from one appearance to the next, with the value of C increasing and the value of c decreasing. However, constants labeled with a fixed number,
such as C_0, c_0, have a fixed value throughout the paper. The following lemma is the main technical tool required for the proof of the main theorems. The lemma is a Mermin–Wagner type argument and is taken from <cit.>.
Suppose that G satisfies (<ref>). Then, there exist C_0>0 and
* Borel subsets (B_δ)_δ>0 of [a,b] with lim_δ↓ 0G(B_δ)=1,
* For each τ∈ [0,1], an increasing bijection g^+_τ:[a,b]→ [a,b].
such that the following holds:
* For any τ∈[0,1], w≤ g^+_τ(w)≤ w+C_0τ for w∈ [a,b] and for all δ>0,
g^+_τ(w)≥ w+δτ for w∈ B_δ.
* For any p>1, an integer n≥ 1, a vector τ=(τ_1,…,τ_n)∈[0,1]^n and a Borel set A⊂ℝ^n we have
ℙ( ( g^+_τ _1(X_1),… ,g^+_τ _n(X_n) ) ∈ A )≥exp(-pτ^2/(2(p-1))) ·ℙ( (X_1,… ,X_n) ∈ A ) ^p,
where X_1,X_2,… ,X_n are i.i.d. random variables with distribution G.
For the proofs of Theorem <ref> and Theorem <ref>, we need the following claim. To this end, fix δ _0 >0 sufficiently small such that G (B_δ _0 ) ≥ 1-e^-20d and define the event
Ω :={[ ∀ k ≥log n and for every path Γ⊆ [-n^2,n^2]^d; of length k we have |{e∈Γ : t_e∈ B_δ _0}| ≥ k/2 ]}.
We have that ℙ (Ω ) ≥ 1-n^-5.
The proof is a simple union bound. For a fixed path Γ of length k we have that
|{e∈Γ : t_e∈ B_δ _0}|∼Bin( k,1-e^-20d)
and therefore
ℙ(|{e∈Γ : t_e∈ B_δ _0}|≤ k/2) ≤ 2^k e^-10dk≤ n^-10-3d (2d)^-k
The number of such paths Γ⊆ [-n^2,n^2]^d of length k is at most (2n^2+1)^d (2d)^k and therefore ℙ (Ω ) ≥ 1-n^-5 for all n large enough.
§ THE TRANSVERSAL FLUCTUATIONS
In this section we prove Theorem <ref>. Let v∈ℤ ^d such that n:=v is sufficiently large and let r>0 be the radius of the cylinder. Let ĥ∈ℝ ^d be an arbitrary unit vector orthogonal to v and let h∈ℤ^d be the closest integer point to 3rĥ. Finally, let
γ _0:=γ (0,v), γ _h:=γ (h,v+h), T_0:=T(0,v), T_h:=T(h,v+h).
This choice of the geodesic γ _h ensures that the cylinder corresponding to γ _0, cyl_0:=cyl(0,v,r) is disjoint from the cylinder corresponding to γ _h, cyl_h:=h+cyl(0,v,r) while the endpoints of these geodesics are at distance O(r) away.
Next, set ϵ :=δ _0 /(8C_0) and define the events
ℰ _0 :={ |γ _0 ∖cyl_0 |≤ϵ n } and ℰ _h :={ |γ _h ∖cyl_h |≤ϵ n }.
In order to prove Theorem <ref>, it suffices to show that if ℙ (ℰ _0)≥ 4/5 then r≥ c n^1/(d+1). Thus, let us assume that ℙ (ℰ _0)≥ 4/5. Our goal will be to use Lemma <ref> in order to increase the weights in the cylinder cyl_0. Let N be the number of edges in the cylinder cyl_0 and note that N≤ Cnr^d-1. Let τ_e:=1/√(N)1{e∈cyl_0} and define the modified environment by t_e^+:=g^+_τ_e(t_e) for e∈ E(^d). Note that t_e^+=t_e outside of the cylinder cyl_0.
We let T^+(p) be the passage time of a path p in the modified environment (t_e^+)_e∈ E(ℤ ^d). We also let γ _0^+ and γ_h^+ be the corresponding geodesics in the modified environment and let T_0^+ and T_h^+ be the corresponding passage times in the modified environment. Finally, we let ℰ_0 ^+ be the analogue of the event ℰ _0 given in (<ref>) for the modified environment. That is, ℰ _0 ^+:={ |γ _0^+ ∖cyl_0|≤ϵ n }. First, we claim that the event ℰ_h ∩ℰ _0^+ ∩Ω holds with positive probability as long as n is sufficiently large. Indeed, using Lemma <ref> with p=2, and our assumption that ℙ (ℰ _0 )≥ 4/5 we obtain
ℙ(ℰ _0 ^+)≥ e^-1ℙ(ℰ _0)^2≥ 16/(25e).
Thus, by translation invariance and Claim <ref>
ℙ(ℰ _h∩ℰ_0 ^+∩Ω ) ≥ 16/(25e)-1/5 -n^-5>0.
The idea is that on the event ℰ_h ∩ℰ_0^+ ∩Ω, increasing the weights from t_e to t_e^+ will increase the passage time from 0 to v and will not affect the passage time from h to v+h by much. However, by the triangle inequality, these passage times must differ by at most Cr (before and after the change) which gives a lower bound on r.
More rigorously, on the event ℰ_h we have
T^+_h≤ T^+(γ_h)=∑_e∈γ_h t_e^+≤∑_e∈γ_h t_e+C_0τ _e ≤ T_h+C_0/√(N) |γ_h∩cyl_0 |≤ T_h + C_0ϵ n/√(N)=T_h + δ _0 n/8√(N).
Similarly, on the event ℰ_0 ^+∩Ω, using that t_e^+∈ [a,b] and therefore γ _0^+⊆ [-n^2,n^2]^d we have
T_0^+=∑ _e∈γ _0^+ t_e^+≥∑ _e∈γ _0^+ t_e+δ _0τ _e 1 {t_e∈ B_δ _0}= T(γ _0^+)+δ _0/√(N) |{ e∈γ _0^+ ∩cyl_0 :t_e∈ B_δ _0} |
≥ T_0+δ _0/√(N)(|γ _0^+|/2- |γ _0^+∖cyl_0| )≥ T_0+δ _0/√(N)(|γ _0^+|/2-ϵ n)≥ T_0+δ _0 n/4√(N),
where in the last inequality we used that |γ _0^+|≥ n and ϵ≤ 1/4. However, using the triangle inequality and the fact that t_e≤ b we obtain
T_h≤ T(h,0)+ T_0+ T(v,v+h)≤ T_0+2bh_1≤ T_0+Cr.
Combining the last three inequalities, we obtain that on the event ℰ_h ∩ℰ_0^+ ∩Ω,
T_h^+≤ T_0^++Cr- δ _0 n/8√(N)≤ T_h^+ +Cr- δ _0 n/8√(N),
where we used the triangle inequality once again, together with the bound t_e^+ ≤ b. Since the event ℰ_h ∩ℰ_0^+ ∩Ω occurs with positive probability we have
Cr≥δ _0 n/8√(N)≥cn/√(nr^d-1).
Rearranging we obtain that r≥ c n^ 1/(d+1) where c is a constant depending only on G and d. This concludes the proof.
§ THE ℓ ^2 NORM OF THE INFLUENCE OF EDGES
Let us start by briefly explaining the idea of the proof. Let v∈ℤ ^d and suppose that n:=v is sufficiently large. As in the proof of Theorem <ref>, we consider two geodesics γ _0:=γ (0,v) and γ _h:=γ (h,v+h) for some h∈ℤ ^ d.
Let p_e:=ℙ(e∈γ _0) and let p=(p_e)_e∈ E(ℤ^d). Our goal is to bound the ℓ ^2 norm p. Now, roughly speaking, one can increase the weight of each edge e by p_e/p without changing much the distribution of the environment (t_e)_e∈ E (ℤ^d). Indeed, we see that in Lemma <ref>, when τ is of order 1 the probabilities in the two environments are comparable. This change will increase the passage time T_0 by at least ∑ _e∈γ _0^+ p_e/p, where γ _0^+ is the geodesic in the increased environment. Thus, the expected increase will be
𝔼[∑ _e∈γ _0^+ p_e/p]≈𝔼[∑ _e∈γ _0 p_e/p]= 𝔼[∑ _e∈ E (ℤ ^d) 1 {e∈γ _0}p_e/p]= 1/p∑ _e∈ E (ℤ ^d) p_e^2=p,
where in the first approximation we used that the two environments are comparable. Similarly, the passage time T_h will be increased by at most ∑ _e∈γ _hp_e/p. Thus, using translation invariance, the expected increase will be at most
𝔼[∑ _e∈ E (ℤ ^d) 1 {e∈γ _h}p_e/p]= 1/p∑ _e∈ E (ℤ ^d)ℙ (e∈γ _h)p_e=1/p∑ _e∈ E (ℤ ^d) p_e-hp_e.
However, these passage times must differ by at most Ch (before and after increasing the weights) and therefore, as long as h≤ cp we obtain the inequality
∑ _e∈ E (ℤ ^d) p_e-hp_e≥ c ∑ _e∈ E (ℤ ^d) p_e^2.
The bound p^2≤ n^2/(d+1) (which is actually better than the bound in Theorem <ref> by a logarithmic factor) easily follows by summing the last inequality over h with h≤ cp and using the facts that geodesics have linear length and that the intersection of a geodesic with a box of side length r contains at most Cr edges. Indeed, we have
pn≥ cp·𝔼 |γ _0| = c p∑ _e∈ E (ℤ ^d)p_e ≥ c∑ _e∈ E (ℤ ^d)p_e ·𝔼[ | γ _0∩{ e-h :h≤ cp}| ]
=c∑ _e∈ E (ℤ ^d)p_e ∑ _h≤ cp p_e-h = c∑ _h≤ cp∑ _e∈ E (ℤ ^d) p_e-hp_e≥ c p^d ∑ _e∈ E (ℤ ^d) p_e^2=cp^d+2.
There are two main difficulties in carrying out this argument. The first difficulty, which causes the extra polylogarithmic factor in Theorem <ref>, is that one cannot increase the weights by p_e/p deterministically. Lemma <ref> only allows increasing each weight with high probability (on the event {t_e∈ B_δ}). It might be the case that along the geodesic, only edges with a small value of p_e will be increased and the total change in the passage time will be small. To overcome this issue we introduce the logarithmic averaging of p in (<ref>) and use the percolation argument in Claim <ref>, showing that on each path of logarithmic length enough edges will be increased.
The second difficulty is that the typical order of the random variable ∑ _e∈γ _0^+p_e can be much smaller than its expectation. That is, the main contribution to the expectation comes from rare events. Taking care of this requires the full strength of Lemma <ref> to control small probabilities. When the contribution to the expectation comes from rare events in which ∑ _e∈γ _0^+p_e is large, the inequality in (<ref>) deteriorates slightly but this is compensated by the fact that h can be chosen larger in this case (so that this difficulty does not lead to a loss in the final bound). This trade off can be seen in the statement of Lemma <ref> and in equations (<ref>) and (<ref>) below.
Let us move on to the precise argument. Recall that p_e:=ℙ (e∈γ _0 ) and let q_e be the logarithmic smoothing of p defined by
q_e:=max{p_e'·exp( -e'-e/log n) : e'∈ E(ℤ^d)}
where e'-e denote the ℓ ^2 distance between the centers of the edges e and e'.
Note that q_e is smooth in the sense that
0.1≤ q_e/q_e'≤ 10, for any e,e' with e-e'≤ 2 log n.
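For illustration, the smoothing can be computed by the following brute-force Python sketch (quadratic in the number of edges, so only suitable for a small example); representing edges by their midpoints and all concrete values below are assumptions made for the sake of the example.

# Brute-force logarithmic smoothing q_e = max_{e'} p_{e'} exp(-dist(e,e')/log n).
import numpy as np

def log_smoothing(midpoints, p, n):
    scale = np.log(n)
    q = np.empty_like(p)
    for i, x in enumerate(midpoints):
        dists = np.linalg.norm(midpoints - x, axis=1)
        q[i] = np.max(p * np.exp(-dists / scale))
    return q

rng = np.random.default_rng(0)
mids = np.array([[i + 0.5, j] for i in range(10) for j in range(10)], dtype=float)
p = rng.uniform(0.0, 1.0, size=len(mids))   # stand-in for the visit probabilities p_e
q = log_smoothing(mids, p, n=100)
print(q[:5])  # q_e >= p_e, and q_e/q_e' stays within [0.1, 10] for nearby edges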
The following inequality is the main ingredient required for the proof of Theorem <ref>.
There exists a constant C depending only on G and d such that
( ∑ _e∈ E(ℤ^d) q_e p_e )^d ≤ C ( ∑ _e∈ E(ℤ^d) q_e^2 ) ^d-1/2∑ _e∈ E(ℤ^d) q_e.
There exists a constant C depending only on G and d such that
∑ _e∈ E(ℤ^d) q_e ≤ Cn(log n)^d and ∑ _e∈ E(ℤ^d) q_e^2 ≤ C(log n)^d ∑ _e∈ E(ℤ^d) p_e ^2.
We postpone the proof of Claim <ref> and turn to prove Theorem <ref>.
Substituting the bounds in Claim <ref> into (<ref>) we obtain
( ∑ _e∈ E(ℤ^d) p_e^2 )^d ≤( ∑ _e∈ E(ℤ^d) q_e p_e )^d ≤ C n(log n)^d(d+1)/2( ∑ _e∈ E(ℤ^d) p_e^2 ) ^d-1/2.
The theorem follows by rearranging the last inequality.
In fact, if one could show that p is a “smooth" function in the logarithmic scale then the logarithmic factors in Claim <ref> and Theorem <ref> could be removed. Indeed, in this case q_e will not be much larger than p_e for most edges e.
For a path p define the function f(p):=∑ _e∈ p q_e and let
μ :=𝔼 [f(γ _0 )] =𝔼[ ∑ _e∈ E(ℤ^d) q_e 1 { e∈γ}] =∑ _e∈ E(ℤ^d) p_e q_e.
There exists a constant c_1>0 such that for all t>0 and h<c_1t/q we have
ℙ( f(γ _h) ≥ c_1t ) ≥ c_1·ℙ( f(γ _0) ≥ t )^3/2-n^-5.
Let ϵ >0 be sufficiently small and let h≤ϵ t/q. Let τ _e := q_e /q and define the weights t_e^+:=g^+_τ _e(t_e) for any e∈ E(ℤ ^d). Note that t_e^+=t_e for any edge e outside of [-n^2,n^2]^d. Indeed, the weight distribution is supported on [a,b] and therefore the geodesic to v cannot travel that far. As in the proof of Theorem <ref>, we let T^+(p) be the passage time of a path p in the modified environment (t_e^+)_e∈ E(ℤ ^d). We also let γ _0^+, γ _h^+ and T_0^+,T_h^+ be the corresponding geodesics and passage times in the modified environment. Recall the definition of δ _0 and Ω before Claim <ref>. We have that
T_0^+=T^+(γ _0^+) =∑ _e∈γ _0^+ t_e^+ ≥ T(γ _0^+) + ∑ _e∈γ _0^+ 1 {t_e∈ B_δ _0}δ _0τ _e
≥ T_0 + δ _0q^-1∑ _e∈γ _0^+ 1 {t_e∈ B_δ _0} q_e.
In order to estimate the last sum we decompose the path γ _0^+ into m:=⌊ |γ _0^+|/log n ⌋ edge disjoint paths Γ _1,… ,Γ _m such that for all i≤ m we have that log n ≤ |Γ _i|≤ 2log n. We also let e_i be the first edge of the path Γ _i. We have that t_e^+∈ [a,b] and therefore γ _0^+ ⊆ [-n^2,n^2]. Thus, on Ω for all i≤ m
∑ _e∈Γ _i 1 {t_e ∈ B_δ _0} q_e ≥ cq_e_i| { e∈Γ _i : t_e∈ B_δ _0}| ≥ c q_e_i |Γ _i| ≥ c∑ _e∈Γ _i q_e,
where in the first and last inequalities we used (<ref>) and the fact that any edge e∈Γ _i satisfies e-e_i≤ 2log n. Substituting this into (<ref>) we get that on the event { f(γ _0^+)≥ t }∩Ω
T_0^+≥ T_0 + cq^-1∑ _e∈γ _0^+ q_e ≥ T_0+c t / q.
Moreover, using that t_e ≤ b and the triangle inequality we obtain
T_0 ≥ T_h -2bh_1 ≥ T_h-Cϵ t/q.
Similarly, using that t_e^+≤ b we have
T_0^+≤ T_h^++2bh_1 ≤ T_h^++Cϵ t/q.
Combining (<ref>), (<ref>) and (<ref>) we obtain that on the event { f(γ _0^+)≥ t }∩Ω we have T_h^+ ≥ T_h+c t/q as long as ϵ is sufficiently small. On the other hand
T_h^+≤ T^+(γ _h)≤ T_h +∑_e∈γ_hC_0τ _e= T_h+C_0f(γ_h)/ q
and therefore on { f(γ _0^+)≥ t }∩Ω we have f(γ _h) ≥ϵ t as long as ϵ is sufficiently small.
Finally, by Lemma <ref> with p=3/2 we have
ℙ( f(γ _h) ≥ϵ t ) ≥ℙ( f(γ _0^+)≥ t, Ω) ≥ℙ( f(γ _0^+)≥ t )-n^-5≥ c·ℙ( f(γ _0)≥ t ) ^3/2-n^-5.
This finishes the proof of the lemma as c_1 can be chosen sufficiently small.
We can now prove Proposition <ref>.
Recall the definition of μ in (<ref>). We claim that there exists an integer -1 ≤ k ≤log n such that
ℙ( f(γ _0) ≥ 3^kμ) ≥ 4^-k-3.
Indeed, otherwise
μ =𝔼 [f(γ _0)] ≤𝔼[ f(γ _0) 1 { f(γ _0)≤μ /3 }] +∑ _k=-1^log n𝔼 [f(γ _0) 1 { 3^kμ≤ f(γ _0) ≤ 3^k+1μ}]
≤μ /3+ ∑ _k=-1^log n 3^k+1μ·ℙ( f(γ _0)≥ 3^kμ) ≤μ /3
+ 4^-2μ∑ _k=0^∞ (3/4)^k <μ,
where in the first inequality we used that f(γ _0) ≤ |γ _0|≤ Cn almost surely and that μ≥ c (since p_e≥ c for some edge e incident to 0). Fix -1≤ k ≤log n such that (<ref>) holds. By Lemma <ref> with t=3^kμ, for all h≤ c_13^kμ /q we have
∑ _e∈ E(ℤ ^d) q_ep_e-h =𝔼[ ∑ _e∈ E(ℤ ^d)q_e 1 {e∈γ _h}]=𝔼 [f(γ _h)] ≥ c_13^kμ·ℙ( f(γ _h) ≥ c_13^kμ)
≥ c_13^kμ·( c_1ℙ( f(γ _0 ) ≥ 3^kμ)^3/2-n^-5) ≥ c(3/8)^kμ,
where in the last inequality we used (<ref>) and that k≤log n. Letting Λ _e:={e+h:h≤ r} be the ball of radius r:=c_13^kμ /q around the edge e and summing the last inequality over h∈ℤ ^d with h≤ r we obtain
cr ^d (3/8)^kμ≤∑ _h≤ r∑ _e∈ E(ℤ ^d) q_ep_e-h = ∑ _e∈ E(ℤ ^d) q_e ·𝔼 [|γ∩Λ _e|] ≤ Cr ∑ _e∈ E(ℤ^d) q_e,
where in the last inequality we used that the portion of the geodesic between its first entry point to Λ _e and its last exit point cannot be longer than Cr, as the weights are supported in [a,b]. Rearranging we obtain
( ∑ _e∈ E(ℤ^d) q_e p_e )^d =μ ^d ≤ C (8/3^d)^kq^d-1∑ _e∈ E(ℤ^d) q_e ≤ Cq^d-1∑ _e∈ E(ℤ^d) q_e,
where we used that d≥ 2. This finishes the proof of the proposition.
It remains to prove Claim <ref>.
We have that
∑ _e∈ E(ℤ^d) q_e ≤∑ _e∈ E(ℤ ^d)∑ _e'∈ E(ℤ ^d) p_e'exp( -e'-e/log n)=∑ _e'∈ E(ℤ ^d) p_e'∑ _e∈ E(ℤ ^d)exp( -e'-e/log n)
≤ C log ^dn ∑ _e'∈ E(ℤ ^d) p_e' = C (log n)^d ·𝔼 |γ _0|≤ C n(log n)^d,
where in the last inequality we used that |γ _0|≤ Cn as the weights are supported in [a,b]. This finishes the proof of the first part of the claim.
For the second part we write
∑ _e∈ E(ℤ^d) q_e ^2 =∑ _e∈ E(ℤ ^d)max{p_e'^2 ·exp( -2e'-e/log n) : e'∈ E(ℤ^d)}
≤∑ _e∈ E(ℤ^d)∑ _e'∈ E(ℤ^d) p_e'^2 exp( -2e'-e/log n)
= ∑ _e'∈ E(ℤ^d) p_e'^2 ∑ _e∈ E(ℤ^d)exp( -2e'-e/log n) ≤ C(log n)^d ∑ _e'∈ E(ℤ^d)p_e'^2,
as needed.
§.§ Acknowledgements
The research of B.D. is partially funded by the SNF Grant 175505 and the ERC Starting Grant CriSP (grant agreement No 851565) and is part of NCCR SwissMAP. The research of R.P. is supported by the Israel Science Foundation grant
1971/19 and by the European Research Council Consolidator grant 101002733 (Transitions).
Part of this work was completed while R.P. was a Cynthia and Robert Hillas Founders' Circle Member of the Institute for Advanced Study and a visiting fellow at the Mathematics Department of Princeton University. R.P. is grateful for their support.
|
http://arxiv.org/abs/2307.01593v1
|
20230704093239
|
Cross-Element Combinatorial Selection for Multi-Element Creative in Display Advertising
|
[
"Wei Zhang",
"Ping Zhang",
"Jian Dong",
"Yongkang Wang",
"Pengye Zhang",
"Bo Zhang",
"Xingxing Wang",
"Dong Wang"
] |
cs.IR
|
[
"cs.IR",
"cs.AI",
"cs.LG"
] |
zhangwei180, zhangping18, dongjian03, [email protected]
zhangpengye, zhangbo126, wangxingxing04, [email protected]
Meituan
Beijing
China
The effectiveness of ad creatives is greatly influenced by their visual appearance. Advertising platforms can generate ad creatives with different appearances by combining creative elements provided by advertisers. However, with the increasing number of ad creative elements, it becomes challenging to select a suitable combination from the countless possibilities. The industry's mainstream approach is to select individual creative elements independently, which often overlooks the importance of interaction between creative elements during the modeling process. In response, this paper proposes a Cross-Element Combinatorial Selection framework for multiple creative elements, termed CECS. In the encoder process, a cross-element interaction is adopted to dynamically adjust the expression of a single creative element based on the current candidate creatives. In the decoder process, the creative combination problem is transformed into a cascade selection problem of multiple creative elements. A pointer mechanism with a cascade design is used to model the associations among candidates. Comprehensive experiments on real-world datasets show that CECS achieved the SOTA score on offline metrics. Moreover, the CECS algorithm has been deployed in our industrial application, resulting in a significant 6.02% CTR and 10.37% GMV lift, which is beneficial to the business.
[500]Information systems Display advertising; Recommender systems; Ad Creative
Cross-Element Combinatorial Selection for Multi-Element Creative in Display Advertising
Pengye Zhang, Bo Zhang, Xingxing Wang, Dong Wang
August 1, 2023
§ INTRODUCTION
In recent years, we have witnessed a growing business in online display advertising and its importance for Internet service providers. Image ads are the most widely used format since they are compact, intuitive, and comprehensible. The image of an advertisement, also known as an ad creative, may contain background pictures, item descriptions, promotional texts, and other creative elements, and different combinations can vary considerably in Click-Through Rate (CTR) <cit.>. Therefore, it is of great value to platforms to study how to improve the visual effect of creatives.
With the increasing number of creative elements, it has become an emerging trend to investigate how to combine creative elements to achieve a better appearance and thus maximize the effectiveness of ads <cit.>. However, the difficulty lies in the exponential growth in the number of candidate combinations. For example, a creative that contains three elements involves N_1*N_2*N_3 possible element combinations. More generally, assuming that there are M types of creative elements, each with an average of N candidates, the complexity of the problem is N^M. Moreover, as new creative elements are added, the number of possible combinations increases rapidly, making it difficult to apply complex models to creative selection tasks.
Traditionally, to deal with online performance issues caused by the excessive number of creative combinations, creative selection is performed offline<cit.>, thereby reducing the number of online candidates. However, offline selection of creatives lacks awareness of online, request-level data. Therefore, online strategies such as selection by popularity or user preference are applied<cit.>, but these strategies require time to accumulate online feedback. Meanwhile, some studies classify creative elements into different types, estimate them with multi-task click-through-rate (CTR) models, and then combine the elements predicted by the different task towers<cit.>. This approach solves the combinatorial explosion: the complexity is reduced from N^M to N * M. However, these multi-task learning approaches do not fully capture the cross-category interactions.
To this end, we propose a Cross-Element Combinatorial Selection framework for multiple creative elements, named CECS. To capture the association between different types of creative elements, a cross-element interaction method is applied to adjust the expression of every single creative element. We further designed a cascade element selection to transform the creative combination problem into a cascade selection problem. Experimental results show that the method has a significant improvement compared with the baselines, indicating that the method can select better combinations of candidate elements. Through online real-world experiments, it is demonstrated that the method can improve the CTR and GMV, which means the selected advertisements are appealing to the users.
To summarize, the contribution of this paper can be described as follows:
* We propose a cross-element interaction to tackle the relationship between different creative elements. Since the modal and information may vary, we adopt multiple attention mechanisms between each pair of creative types.
* We transform the creative combination problem into a cascade selection problem: information about the creative elements is passed along the cascade, so that the associations within an ad creative can be integrated.
* We evaluate the proposed model on real-world datasets. The offline statistics and the online feedback indicate the strong practical value of the algorithm in the industry.
§ METHODOLOGY
We first define the concepts and issues involved in this paper,
then give corresponding explanations for the details of the
different modules in the architecture.
§.§ Problem Setup
Definition 1. Creative Optimization. Given a user and the
candidate creative elements, the goal of creative optimization is to
find the best combination of creative elements, so as to maximize the probability of being clicked/satisfied by the user, denoted as follows:
maximize P(A,label=1|C,u;Θ)
where A means an ad creative combination and C is the set of candidate elements, label=1 indicates the positive feedback from the user u, e.g. click behavior in advertising, and Θ is the model parameter to be optimized.
In real industrial scenarios, the creative elements in an ad may have multiple categories, e.g., banners, description texts, shop names, etc. Every element may contribute to the click behavior, so a design to fully explore the relationships is necessary.
§.§ Model Overview
In this section, we provide an overview of the model. The overall structure of CECS is shown in Figure <ref>, which follows an encoder-decoder process design. In the encoder process, the model processes candidate creative elements in a Cross-Element Interaction (CEI), with the goal of obtaining creative elements that contain rich interaction information. In the decoder process, creative elements are sequentially estimated using a Cascade-Element Selection (CES), resulting in a combined ad.
In this way, it is no longer necessary to enumerate every creative combination(N^M solutions), but only to estimate the creative elements sequentially through the model(N*M solutions). The inner relationship between categories is expected to be captured by the model, which will be discussed later.
§.§ Cross-Element Interaction
In an advertisement, every creative element contributes to the final click behavior of a user, since different types of elements may be related to each other at the semantic or visual level. Traditionally, the elements are modeled separately, and the relationships between different creative elements are often overlooked, which is the key point this section addresses.
In order to better integrate the relationships, we apply a simple but efficient way to combine information from all categories, so that the cross-category relationships can be captured. Inspired by the design of the parameter personalized network<cit.>, we adopt a similar interaction way, as is shown in Figure <ref>.
Firstly, we compute the central representation of each category of candidate creative elements, which is shown as follows:
E_i =1/N_i∑_j=1^N_iE_i,j
where E_i,j represents the embedding of the j-th element of the i-th creative category and N_i is the number of candidates of the i-th creative category. In short, the central representation is the mean pooling over the candidates of that category. We then adopt a one-layer MLP to transform it, i.e., v_i=MLP(E_i), keeping the dimension consistent with the original representation.
Then, for each candidate's creative element E_i,j, we use the attention method to calculate the relationship with every transformed central representation v_i. Here the attention mechanism is applied to weight the importance of the different categories of creative elements, as is shown:
E_i,j^out = ∑_i=1^k softmax(v_i^T E_i,j/√(d_dim))E_i,j
where k is the total number of creative categories, d_dim is the dimension of the embedding, E_i,j is the j-th candidate element of the i-th category, v_i is the transformed central representation of the i-th creative category, and E_i,j^out is the output representation of the creative element.
In this way, the encoded creative elements carry cross-element interaction information through the explicit attention mechanism.
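A minimal PyTorch sketch of this interaction step is given below. It treats each candidate embedding as the attention query and the k transformed central representations v_i as keys and values; the residual way in which the attention-weighted categories are recombined with the candidate is an assumption made for this sketch, and all dimensions are illustrative.

# Sketch of the Cross-Element Interaction (CEI); the residual combination is an assumption.
import torch
import torch.nn as nn

class CrossElementInteraction(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # the one-layer MLP producing v_i
        self.dim = dim

    def forward(self, candidates):
        # candidates: list of k tensors, the i-th of shape (N_i, dim)
        centers = torch.stack([c.mean(dim=0) for c in candidates])  # (k, dim) central reps
        v = self.transform(centers)                                  # (k, dim) transformed
        outputs = []
        for cand in candidates:
            scores = cand @ v.t() / self.dim ** 0.5                  # (N_i, k) scaled scores
            alpha = torch.softmax(scores, dim=-1)                    # weights over categories
            outputs.append(cand + alpha @ v)                         # residual + weighted centers
        return outputs

cei = CrossElementInteraction(dim=16)
encoded = cei([torch.randn(5, 16) for _ in range(3)])  # 3 categories, 5 candidates each
print([e.shape for e in encoded])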
§.§ Cascade-Element Selection
The prediction process of creative optimization has two
characteristics: 1) the output is derived from the input, i.e., the predicted creative elements are derived from the candidate ones. 2) There are relations among
the estimated creative elements, which makes it necessary to estimate in a sequential manner. The pointer network mechanism <cit.> is a method of selecting from existing candidate solutions, and we adopt and extend it here.
The Cascade-Element Selection module (CES for short) estimates creative elements step by step, so a cascade structure is applied here, as shown in Figure <ref>. One of the key points is that the i-th prediction step needs to take the outcome of the (i-1)-th step into account. Therefore, the Gated Recurrent Unit<cit.> (GRU for short) structure is included in the CES. The hidden vector of the GRU (also called the state vector), denoted h_i, summarizes the state of the previously predicted elements and is transferred to the next CES unit, so that the inter-relationships between output elements can be integrated.
In the i-th step of the decoder, i.e., predicting the creative element of the i-th category, the hidden vector h_i-1 and ŷ_i-1 in the previous CES are fed into the current CES, as is shown in eq(<ref>):
h_i = GRU(ŷ_i-1, h_i-1)
ŷ_i = ∑ _j=1^N_i softmax(h_i^T E_i,j/√(d_dim)) E_i,j
where h_i is the output of the GRU unit (denoted GRU), ŷ_i-1 and h_i-1 are the previous output and state respectively, N_i is the number of candidates of the i-th category, and E_i,j is the interacted representation of the j-th candidate creative element of the i-th category. An attention mechanism is then applied between the GRU output and the encoded creative elements, yielding a probability distribution over the candidates.
Finally, the index of the output creative element ŷ_i among the candidates is used: picking the candidate at the corresponding index yields the selected creative element of the i-th category.
The above process repeats k times to predict every creative element, and the creative information is cascaded to the next pointer until the final creative element is predicted. In particular, the encoded representation of all candidates is used as the input of the first CES unit, acting as the overall context for the decoder. In this way, the cascade-element selection models the interactive relationship between different types of creative elements.
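The decoder step can be sketched as follows in PyTorch; using the mean of all encoded candidates as the initial input and a zero initial hidden state are assumptions made here, since the text only states that the encoded representation of all candidates feeds the first CES unit.

# Sketch of the Cascade-Element Selection (CES) decoder with a GRU and pointer-style attention.
import torch
import torch.nn as nn

class CascadeElementSelection(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gru = nn.GRUCell(dim, dim)
        self.dim = dim

    def forward(self, encoded):
        # encoded: list of k tensors (N_i, dim) from the interaction step
        y_prev = torch.cat(encoded, dim=0).mean(dim=0, keepdim=True)  # assumed initial input
        h = torch.zeros(1, self.dim)                                  # assumed initial state
        probs, picks = [], []
        for cand in encoded:
            h = self.gru(y_prev, h)                                   # h_i = GRU(y_{i-1}, h_{i-1})
            scores = (cand @ h.t()).squeeze(-1) / self.dim ** 0.5
            p = torch.softmax(scores, dim=0)                          # distribution over candidates
            probs.append(p)
            picks.append(int(p.argmax()))                             # index of the selected element
            y_prev = p.unsqueeze(0) @ cand                            # soft output fed to next step
        return probs, picks

encoded = [torch.randn(5, 16) for _ in range(3)]  # stand-in for the CEI output
probs, picks = CascadeElementSelection(dim=16)(encoded)
print(picks)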
§.§ Model Loss and Optimization
The estimated creative elements are derived
from the input, so we model the output probability of each
cascade-element selection. Therefore, we use cross-entropy to model
the loss of one type of creative element, as is shown below:
Loss_i = - ∑ _j=1^N_i y_i,j log(p_i,j)
where Loss_i is the loss for the i-th category, y_i,j∈{0,1} indicates whether the j-th candidate is the ground-truth element of the i-th category, and p_i,j = softmax(h_i^T E_i,j/√(d_dim)) is the probability the i-th CES step assigns to that candidate.
The different categories of creative elements may result in different promotional and visual effects, so the loss of different creative elements may have a large variance. Therefore, we adopt the idea of
uncertainty modeling<cit.> to provide adaptive weight coefficients for the selection
process of different types of creative elements. In this way, the human parameter tuning process can be reduced and better-weighted loss can be learned, shown as follows:
Loss_total = ∑_i=1^k ( 1/σ_i^2 Loss_i + logσ_i^2 )
where k is the total number of creative categories, σ_i is the corresponding learnable weight, and Loss_i is the loss of the i-th
predicted creative element. In this way, the loss of multiple creative elements can be adaptively adjusted.
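A small PyTorch sketch of this adaptive weighting is shown below; learning log σ_i^2 as free parameters is a standard reparameterization of the formula above, and the toy logits and targets are illustrative only.

# Sketch of the uncertainty-weighted multi-loss; log-variances are learnable parameters.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, k):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(k))  # log sigma_i^2 per creative category

    def forward(self, losses):
        losses = torch.stack(losses)
        return torch.sum(torch.exp(-self.log_var) * losses + self.log_var)

criterion = nn.CrossEntropyLoss()
weighting = UncertaintyWeightedLoss(k=3)
logits = [torch.randn(1, 5, requires_grad=True) for _ in range(3)]   # toy candidate scores
targets = [torch.tensor([2]), torch.tensor([0]), torch.tensor([4])]  # toy ground-truth indices
total = weighting([criterion(l, t) for l, t in zip(logits, targets)])
total.backward()
print(float(total))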
§ EXPERIMENTS
We conduct several groups of experiments to verify the model
proposed, with the purpose of answering the following research
questions:
RQ1 Does our proposed method outperform the baseline methods
in the creative element selection problem?
RQ2 How do the two modules in the model work for modeling the relationship between creative elements?
RQ3 How does the model perform when deployed online?
§.§ Datasets and Evaluation
§.§.§ Datasets.
Since no off-the-shelf benchmark dataset exists for such creative selection tasks, we construct our dataset from real online data. First, we set up a small amount of random traffic online, i.e., creative elements are randomly combined by type, so that we obtain an unbiased sample of user preference data.
In our scenario, an ad is composed of a banner background image, a main title, and a sub-title, so the number of creative types is 3, i.e., k=3. In total, we collect 7.4 million samples, covering 5,494,110 users and 157,611 shops. We use 90% of the data for training and the rest for testing. We uniformly truncate the candidate list of each creative type to 5 elements per shop,
that is, up to 125 (5^3) potential creative combinations are considered for each shop.
§.§.§ Evaluation Metrics.
For evaluation, traditional
ranking evaluation metrics such as nDCG and MAP are not suitable, because this is a combinatorial modeling problem.
We focus on the "best combination" of a desired advertisement,
and the evaluation of creative combination needs to be considered in a
holistic way.
Hit Ratio (HR). Hit ratio is a recall-based evaluation metric
that measures how much the real creative elements in an ad
overlap with the predicted elements. The formula is shown as follows:
HR= ∑_i=1^k w_i |C_p∩ C_g|/∑_i=1^k w_i |C_p∪ C_g|
where k is the number of creative element types,
C_p and C_g are the predicted result and the ground truth respectively, w_i indicates different weights for creative
elements.
Precision (PR). The concept of precision is defined as whether the actually clicked (positive sample) creative element is also included in the estimated ad creative containing k creative elements, formulated as follows:
PR= ∑_i=1^k w_i I(c_i∈ C_g)
where k is the number of creative element types,
w_i is the weight parameter for each type of
creative element, I(·)∈{0,1} is the indicator function.
In our dataset, considering the visual influence of the three types of creatives, we set
w_1=0.5, w_2=0.3, w_3=0.2, and the overall HR and PR are
calculated by averaging the metric over all test examples.
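A small Python sketch of the two metrics for a single test example is given below, under the reading that each creative type contributes exactly one predicted and one ground-truth element (so the per-type intersection has size 0 or 1 and the per-type union size 1 or 2); this per-type reading and the toy element identifiers are assumptions.

# Sketch of the weighted Hit Ratio and Precision for a single test example.
def hit_ratio(pred, truth, weights):
    inter = sum(w for p, t, w in zip(pred, truth, weights) if p == t)
    union = sum(w * (1 if p == t else 2) for p, t, w in zip(pred, truth, weights))
    return inter / union

def precision(pred, truth, weights):
    return sum(w for p, t, w in zip(pred, truth, weights) if p == t)

w = [0.5, 0.3, 0.2]                       # weights used for the three creative types
pred = ["banner_3", "title_1", "sub_0"]   # illustrative predicted element ids
truth = ["banner_3", "title_2", "sub_0"]  # illustrative clicked (ground-truth) element ids
print(hit_ratio(pred, truth, w), precision(pred, truth, w))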
§.§.§ Baselines.
We compare the proposed method(CECS)
with several strategies commonly used in online creative selection scenarios (Strategies for short).
To evaluate the ability to tackle relationships between elements,
we regard the selection as a multi-task CTR prediction task (Multi-CTR for short).
Therefore, we combine CTR models like DNN<cit.>, DeepFM<cit.>,
AutoInt<cit.> with multi-task learning mechanisms like shared-bottom,
MMoE<cit.> and PLE<cit.> to fully explore
the offline effects.
§.§ Performance Comparison (RQ1)
Table <ref> shows the performance of HR and PR for the dataset for
different methods.
In general, the CTR-based methods are more effective than the Strategies because these models are better at generalizing over features.
With the improvement of the Multi-CTR model, we see a
corresponding increase in the HR and PR evaluation metrics.
The MMoE structure captures the relationships better than the shared-bottom structure and slightly better than the PLE structure.
Among them, the AutoInt+MMoE model achieves the best results because
the self-attention mechanism in this model is applied to capture
the correlations between the same type of creative elements.
Nevertheless, our model still outperforms the baselines, indicating the interaction between creative elements can be better captured.
From the table, we can conclude that the interaction between
the elements is not sufficiently modeled in the Multi-CTR methods.
This is because different creative elements are not displayed
independently, but as a whole, and jointly contribute to
the final visual effect of the advertisement.
§.§ Analysis of Model Designs (RQ2)
Here, we focus on the attention and the cascade design in CEI and CES respectively. We conduct several ablation studies for the two designs,
as is shown in Table <ref>. The √ and × marks in the table indicate whether the above designs are used or not. When not
using the CEI, we use MLP instead. Not using the CES means that no estimated element information is transmitted through the decoder process, so we omit the hidden vector h_i-1 from the previous CES.
It can be seen from the table that as the
different designs are plugged in, the HR and PR of the
model increase. The CES design yields the larger improvement and is the main contributor to the performance, since through this design the relationship information can be transferred to the remaining candidates to make better selections. The improvement from the CES design indicates that substantial information lies in the interplay between different types of creative elements and affects user preference when they are combined.
§.§ Online Results(RQ3)
To evaluate how the model performs in real-world industrial scenarios, we conduct A/B tests online.
The online baseline is an MMoE-based Multi-CTR model that predicts the different types of creatives at the same time. Our model focuses on the relationships between creative elements and handles them through cascade selection. To address the online time complexity, we shorten the feature-fetching time before prediction, so that the inference time stays within an acceptable range. In the end, compared with the Multi-CTR method, we obtained a further increase of +6.02% CTR and +10.37% GMV, which proves the effectiveness of the method in a real-world industrial setting.
§ CONCLUSION
In this paper, we investigate creative element selection from the perspective of cascade creative selection, solving the problem in a holistic way. The cross-element interaction is applied to capture the associations between different types of candidate creative elements.
We further propose the cascade element selection to
model the relationships between the estimated elements.
It was experimentally verified that the method outperforms
the traditional strategies and the multi-task CTR prediction methods, and the ablation study verified the importance of
the attention and cascade designs in the framework. The model was deployed in a real-world setting and significantly improved CTR and GMV, and it can be extended and applied to other advertising scenarios.
|
http://arxiv.org/abs/2307.03311v1
|
20230706214904
|
On Invariance, Equivariance, Correlation and Convolution of Spherical Harmonic Representations for Scalar and Vectorial Data
|
[
"Janis Keuper"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
1,2]Janis Keuper
mailto:[email protected]@imla.ai
[1]Institute for Machine Learning and Analytics (IMLA), Offenburg University
[2]CC-HPC, Fraunhofer ITWM, Kaiserslautern
On Invariance, Equivariance, Correlation and Convolution of Spherical Harmonic Representations
for Scalar and Vectorial Data.
August 1, 2023
§ ABSTRACT
The mathematical representations of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundation and practical implementation of SH representations, summarizing works on rotation invariant and equivariant features, as well as convolutions and exact correlations of signals on spheres. In extension, these methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3d vector fields on spheres.
NOTE 1: This document is a re-publication of a subset of works originally published in my PhD thesis (I changed my last name from Fehr to Keuper):
Hence, it does NOT provide any references or findings since 2009. The sole intention of this re-publication is to provide old (but still very useful) insights to an Arxiv audience (which occasionally appears not to be aware of pre-Arxiv works).
Please cite the thesis or the original publications:
when using this content for your work.
NOTE 2: The original thesis and publications where all targeting multi-channel volumetric input data given by the target applications at that time. However, the actual methods in the harmonic domain directly extend to other data and applications in most cases.
CHAPTER: INTRODUCTION AND PREREQUISITES
Structure of the report:
The following report is structured as follows: in this introductory chapter we review the aspects of feature design
in general (section <ref>), and take a closer look at local (section <ref>) and invariant
features (section <ref>).
In chapter <ref> we introduce the essential mathematical basics and derive further mathematical techniques needed for the
formulation of our features, including correlation and convolution.
Chapter <ref> discusses basic implementation issues like sampling problems or parallelization and fills the gap
between the continuous mathematical theory and discrete implementation: for each of the following features, we first derive the theoretic
foundation in a continuous setting, and then give details on the actual discrete implementation based on these methods.
Then we introduce several different classes of features and their feature extraction algorithms:
chapter <ref> introduces the class of SH-Features,
chapter <ref> derives new features based on Haar-Integration and finally the chapters <ref> and
<ref> show how
we can compute different features on 3D vector fields. An overview of all features which are covered in this work can be found in table
<ref>.
Finally, we evaluate and compare the introduced features on an artificial benchmark (chapter <ref>).
§ GENERAL FEATURE DESIGN
Most pattern recognition tasks can be derived from a very general and basic problem setting: given an arbitrary set of patterns
{X_i| X_i∈ X}, we are looking for some function Γ: X_i → y_i which denotes each pattern with a semantic label
y_i ∈ Y from the category space Y⊂ℤ.
In general, X contains all possible patterns, which are usually defined as the digitalized signals obtained from a sensor capturing
the “real world” (see figure <ref>). Y holds the semantic meaning (categorization) of the real world, where
each category y_i defines an equivalence class.
The actual task of assigning the label y_i is called classification, and Γ is often referred to as the decision function or classifier, which
should satisfy:
X_1 ∼_y X_2 ⇔Γ(X_1) = Γ(X_2).
The most crucial step towards a suitable Γ is to find an adequate equality measure on X.
Since the notion of equivalence of real world objects is given by the human perception and is often highly semantic, it is usually very
hard to construct a measure which fulfills (<ref>).
In practice, there are two strategies to tackle this problem: learning and feature extraction - which are usually combined.
The first approach
tries to learn Γ from a set of training examples - we discuss this method in depth in part II of this work.
However, most practical problems
are too complex to construct or learn Γ directly by raw “pattern matching”. Such a “pattern matching” is usually too expensive in
terms of computational complexity, or even completely intractable in cases with a large intra class variance, e.g. if
patterns of the same equivalence class are allowed to have strong variations in their appearance.
The second approach tries to solve the problem by simplifying the original problem: the goal is to find a reduced representation X
of the original pattern X which still preserves the distinctive properties of X. A commonly used analogy for the feature concept is the
notion of “fingerprints” which are extracted from patterns to help to find a simpler classifier Γ' which holds:
X_1 ∼_y X_2 ⇔Γ(X_1) = Γ(X_2) ⇔Γ'(X_1) = Γ'(X_2).
Either a perfect feature extraction or a perfect classifier would solve the problem completely, but in practice we have to combine both
methods to obtain reasonable results: We use features to reduce the problem and then learn Γ' (see figure <ref>).
§.§ Feature Extraction
We formalize the feature extraction in form of some function T(X_i) which maps all input signals X_i into the so-called feature space
X:
X_i =: T(X_i).
For the theoretical case of a “perfect” feature, T(X_i) maps all input signals X_i belonging to the same semantic class with label
y_i onto one point X_i in this features space:
X_1 ∼_y X_2 ⇔ T(X_1) = T(X_2).
As mentioned before, the nature of practical problems includes that there are intra class variations which make things more complicated.
We model these intra class variations by transformations h_i∈ H_y, where H_y is the set of all possible transformations, which
do not change the label y of the ideal class template X_y:
X_i := h_iX_y.
If it is impossible to construct the “perfect” feature for a practical application, the goal is to find feature mappings T(hX_y)
which at least fulfill the following properties:
* (I) size: the feature space should be much smaller than the pattern space: n <<< p with X⊂ℝ^n, X⊂ℝ^p.
* (II) continuity: small changes in the input pattern X_i should have only small effects in feature space X
* (III) cluster preservation: local neighborhoods should be transfered from input to feature space
If the extracted feature X_i adheres to these properties, X provides several advantages for the further construction or
learning of Γ:
first, (I) drastically reduces the computational complexity and second, (II) and (III) make it possible to introduce a meaningful similarity
measure on X (like a simple Euclidean-Norm), which is an essential precondition to the application of learning algorithms (see
part II).
Still, the question remains how to construct features which hold the properties I-III. While size property (I) is rather easy to meet,
continuity (II) and cluster preservation (III) are more difficult to obtain. This leads us to the notions of invariance and
robustness of features, which are central to the methods presented in this work.
§.§ Invariance
Feature extraction methods are strongly interlaced with the concept of invariance. The basic idea of invariant features is to
construct T(X) in such a way that the effect of those transformations h_i ∈ H_y (<ref>) which are
not affecting the semantic class label y of X, e.g. X ∼_y h_iX, is canceled out by T:
T(h_iX_y) = X_y, ∀ h_i∈ H_y.
For two signals X_1 and X_2 which are considered to be equivalent under a certain transformation h_i∈ H_y,
X_1h_i∼X_2,
the necessary condition <cit.> for invariance against h_i is:
X_1h_i∼X_2⇒ T(X_1)=X_1=X_2= T(X_2).
In order to achieve completeness <cit.>, T has to hold:
T(X_1)=T(X_2)⇒X_1h_i∼X_2.
In most cases the mathematical completeness condition is too strict, since it is not practicable to have a distinct mapping for every
theoretically possible pattern X_i. However, with only little a priori knowledge, one can
determine a sufficient subset of likely patterns X'. If (<ref>) holds for all X_i,X_j∈ X',
separability <cit.> can be guaranteed for the likely patterns.
It is straightforward to see that a feature which holds the necessary condition (<ref>) and achieves at least
separability meets the properties II and III.
§.§.§ Group Transformations
The construction of an invariant feature requires that we are able to model the allowed transformations h_i ∈ H_y of the equivalence class
with label y. In general this is a hard and sometimes infeasible task, e.g. just think of arbitrary deformations. However, for the
subset of transformations G_y⊂ H_y, where G_y forms a compact mathematical group, we have sophisticated mathematical tools to model the
individual transformations g_i∈ G_y.
Luckily, many practically relevant transformations like rotations are groups or can easily be transformed to groups, e.g. translations if we
consider cyclic translations. Overall, we can formulate translations, rotations, shrinking, shearing and even affine mappings as group
operations <cit.>.
§.§.§ General Techniques For The Construction Of Invariant Features
In general, there are three generic ways of constructing invariant features: by normalization, derivation and integration
<cit.>. For allowed transformations
H_y, the individual transformations h ∈ H_y differ only by their associated set of parameters λ⃗, which cover the
degrees of freedom under H_y. The most popular method for invariant feature construction is to eliminate the influence of
λ⃗ via normalization of the class members X_i := h_λ X_y with a class template X_y.
We apply normalization techniques in the following features:
SH_abs (chapter <ref>), SH_phase (chapter <ref>), SH_bispectrum,
and
VH_abs (chapter <ref>)
However, it should be noted that normalization techniques in general tend to suffer in
cases of noisy or partially corrupted data and are often totally infeasible for complex data where no normalized template can be found.
A second possibility is the elimination of λ⃗ via derivation:
∂ T(g_λ⃗X_i)/∂λ≡ 0.
The resulting differential equations can be solved using Lie-Theory <cit.> approaches, but in practice it is often very
difficult to obtain solutions to the differential equations.
Finally, the approach which has been proposed by <cit.> can be applied on the subset of group transformations:
It generates invariant features via Haar-Integration
over all degrees of freedom of the transformation group G. We take an in-depth look at the Haar-Integration approach in chapter
<ref> and apply it in several of our features:
2p-Haar (chapter <ref>), 3p-Haar (chapter <ref>), np-Haar (chapter <ref>), 1v-Haar
(chapter <ref>), 2v-Haar (chapter <ref>) and nv-Haar (chapter <ref>).
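As a toy illustration of the Haar-integration principle (deliberately two-dimensional and not one of the features defined in the later chapters), one can average a simple two-point kernel over sampled rotations of an image; the kernel, the evaluation points and the number of sampled angles below are arbitrary choices.

# Toy Haar integration over 2D rotations: averaging a kernel over sampled angles
# yields an approximately rotation-invariant value (up to interpolation error).
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def haar_rotation_feature(img, n_angles=72):
    vals = []
    for ang in np.linspace(0.0, 360.0, n_angles, endpoint=False):
        r = rotate(img, ang, reshape=False, order=1)
        vals.append(r[10, 32] * r[54, 32])   # simple two-point kernel
    return float(np.mean(vals))

img = gaussian_filter(np.random.default_rng(1).random((64, 64)), sigma=3.0)
f_original = haar_rotation_feature(img)
f_rotated = haar_rotation_feature(rotate(img, 30.0, reshape=False, order=1))
print(f_original, f_rotated)   # approximately equal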
For many practical applications invariance can be achieved by the combination of several different approaches: we can split transformations h
into a combination of several independent transformations h := h_1 ∘ h_2 ∘…, where h_1 might be a group transformation
like i.e. rotation and h_2 a non-group transformation like gray-scale changes.
The concept of invariance provides us with a powerful tool for the construction of features which is suitable for a wide range of problems.
However, there are still many practically relevant cases where some of the underlying transformations h_i cannot be
sufficiently modelled, or are even partially unknown. Then it becomes very hard or impossible to construct invariant features. In these
cases we have to fall back to the sub-optimal strategy to construct robust instead of invariant features.
§.§ Robustness
Robustness is a weaker version of invariance: if we are not able to cancel out the effect of the transformations h_i like in
(<ref>), we can at least try to minimize the impact of these intra class variations.
Given X_1h∼X_2, X_1, X_2∈ X,
we are looking for a feature T which maps X_1,X_2 in such a way that the intra class variance
in X is smaller than the extra class distances given some distance measure d in X:
X_1h_i∼X_2⇒ d(T(X_1), T(X_2)) < d(T(X_1,2),T(X')), ∀ X'∈ X: X'h_i≁ X_1,2.
It is obvious that the robustness property (<ref>) directly realizes the feature properties II and III. In practice,
robustness is often achieved by simplified approximations of complex intraclass variations, e.g. linear approximations of actually
non-linear transformations h_i. In theses cases, we often use an even weaker definition of robustness and demand that
(<ref>) has only to hold for most but not all X' ∈ X.
§.§ Equivariance
For some applications it is desirable to explicitly transfer the variations to the feature space:
X_1h_i∼X_2⇒ T(X_1)= h_iT(X_2).
These features are called equivariant, and are often used to compute the parameters of known transformations h_i.
§ LOCAL FEATURES
The feature definition in the last section (<ref>) considered only the extraction of so-called “global” features, i.e.
features are extracted as descriptors X_i = T(X_i) (or “Fingerprints”) of the entire pattern X_i. This global approach
is suitable for many pattern recognition problems, especially when the patterns are taken from prior segmented objects (see part III).
In other cases, it can be favorable to describe a global pattern as an ensemble of locally constrained sub-patterns. Such a local
approach is suitable for object retrieval, object detection in unsegmented data, or data segmentation itself (see part III).
§.§ Local Features on 3D Volume Data
Throughout the rest of this work we deal with 3D volume data or 3D vector fields. In general we derive the theoretical background of the
local features
in settings of continuous 3D volumes, which we define as functions X: ℝ^3 →ℝ^m with
values X( x) ∈ℝ^m at evaluation coordinates x∈ℝ^3.
We then transfer the feature algorithms to operate on the practically relevant discrete 3D volume grids: X:ℤ^3
→ℝ^m, where we often refer to the position x as a “voxel”.
Given 3D volume data, we capture the locality of the features extracted from X in
terms of a spatial constraining of the underlying sub-pattern. More precisely, we define a sub-pattern as “local neighborhood” around a
data point at x with the associated local feature X( x).
Further, we parameterize the local “neighborhood” in concentric spheres with radii r around x.
This has several advantages over a rectangular
definition of the “local neighborhood”:
First, we can easily define the elements of the sub-pattern by a single parameter r using the following notation for the sub-pattern
around x:
S[r]( x) := { x_i ∈ℝ^3 | x- x_i_2 = r}.
Second, we can address all points in S[r]( x) via the parameterization in radius r and the spherical
coordinates (Φ, Θ) - see section <ref> for more details on the parameterization. And finally, we can rely on
a well known and sound mathematical theory to handle signals (patterns) in spherical coordinates which provides us with very useful
tools to handle common transformations such as rotations.
We give an in-depth introduction and further extensions to this mathematical basis for our local features in chapter <ref>.
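For a discrete volume, the values on such a spherical neighborhood can be obtained by sampling an equiangular (Φ,Θ) grid on the sphere of radius r and interpolating the volume at those positions; the following Python sketch does this with trilinear interpolation, where the grid resolution and the toy volume are illustrative choices.

# Sampling the local spherical neighborhood S[r](x) of a discrete volume on a (Theta, Phi) grid.
import numpy as np
from scipy.ndimage import map_coordinates

def sample_sphere(volume, center, r, n_theta=32, n_phi=64):
    theta = np.linspace(0.0, np.pi, n_theta)                       # latitude
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)     # longitude
    T, P = np.meshgrid(theta, phi, indexing="ij")
    cx, cy, cz = center
    coords = np.stack([(cx + r * np.sin(T) * np.cos(P)).ravel(),
                       (cy + r * np.sin(T) * np.sin(P)).ravel(),
                       (cz + r * np.cos(T)).ravel()])
    vals = map_coordinates(volume, coords, order=1)                # trilinear interpolation
    return vals.reshape(n_theta, n_phi)

vol = np.random.default_rng(0).random((32, 32, 32))                # toy gray-scale volume
shell = sample_sphere(vol, center=(16.0, 16.0, 16.0), r=5.0)
print(shell.shape)   # (32, 64) samples of the spherical signal f(Theta, Phi)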
§.§.§ Gray-Scale Data
In cases where the 3D volume data is scalar X: ℝ^3 →ℝ,
we can directly apply the locality definition (<ref>). Note,
that we usually refer to scalar data as “gray-scale” data, this term is derived from the usual data visualization as gray-scale images -
even though the scalar values might encode arbitrary information.
Analogous to this, we denote intensity changes as gray-scale changes.
For many pattern recognition tasks on scalar 3D volume data we like to obtain gray-scale and rotation invariant local features in order to
cancel out the dominant transformations which act locally. Other transformations of the data do not act locally, like translations, or
are very hard to model like arbitrary deformations. In these cases we try to obtain local robustness, which is usually easier to obtain than
global robustness, since the local effect of complex global transformations is limited in most cases.
§.§.§ Multi-Channel Data
In many cases we face volumes with data which holds more than a single scalar value at each position x.
Then we define X: ℝ^3 →ℝ^m for data with m scalar values per position.
The classic example
could be an RGB color coding at each voxel, but we might also have other multi-modal data with an arbitrary number of scalar values.
We refer to these volumes as multi-channel data, where we address the individual channels c_i by X[c_i]( x) ∈ℝ.
Figure <ref> shows an example of such multi-channel data.
It is obvious that we also need features which operate on multiple channels - this is an important aspect we have to take into account
for the feature design.
§.§ Local Features on 3D Vector Fields
Besides local features for scalar gray-scale and multi-channel scalar volumes, we further investigate and derive features which
operate on 3D vector fields X:ℝ^3 →ℝ^3.
Usually these vector fields are directly obtained by the extraction of gradient information from
scalar volumes (see figure <ref>).
In contrast to multi-channel data, the elements of the vectors in the field are not independent and change according to transformations,
e.g. under rotation. This makes the feature design a lot more complicated.
§ RELATED WORK
The number of publications on feature extraction methods and their applications is countless. Hence, we restrict our review of related work
to methods which provide local rotation invariant features for 3D volume data or 3D vector fields. This restriction reduces the number of
methods we have to consider to a manageable size.
Since we provide an in-depth discussion of most of the suitable methods in the next chapters (see table <ref>), we
are left with those few methods we are aware of, but which are not further considered throughout the rest of this work:
* The first class of rotational invariant features which operate on spherical signals are based on the so-called “Spherical Wavelets”
<cit.> which form the analog to standard wavelets on the 2-sphere. These methods have mostly been used for 3D shape analysis,
but also for the characterization of 3D textures <cit.>.
* Second, we have to mention methods based on 3D Zernike moments. For shape retrieval (also see Part III), 3D Zernike moments have been
successfully applied as 3D shape descriptors, i.e. by <cit.> and <cit.>. In both cases, only the absolute
values of the Zernike coefficients were used to obtain rotation invariance, which leads to rather weakly discriminative features just as in the
case of the SH_abs features <ref>.
<cit.> introduced a set of complete affine invariant 3D Zernike moments which overcome these problems. However, just as for the
SH_bispectrum features <ref>, the completeness comes at the price of very high complexity.
* Finally, we were not able to find much significant prior work on rotation invariant features operating on 3D vector fields.
Mentionable is the work in <cit.>, which uses a generalized Hough approach <cit.> to detect spherical structures in a 3D gradient
vector field. This method is closely related to our 1v-Haar feature <ref>.
CHAPTER: MATHEMATICAL BACKGROUND
In this chapter we introduce and review the mathematical background of important methods we use later on. First we formulate
the basics of mathematical operations on the 2-sphere, which are essential to derive our features. The theoretical
foundation of these methods has been adapted for our purposes from angular momentum theory <cit.>, which plays an
important role in Quantum Mechanics. Hence, we can rely on a well established and sound theoretical basis when we extend existing and
derive novel operations in the second part of this chapter.
The reader may refer to <cit.><cit.><cit.> and <cit.> for a detailed introduction to angular momentum theory.
§ SPHERICAL HARMONICS
Spherical HarmonicsHarmonic expansion
Spherical Harmonics (SH) <cit.> form an orthonormal base on the 2-sphere S^2. Analogical to the Fourier Transform,
any given real or complex valued, integrable function f in some Hilbert space on a sphere with its parameterization over the angles
Θ∈ [0,π[ and Φ∈ [0,2π[ (latitude and longitude of the sphere) can be represented by an expansion in its
harmonic coefficients by:
f(Φ,Θ)=∑_l=0^∞∑^m=l_m=-lf^l_m Y_m^l(Φ,Θ),
where l denotes the band of expansion, m the order for the l-th band and f^l_m the harmonic coefficients.
The harmonic base functions Y_m^l(Θ,Φ) are calculated (using the standard normalized <cit.> formalization) as follows:
Y_m^l(Φ,Θ) = √(2l+1/4π(l-m)!/(l+m)!)· P_m^l(cosΘ)e^imΦ,
where P_m^l is the associated Legendre polynomial (see <ref>). Fig. <ref> illustrates
the Y_m^l base functions of the first few bands.
The harmonic expansion of a function f will be denoted by f with corresponding coefficients f^l_m.
We define the forward Spherical Harmonic transformation as:
SH(f) := f, with f^l_m = ∫_Φ,Θ\overline{Y_m^l(Φ,Θ)} f(Φ,Θ)
sinΘdΦ dΘ,
where \overline{x} denotes the complex conjugate of x, and the backward transformation accordingly:
SH^-1(f)(Φ,Θ) := ∑_l=0^∞∑^m=l_m=-lf^l_m Y_m^l(Φ,Θ).
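For illustration, the following minimal Python sketch approximates the forward and backward transformations above by a simple quadrature on a regular Φ/Θ grid. It is not the discrete implementation derived later for volume data; the grid, band limit and the use of scipy.special.sph_harm (argument order (m, l, Φ, Θ); newer SciPy releases also offer sph_harm_y with a different convention) are our own illustrative choices.

\begin{lstlisting}[language=Python]
import numpy as np
from scipy.special import sph_harm

def sh_forward(f_grid, Phi, Theta, b_max):
    """Approximate forward SH transform by quadrature on a regular Phi/Theta grid.
    Returns a dict {(l, m): coefficient} up to band b_max."""
    dPhi = Phi[0, 1] - Phi[0, 0]
    dTheta = Theta[1, 0] - Theta[0, 0]
    coeffs = {}
    for l in range(b_max + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, Phi, Theta)   # scipy convention: Y_l^m(Phi, Theta)
            coeffs[(l, m)] = np.sum(np.conj(Y) * f_grid * np.sin(Theta)) * dPhi * dTheta
    return coeffs

def sh_backward(coeffs, Phi, Theta):
    """Band-limited reconstruction SH^-1 on the same grid."""
    f = np.zeros_like(Phi, dtype=complex)
    for (l, m), c in coeffs.items():
        f += c * sph_harm(m, l, Phi, Theta)
    return f

# usage: a smooth, band-limited test signal is reproduced by the round trip
Phi, Theta = np.meshgrid(np.linspace(0, 2 * np.pi, 128, endpoint=False),
                         np.linspace(0, np.pi, 64, endpoint=False))
f = np.real(sph_harm(1, 2, Phi, Theta)) + 0.5
rec = sh_backward(sh_forward(f, Phi, Theta, b_max=4), Phi, Theta)
print(np.max(np.abs(f - rec.real)))   # close to zero up to quadrature error
\end{lstlisting}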
§.§ Associated Legendre Polynomials
Associated Legendre polynomial
Associated Legendre polynomials P^l_m(x) are derived as the canonical solution of the General Legendre differential
equation <cit.>:
\frac{d}{dx}\left((1-x^2)\,y'\right)+\left(l(l+1) - \frac{m^2}{1-x^2}\right)y = 0,
which plays an important role for the solution of many well known problems such as the Laplace equation <cit.> in our case.
For integer values of -l ≤ m ≤ l,
P^l_m(x) = (-1)^m/2^ll!(1-x^2)^m/2d^l+m/dx^l+m(x^2-1)^l
has non-singular solutions in [-1,1]. The Associated Legendre polynomials are linked to the General Legendre polynomials by:
P^l_m(x) = (-1)^m(1-x^2)^m/2d^m/dx^m(P^l(x)),
which implies that P^l_0(x) = P^l(x) - as shown in Fig. <ref>.
Properties:
Two main properties of the Associated Legendre polynomials in context of this work are the orthogonality of the P^l_m(x) <cit.>
as well as the symmetry property:
P^l_{-m} = (-1)^m \frac{(l-m)!}{(l+m)!} P^l_m.
Another notable fact is that, in contrast to their name, the P^l_m(x) are actually only polynomials if m has an even integer value.
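As a quick, purely illustrative sanity check of these properties, the P^l_m can be evaluated with scipy.special.lpmv (which includes the Condon-Shortley phase); the snippet below verifies the P^l_0 = P^l relation and the orthogonality for a fixed order m:

\begin{lstlisting}[language=Python]
import numpy as np
from scipy.special import lpmv, eval_legendre
from scipy.integrate import quad

x = np.linspace(-1.0, 1.0, 11)

# P^l_0 coincides with the (general) Legendre polynomial P^l
print(np.allclose(lpmv(0, 5, x), eval_legendre(5, x)))                    # True

# orthogonality over [-1, 1] for a fixed order m and different degrees l
m, l1, l2 = 2, 3, 5
inner, _ = quad(lambda t: lpmv(m, l1, t) * lpmv(m, l2, t), -1.0, 1.0)
print(np.isclose(inner, 0.0, atol=1e-10))                                 # True
\end{lstlisting}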
§.§ Deriving Spherical Harmonics
We give a brief sketch of how Spherical Harmonics have been derived in literature <cit.><cit.> focusing on some aspects which
are useful for our purposes. For more details please refer to <cit.> or <cit.>.
Given a function f parameterized in Φ, Θ on S^2, its Laplacian is:
∇^2 f = ∂^2 f/∂Θ^2+\cotΘ\,∂ f/∂Θ+\csc^2Θ\,∂^2 f/∂Φ^2.
A solution to the partial differential equation
∂^2 f/∂Θ^2+\cotΘ\,∂ f/∂Θ+\csc^2Θ\,∂^2 f/∂Φ^2 + λ f = 0
can be obtained <cit.> by separation into Φ-dependent parts
sin(mΦ) for m < 0
cos(mΦ) else
and Θ-dependent parts
d^2y/dΘ^2+\cotΘ\,dy/dΘ+(λ-m^2/sin^2Θ)\, y = 0,
with solutions given by P^l_m(cos(Θ)) (section <ref>) for the integer valued m≥ 0 and λ=l(l+1).
Rewriting the Φ-dependent parts in exponential notation and adding the normalization such that ∫_{S^2}|Y^l_m|^2=1 <cit.>,
we obtain the Spherical
Harmonics:
Y_m^l(Φ,Θ) := √(2l+1/4π(l-m)!/(l+m)!)· P_m^l(cosΘ)e^imΦ.
§.§ Useful Properties of Spherical Harmonics
Spherical Harmonics
We give some of the useful properties of Spherical Harmonics which we exploit later. All presented properties are valid for the use
of normalized base functions.
Orthonormal
Orthonormality: As mentioned before, the key property is that the base functions Y^l_m are orthonormal:
∫_{Θ,Φ}Y^l_m\,\overline{Y^{l'}_{m'}}\,sinΘdΘ dΦ = δ_ll'δ_mm',
with the Kronecker symbol δ.
Symmetry: Symmetry of the Spherical Harmonic base functions can be nicely observed in Fig. <ref> and
is given by:
Symmetry
\overline{Y^l_m} = (-1)^m Y^l_{-m}.
Addition Theorem:
Addition Theorem
For γ given by
cos(γ) = cos(Θ)cos(Θ')+sin(Θ)sin(Θ')cos(Φ-Φ')
the Addition Theorem <cit.> states that P^l(cos(γ)) can be obtained by:
P^l(cos(γ)) = \frac{4π}{2l+1}∑_m Y^l_m(Θ,Φ)\,\overline{Y^l_m(Θ',Φ')},
which also implies the property <cit.>
Y^l_0 = \left(\frac{2l+1}{4π}\right)^{1/2} P^l(cos(Θ)).
§ ROTATIONS IN SH
Rotation
Throughout the rest of this work we will use the Euler notation in zyz'-convention (see Fig. <ref>) denoted
by the angles
ϕ,θ,ψ with ϕ,ψ∈ [0, 2π[ and θ∈ [0, π[ to parameterize the rotations R∈ SO(3)
(abbreviated for R(ϕ,θ,ψ)∈ SO(3)).
Rotations R(ϕ,θ,ψ) in the Euclidean space find their equivalent representation in the harmonic domain in terms
of the so called Wigner D-Matrices, which form an irreducible representation of the rotation group SO(3) <cit.>.
For each band l, D^l(ϕ,θ,ψ) (or abbreviated D^l( R)) defines a band-wise rotation in the SH coefficients.
A rotation of f by R in the Euclidean space can be computed in the harmonic domain by:
R f = ∑_l=0^∞∑_m=-l^l∑_n=-l^l D^l_mn( R) f^l_n Y^l_m.
Hence, we rotate f^l_m by R(ϕ,θ,ψ) via band-wise multiplications:
f'= R(ϕ,θ,ψ)f ⇒f'^l_m = ∑_n=-l^l D^l_mn(ϕ,θ,ψ) f^l_n.
Due to the use of the zyz'-convention, we have to handle inverse rotations with some care:
f'= R^-1(ϕ,θ,ψ)f ⇒f'^l_m = ∑_n=-l^l D^l_mn(-ψ,-θ,-ϕ) f^l_n.
§.§ Computation of Wigner d-Matrices
Wigner d-Matrix
The actual computation of the Wigner d-Matrices is a bit tricky.
In a direct approach, the d-Matrices can be computed by the sum
d^l_mn(θ) = ∑_t (-1)^t \frac{√((l+m)!(l-m)!(l+n)!(l-n)!)}{(l+m-t)!\,(l-n-t)!\,t!\,(t+n-m)!}
· cos(θ/2)^{2l+m-n-2t}·sin(θ/2)^{2t+n-m}
over all t which lead to non-negative factorials <cit.>.
It is easy to see that the constraints on t are causing the computational complexity to grow
with the band of expansion. To overcome this problem, <cit.> introduced a recursive method for the d-Matrix computation. We
are applying a closely related approach inspired by <cit.>, where we retrieve d-Matrices from recursively computed D-Matrices.
§.§.§ Recursive Computation of Wigner D-Matrices
Wigner D-Matrix
Given D^l for the first two bands l=0 and l=1,
D^0(ϕ,θ,ψ) := 1
D^1(ϕ,θ,ψ) := (
 e^{-iψ}\frac{1+cos(θ)}{2}e^{-iϕ}    -\frac{sin(θ)}{√2}e^{-iϕ}    e^{iψ}\frac{1-cos(θ)}{2}e^{-iϕ}
 e^{-iψ}\frac{sin(θ)}{√2}             cos(θ)                       -e^{iψ}\frac{sin(θ)}{√2}
 e^{-iψ}\frac{1-cos(θ)}{2}e^{iϕ}      \frac{sin(θ)}{√2}e^{iϕ}      e^{iψ}\frac{1+cos(θ)}{2}e^{iϕ}
)
we can compute D^l via band-wise recursion:
D^l_mn = ∑_{m',n'=-1}^{1} D^1_m'n' D^l-1_(m-m')(n-n')
· ⟨ (l-1)m|1m',l(m-m') ⟩
· ⟨ (l-1)n|1n',l(n-n') ⟩
where ⟨ lm|l'm',l”m”⟩ denotes Clebsch-Gordan coefficients (see section <ref>) known from angular momentum
theory.
Using (<ref>), we finally obtain:
d^l_mn(θ) = D^l_mn(0,θ,0).
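For small bands the direct sum remains perfectly usable; the following Python sketch is a plain transcription of the sum formula above (the recursive scheme is what we actually use for larger l), together with two simple sanity checks. The function name and the checks are ours.

\begin{lstlisting}[language=Python]
import numpy as np
from math import factorial, sqrt, cos, sin

def wigner_d(l, m, n, theta):
    """d^l_mn(theta) via the direct sum; t runs over all values that lead to
    non-negative factorial arguments (slow for large l, fine for small bands)."""
    d = 0.0
    for t in range(0, 2 * l + 1):
        if min(l + m - t, l - n - t, t, t + n - m) < 0:
            continue
        num = sqrt(factorial(l + m) * factorial(l - m) * factorial(l + n) * factorial(l - n))
        den = factorial(l + m - t) * factorial(l - n - t) * factorial(t) * factorial(t + n - m)
        d += ((-1) ** t * (num / den)
              * cos(theta / 2.0) ** (2 * l + m - n - 2 * t)
              * sin(theta / 2.0) ** (2 * t + n - m))
    return d

theta = 0.7
print(np.isclose(wigner_d(1, 0, 0, theta), np.cos(theta)))        # d^1_00 = cos(theta)
D1 = np.array([[wigner_d(1, m, n, theta) for n in (-1, 0, 1)] for m in (-1, 0, 1)])
print(np.allclose(D1 @ D1.T, np.eye(3)))                          # d^l(theta) is real orthogonal
\end{lstlisting}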
§.§ Properties of Wigner Matrices
Orthogonality:
The Wigner D-matrix elements form a complete set of orthogonal functions over the Euler angles <cit.>:
∫_{ϕ,θ,ψ} D^l_mn(ϕ,θ,ψ)\, \overline{D^{l'}_m'n'(ϕ,θ,ψ)}\, sinθ\,dϕ dθ dψ = \frac{8π^2}{2l+1}δ_ll'δ_mm'δ_nn',
with Kronecker symbol δ.
Symmetry:
\overline{D^l_mn(ϕ,θ,ψ)} = (-1)^{m-n}\, D^l_{-m-n}(ϕ,θ,ψ).
Relations to Spherical Harmonics:
The D-Matrix elements with second index equal to zero, are proportional to Spherical Harmonic base functions <cit.>:
D^l_m0(ϕ,θ,ψ) = √(4π/2l+1)Y^l_m(ϕ,θ).
Relations to Legendre Polynomials:
The Wigner small d-Matrix elements with both indices set to zero are related to Legendre polynomials <cit.>:
d^l_00(θ) = P^l(cos(θ)).
§ CLEBSCH-GORDAN COEFFICIENTS
Clebsch-Gordan CoefficientsAngular Coupling
Clebsch-Gordan Coefficients (CG) of the form
⟨ lm|l_1 m_1, l_2 m_2 ⟩
are commonly used for the representation of direct sum decompositions of SO(3) tensor couplings <cit.>.
The CG define the selection criteria for couplings and are by definition only unequal to zero if the constraints
m=m_1+m_2 and |l_1-l_2|≤ l≤ l_1 +l_2
hold. In most cases non-zero Clebsch-Gordan Coefficients are not evaluated directly; rather, we utilize their orthogonality
and symmetry properties to reduce and simplify coupling formulations. The quite complex closed form for the computation of CG can be
found in <cit.>.
§.§ Properties of Clebsch-Gordan Coefficients
Some useful properties of Clebsch-Gordan Coefficients <cit.>:
Exceptions:
For l=0 the CG are:
⟨ 00|l_1m_1,l_2m_2 ⟩ = δ_{l_1,l_2}δ_{m_1,-m_2}\frac{(-1)^{l_1-m_1}}{√(2l_2+1)}
and for l=(l_1+l_2) and m_1=l_1, m_2=l_2:
⟨ (l_1+l_2)(l_1+l_2)|l_1l_1,l_2l_2 ⟩ = 1.
Orthogonality
Orthogonality:
∑_l=|l_1-l_2|^l_1+l_2∑_m=-l^l⟨ lm|l_1m_1,l_2m_2⟩⟨ lm|l_1m_1',l_2m_2'⟩ = δ_m_1,m_1'δ_m_2,m_2'
∑_m_1m_2⟨ lm|l_1m_1,l_2m_2⟩⟨ l'm'|l_1m_1,l_2m_2⟩ = δ_l,l'δ_m,m'.
Symmetry
Symmetry: Some symmetry properties of CG. There are even more symmetries <cit.>, but we only provide those
which we will use later on:
⟨ lm|l_1m_1,l_2m_2⟩ = (-1)^l_1+l_2-l⟨ l(-m)|l_1(-m_1),l_2(-m_2)⟩
= (-1)^l_1+l_2-l⟨ lm|l_2m_2,l_1m_1 ⟩
= (-1)^l_1-m_1√(2l+1/2l_2+1)⟨ l_2(-m_2)|l_1m_1,lm⟩
= (-1)^l_2+m_2√(2l+1/2l_1+1)⟨ l_1(-m_1)|l(-m),l_2m_2⟩.
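When explicit values are needed, computer algebra systems provide them directly. As an illustrative (not part of our pipeline) example, sympy's CG(l_1, m_1, l_2, m_2, l, m) corresponds to our ⟨ lm|l_1m_1,l_2m_2 ⟩ and reproduces the l=0 exception and the selection rules:

\begin{lstlisting}[language=Python]
from sympy import sqrt, simplify
from sympy.physics.quantum.cg import CG

# <l m | l1 m1, l2 m2>  corresponds to  CG(l1, m1, l2, m2, l, m) in sympy
c = CG(2, 1, 2, -1, 0, 0).doit()
print(c)                                                        # -sqrt(5)/5
print(simplify(c - (-1) ** (2 - 1) / sqrt(2 * 2 + 1)) == 0)     # matches the l=0 exception

# selection rule: zero unless m = m1 + m2 and |l1-l2| <= l <= l1+l2
print(CG(2, 1, 2, 1, 0, 0).doit())                              # 0
\end{lstlisting}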
§ FAST AND ACCURATE CORRELATION IN SH
CorrelationRotation
So far we have introduced many basic properties of the Spherical Harmonic domain, which we are using now to derive more complex operations.
In analogy to the Fourier domain, where the Convolution Theorem enables us to compute a fast convolution and correlation of
signals in the frequency domain, we now derive fast convolution and correlation for the Spherical Harmonic domain which
we introduced in <cit.>.
Since some important features and feature selection
methods have been derived from the key ideas of this approach, we review this method in detail:
Correlation on the 2-Sphere:
The full correlation function C^#: SO(3)→ℝ of two signals f and g under the rotation R∈ SO(3) on a 2-sphere is
given as:
SH_corr( R) := ∫_S^2 f ( R g) sinΘdΦ dΘ.
Obviously, the computational cost of a direct evaluation approach - over all possible rotations R - is far too high, especially
when we are considering arbitrary resolutions of the rotation parameters. To cope with this problem, we derive a fast but accurate
method for the computation of the correlation in the harmonic domain.
Besides the obvious usage of the (cross)-correlation as similarity measure, the correlation on the 2-sphere can also be used
to perform a rotation estimation of similar signals on a sphere.
Rotation Estimation:
given any two real valued signals f_1 and f_2 on a 2-sphere which are considered to be equal
or at least similar under some rotational invariant measure (∼_ R):
f_1 ∼_ R f_2, R∈ SO(3),
the goal is to estimate the parameters of an arbitrary rotation R as accurately as possible, without any additional information other than f_1 and f_2, and considering arbitrary resolutions of the rotation parameters.
Related Approaches:
Recently, there have been proposals for several different methods which try to overcome the direct matching problem. Here, we are only
considering methods which provide full rotational estimates (there are many methods covering only rotations around the z-axis) without
correspondences.
A direct nonlinear estimation (DNE) which is able to retrieve the parameters for small rotations via
iterative minimization techniques was introduced in <cit.>. However, this method fails for larger rotations and was proposed only for
“fine tuning” of pre-aligned rotations.
Most other methods use representations in the Spherical Harmonic domain to solve the problem.
The possibility to recover the rotation parameters utilizing the spherical harmonic shift theorem (SHIFT) <cit.>
has been shown in <cit.>. This approach also uses an iterative minimization and was later refined by <cit.>.
Again, the estimation accuracy is limited to small rotations.
Rotation Estimation via Correlation:
The basis of our method was first suggested by <cit.>, presenting a fast correlation in two angles followed by a correlation in the
third Euler angle in an iterative way (known as FCOR).
This method
was later extended to a full correlation in all three angles by <cit.>. This approach allows the direct computation of the
correlation from the harmonic coefficients via FFT, but was actually not intended to be used to recover the rotation parameters.
Its angular resolution directly depends on the range of the harmonic expansion - making high angular resolutions rather expensive.
But FCOR was used by <cit.> to initialize the DNE and SHIFT “fine tuning” algorithms. The same authors
used a variation of FCOR (using the inverse Spherical Fourier Transform <cit.> instead of the FFT) in combination with
SHIFT <cit.> to recover robot positions from omni-directional images via rotation parameter estimation.
§.§ Basic SH-Correlation Algorithm
Starting from the full correlation function (<ref>)
we use the Convolution Theorem and substitute f and g with their SH expansions
(<ref>, <ref>)
, which leads to
SH_corr( R) = ∑_l=0^∞∑_m=-l^l∑_n=-l^l D^l_mn( R)f^l_m g^l_n.
The actual “trick” to obtain the fast correlation is to factorize the original rotation R(ϕ,θ,ψ) into
R = R_1 · R_2, choosing
R_1(ξ,π/2,0) and R_2(η,π/2,ω) with ξ = ϕ-π/2, η = π - θ, ω = ψ-π/2.
Using the fact that
D^l_mn(ϕ,θ,ψ) = e^-imϕd^l_mn(θ)e^-inψ,
where d^l is a real valued “Wigner (small) d-matrix” (see (<ref>)), and
D^l_mn( R_1· R_2) = ∑_h=-l^l D^l_nh( R_1) D^l_hm( R_2),
we can rewrite
D^l_mn( R) = ∑_h=-l^l d^l_nh(π/2) d^l_hm(π/2) e^-i(nξ + hη + mω).
Substituting (<ref>) into (<ref>) provides the final formulation for the correlation function
regarding
the new angles ξ, η and ω:
SH_corr(ξ, η, ω) = ∑_l=0^∞∑_m=-l^l∑_h=-l^l∑_m'=-l^l d^l_mh(π/2) d^l_hm'(π/2) f^l_m g^l_m'e^-i(mξ + hη + m'ω).
The direct evaluation of this correlation function is of course not possible - but it is rather straightforward to obtain the Fourier
transform of (<ref>), hence eliminating the missing angle parameters:FFT
SH_corr(m, h, m') = ∑_l=0^∞ d^l_mh(π/2) d^l_hm'(π/2) f^l_m g^l_m'.
Finally, the correlation SH_corr(ξ, η, ω) can be retrieved via inverse Fourier transform of SH_corr,
SH_corr(ξ, η, ω) = F^-1( SH_corr(m, h, m')),
revealing the correlation values in a three dimensional C^#(ξ, η, ω)-space.
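The following Python sketch summarizes the procedure for band-limited expansions. The coefficient arrays, the d-matrices at π/2 (e.g. obtained from the direct formula or the recursion given earlier) and the helper name are illustrative; the index shuffling, the i^(m+2h+m') shift and the zero padding discussed in the next subsections are omitted here.

\begin{lstlisting}[language=Python]
import numpy as np

def sh_correlate(f_hat, g_hat, d_half, b_max):
    """Sketch: build C#(m, h, m') for band-limited expansions and recover the
    correlation grid over (xi, eta, omega) via a 3D inverse FFT.

    f_hat[l][m + l], g_hat[l][m + l] : complex SH coefficients of band l
    d_half[l][m + l, n + l]          : Wigner d^l_mn(pi/2)
    """
    size = 2 * b_max + 1
    C = np.zeros((size, size, size), dtype=complex)
    for l in range(b_max + 1):
        d = d_half[l]
        for m in range(-l, l + 1):
            for h in range(-l, l + 1):
                for mp in range(-l, l + 1):
                    # negative orders wrap to the upper half of the grid (DFT layout)
                    C[m % size, h % size, mp % size] += (
                        d[m + l, h + l] * d[h + l, mp + l]
                        * f_hat[l][m + l] * np.conj(g_hat[l][mp + l]))
    # np.fft's inverse transform uses e^{+i...}; depending on the chosen FFT sign
    # convention the forward transform (np.fft.fftn) may be the appropriate call
    return np.fft.ifftn(C)
\end{lstlisting}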
§.§ Euler Ambiguities
The final obstacle towards the recovery of the rotation parameters stems from the Euler parameterization used in the correlation
function. Unfortunately, Euler angle formulations cause various ambiguities and cyclic shift problems.
One minor problem is caused by the fact that our parameter grid range is from 0, …, 2π in all dimensions, while the
angle θ is only defined θ∈ [0,π[. This causes two correlation peaks at θ = β and
θ = 2π - β for an actual rotation of θ = β. We avoid this problem by restricting the maximum search to
θ∈ [0,π[, hence neglecting half of the correlation space.
The formulation of the correlation function also causes further cyclic shifts in the grid representation of the Euler angles.
This way, the zero rotation R(ϕ=0, θ=0, ψ=0) does not have its peak at the zero position C^#(0,0,0) of the parameter
grid as
one would expect. For a more intuitive handling of the parameter extraction from the grid, such that the (0,0,0) position in the grid
corresponds to no rotation,
we extend the original formulation of
(<ref>) and use a shift in the frequency space in order to normalize the mapping of R(π,0,π) to C^#(0,0,0):
C^#(m, h, m') = ∑_{l=0}^{∞} d^l_mh(π/2) d^l_hm'(π/2) f^l_m g^l_{m'}· i^{m+2h+m'}.
§.§ Increasing the Angular Resolution
For real world applications, where the harmonic expansion is limited to some maximum expansion band b_max:
C^#(m, h, m') = ∑_l=0^b_max d^l_mh(π/2) d^l_hm'(π/2) f^l_m g^l_m'· i^m+2h+m',
the resulting (ξ, η, ω) space turns into a sparse and discrete space. Unfortunately, this directly affects the angular
resolution of the correlation.
Let us take a closer look at figure (<ref>): first of all, it appears (and our experiments in
section <ref> clearly support this assumption) that
the fast correlation function has a clear and stable maximum in a point on the grid. This is a very nice property, and we could simply
recover the corresponding rotation parameters which are associated with this maximum position. But there are still some major problems:
The image in Figure (<ref>) appears to be quite coarse - and in fact, the parameter grids for expansions up to
the 16th band (b_max=16) have
the size of 33× 33× 33 since the parameters m,m',h in (<ref>) are running from -b_max,…,b_max.
Given rotations up to 360^∘, this leaves us in the worst case with an overall estimation
accuracy of less than 15^∘.
In general, even if our fast correlation function (<ref>) would perfectly estimate the maximum position in all cases,
we would have to expect a worst case accuracy of
Err_corr = 2·\frac{180^∘}{2b_max} + \frac{90^∘}{2b_max},
accumulated over all three angles.
Hence, if we would like to achieve an accuracy of 1^∘, we would have to take the harmonic expansion roughly beyond the 180th band.
This
would be computationally expensive. Even worse, since we are considering discrete data, the signals on the sphere are band-limited.
So for smaller radii, higher bands of the expansion are actually not carrying any valuable information.
Due to this resolution problem, the fast correlation has so far only been used to initialize iterative algorithms <cit.><cit.>.
§.§.§ Sinc Interpolation.
Sinc interpolation
Now, instead of increasing the sampling rate of our input signal by expanding the harmonic transform, we have found an alternative way to
increase the correlation accuracy: interpolation in the frequency domain.
In general, considering the Sampling Theorem and given appropriate discrete samples a_n with step size Δ_x of some continuous 1D
signal a(x), we can reconstruct the original
signal via sinc interpolation <cit.>:
a(x) = ∑_n=-∞^∞ a_n sinc(π(x/Δ_x -n)),
with
sinc(x) = \frac{sin(x)}{x}.
For a finite number of samples, (<ref>) changes to:
a(x) = ∑_{k=0}^{N} a_k \frac{sin(π(x/Δ_x-k))}{N\, sin(π(x/Δ_x-k)/N)}.
This sinc interpolation features two nice properties <cit.>: it entirely avoids aliasing errors and it can easily be applied in
the discrete Fourier space. Given the DFT coefficients α_n, n=0,1, …, N-1 of the discrete signal a_n, n=0,1, …, N-1,
the sinc interpolation is implemented by adding a zero padding between α_(N/2)-1 and α_(N/2).
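In 1D, this padding trick looks as follows; the snippet is a hedged numpy illustration (names and the test signal are ours), where the Nyquist bin is split so that the padded spectrum stays conjugate-symmetric:

\begin{lstlisting}[language=Python]
import numpy as np

def sinc_interpolate(a, pad):
    """Upsample a real 1D signal (even length N) by zero padding its DFT
    between the coefficient indices (N/2)-1 and N/2, as described above."""
    N = len(a)
    A = np.fft.fft(a)
    nyq = A[N // 2] / 2.0            # split the Nyquist bin for exactness
    A_pad = np.concatenate([A[:N // 2], [nyq], np.zeros(pad - 1), [nyq], A[N // 2 + 1:]])
    return np.fft.ifft(A_pad).real * (N + pad) / N

# usage: band-limited samples are interpolated exactly at the new grid points
n = np.arange(16)
a = np.cos(2 * np.pi * 3 * n / 16)
up = sinc_interpolate(a, pad=16)     # 32 samples over the same interval
print(np.allclose(up[::2], a))       # the original samples are preserved
\end{lstlisting}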
Returning to our original correlation problem, it is easy to see that the (m, h, m')-space in (<ref>)
is actually nothing else
but a discrete 3D Fourier spectrum. So we can directly apply the 3D extension of (<ref>) and add a zero padding into the
(m, h, m')-space. This way, we are able to drastically increase the resolution of our correlation function at very low additional
cost (see below for implementation issues as well as suitable pad sizes). Figure (<ref>)
shows the effect of
the interpolation on the correlation matrix for different pad sizes p.
It has to be noted that even though the sinc interpolation implies some smoothing characteristics to the correlation matrix,
the maxima remain fixed to singular positions in the grid.
Theoretically, we are now finally able to reduce the worst case accuracy to arbitrarily small angles for any given band:
Err_corr^pad = 2·\frac{180^∘}{2b_max + p} + \frac{90^∘}{2b_max + p}.
Of course, the padding approach has practical limitations - inverse FFTs are becoming computationally expensive at some point. But
as our experiments in <ref> show, resolutions below one degree are possible even for very low expansions.
FFT
Implementation: The implementation of the inverse FFT in (<ref>) combined with the frequency space
padding requires some care: we need an inverse complex to real FFT with an in-place mapping (the grid in the frequency space has the same
size as the resulting grid in ℝ^3). Most FFT implementations are not providing such an operation. Due to the symmetries
in the frequency space not all complex coefficients need to be stored, hence most implementations are using reduced grid sizes.
We can avoid the tedious construction of such a reduced grid from C^# by using an inverse complex to complex FFT
and taking only the real part of the result.
In this case, we only have to shuffle the coefficients of C^#, which can be done via simple modulo operations while simultaneously
applying the padding. We rewrite (<ref>) to:
C^#(a, b, c) = ∑_{l=0}^{b_max} d^l_mh(π/2) d^l_hm'(π/2) f^l_m g^l_{m'}· i^{m+2h+m'},
where
s:=2bp, a:=(m + s + 1)mod s, b:=(h + s + 1)mod s, c:=(m' + s + 1)mod s.
Concerning the pad size: due to the nature of the FFT, most implementations achieve notable speed-ups for certain grid sizes. So it
is very useful to choose the padding in such a way that the overall grid size has, e.g., prime factor decompositions of mostly small primes
<cit.>.
§.§ Rotation Parameters
Finally, we are able to retrieve the original rotation parameters.
For a given correlation peak
at the grid position c(x,y,z), with maximum harmonic expansion b and padding p the rotation angles are:
ϕ = { π + (2π - xΔ)   for xΔ > π
      π - xΔ            otherwise,
θ = { 2π - yΔ           for yΔ > π
      yΔ                 otherwise,
ψ = { π + (2π - zΔ)    for zΔ > π
      π - zΔ             otherwise,
with Δ = 2π/(b+p).
The resulting rotation estimates return very precise and unique parameter sets. Only one ambiguous
setting has to be noted: for θ=0,π all zyz'-Euler formulations which satisfy ϕ+ψ=2π encode the very same
rotation (see Figure (<ref>)). This is actually not a problem for our rotation estimation task,
but it might be quite confusing especially in the case of numerical evaluation of the estimation accuracy.
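The mapping from a peak position to the angles is easily implemented; the following snippet is a direct transcription of the case distinctions above (variable names are ours):

\begin{lstlisting}[language=Python]
import numpy as np

def peak_to_euler(x, y, z, b_max, pad):
    """Map the grid position (x, y, z) of a correlation peak back to the
    Euler angles (phi, theta, psi), following the case distinctions above."""
    delta = 2.0 * np.pi / (b_max + pad)
    xd, yd, zd = x * delta, y * delta, z * delta
    phi = np.pi + (2 * np.pi - xd) if xd > np.pi else np.pi - xd
    theta = (2 * np.pi - yd) if yd > np.pi else yd
    psi = np.pi + (2 * np.pi - zd) if zd > np.pi else np.pi - zd
    return phi, theta, psi

# a peak at the grid origin corresponds to R(pi, 0, pi) (see the shift above)
print(peak_to_euler(0, 0, 0, b_max=16, pad=16))   # (pi, 0.0, pi)
\end{lstlisting}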
§.§ Normalized Cross-Correlation
Cross-Correlation
In many cases, especially when one tries to estimate the rotation parameters between non-identical objects,
it is favorable to normalize the (cross-)correlation results. We follow an approach which is widely known from
the normalized cross-correlation of 2D images: First, we subtract the mean from both functions prior to the
correlation and then divide the results by the variances:
SH_corr-norm( R) := ∫_{S^2}\frac{(f-\bar{f})\, ( R (g-\bar{g}))}{σ_fσ_g}\, sinΘdΦ dΘ.
Analogous to the Fourier transform, we obtain the expected values \bar{f} and \bar{g} directly from the 0th
SH coefficient. The variances σ_f and σ_g can be estimated from the band-wise energies:
σ_f ≈√(∑_l |f_l|^2).
§.§ Simultaneous Correlation of Signals on Concentric Spheres
In many applications we consider local signals which are spread over the surfaces of several concentric spheres with different radii.
Instead of computing the correlation for each surface separately, we can simply extend (<ref>) to compute
the correlation over all signals at once.
This can be achieved by the use of a single correlation matrix C^#. We simply add the SH_corr(m, h, m')
(<ref>) for all radii and retrieve the combined correlation matrix C^# via inverse FFT as before.
§.§ Complexity
Following the implementation given in section <ref>, we obtain the harmonic expansion to band b_max at each
point of a volume with m voxels in O(m(b_max)^2 + (m log m)). Building the correlation matrix C^# at each point
takes O((2b_max)^4) plus the inverse FFT in O((b_max+p)^3 log (b_max+p)^3).
Parallelization: Further speed-up can be achieved by parallelization (see section <ref>): the
transformation into the harmonic domain can be parallelized as well as the point-wise computation of C^#.
§ CONVOLUTION IN SH
Convolution
After the fast correlation has been introduced, it is obvious to also take a look at the convolution in the harmonic domain. If we are only
interested in the result of the convolution of two signals at a given fixed rotation, we can apply the so-called “left”-convolution.
§.§ “Left”-Convolution
Convolution
We define the “left”-convolution of two spherical functions f and g in the harmonic domain as f * g.
Following the Convolution Theorem this convolution is given as:
(f* g)^l_m = 2π√(4π/2l+1)f^l_m·g^l_0.
Note that this definition is asymmetric and performs an averaging over the translations (rotations) of the “left” signal.
The “left”-convolution is quite useful, but for our methods we typically encounter situations like in the case of the fast correlation,
where we need to evaluate the convolution at all possible rotations of two spherical functions.
§.§ Fast Convolution over all Angles
Convolution
Following the approach used for the fast correlation, we introduce a method for the fast computation of full convolutions over all angles on
the sphere in a very similar way:
Again, the full convolution function SH_conv: SO(3)→ℝ of two signals f and g under the rotation R∈ SO(3) on a 2-sphere
is
given as:
SH_conv( R) := ∫_S^2 f ( Rg) sinΘdΦ dΘ.
Applying the same steps as in the case of the correlation, we obtain a convolution matrix:
C^*(m, h, m') = ∑_l=0^∞ d^l_mh(π/2) d^l_hm'(π/2) f^l_m g^l_m'.
Analog to equation (<ref>),
C^*(ξ, η, ω) = F^-1(C^*),
an inverse Fourier transform reveals the convolution f * g for each possible rotation in the three dimensional C^*(ξ, η, ω)-space.
Regarding computational complexity and angular resolution, this convolution method shares all the properties of the fast correlation
(see sections <ref> to <ref>).
§ VECTORIAL HARMONICS
Vectorial Harmonics
So far, we have exploited and utilized the nice properties of the harmonic expansion of scalar valued functions on S^2 in Spherical
Harmonics to derive powerful methods like the fast correlation. These methods operate on single-channel scalar input in the form of gray-scale
volumes, which is one of the most common data types in 3D image analysis. But there are two equally important data types: multi-channel
scalar input (e.g. RGB colored volumes) and 3D vector fields (e.g. from gradient data).
In the first case, a harmonic expansion of multi-channel scalar input is straightforward: since rotations do not mix the individual
channels, one can simply combine the Spherical Harmonic expansions of each individual channel (e.g. see section <ref>).
For 3D vector fields, the harmonic expansion turns out to be less trivial, i.e. if we rotate the field, we are not only changing the
position of the individual vectors, but we also have to change the vector values accordingly. This dependency can be modeled by the use
of Vectorial Harmonics (VH).
Given a vector valued function f:S^2→ℝ^3 with three vectorial components [x,y,z] = f(Φ,Θ) and parameterized in
Euler angles (Fig. <ref>) ϕ,θ,ψ, we can expand f in Vectorial Harmonics:
f(Φ,Θ) = ∑_l=0^∞∑_k=-1^1∑_m=-(l+k)^(l+k) f^l_km Z ^l_km(Φ,Θ),
with scalar harmonic coefficients f^l_km and the orthonormal base functions:
Z^l_km =
(
 ⟨ 1 1 | (l+k) m, l (1-m) ⟩ Y^l_{1-m}
 ⟨ 1 0 | (l+k) m, l (-m) ⟩ Y^l_{-m}
 ⟨ 1 (-1) | (l+k) m, l (-1-m) ⟩ Y^l_{-1-m}
)^T.
Figure <ref> visualizes the first two bands of these base functions as vector fields on a sphere. We define the
forward Vectorial Harmonic transformation as
VH( f) := f, with f^l_km = ∫_Φ,Θ Z^l_(-1)m(Φ,Θ) f[-1](Φ,Θ) sinΘdΦ dΘ
+ ∫_Φ,Θ Z^l_(0)m(Φ,Θ) f[0](Φ,Θ) sinΘdΦ dΘ
+ ∫_Φ,Θ Z^l_(1)m(Φ,Θ) f[1](Φ,Θ) sinΘdΦ dΘ,
where f[-1] returns the scalar function on S^2 which is defined by the complex transformation (<ref>)
of the z component of the vector-valued f.
The backward transformation is defined as:
VH^-1( f(Φ,Θ)) := ∑_{l=0}^{∞}∑_{k=-1}^{1}∑_{m=-(l+k)}^{(l+k)} f^l_km Z^l_km(Φ,Θ).
In our case, the Vectorial Harmonics are defined to operate on vector fields with complex vector coordinates.
For fields of real valued vectors r(x,y,z)∈ℝ^3, we need to transform the vector coordinates to ℂ^3
according to the Spherical Harmonic relation:
u∈ℂ^3:
u :=
(
 \frac{x-iy}{√2}
 z
 \frac{x+iy}{√2}
).
§.§ Deriving Vectorial Harmonics
There have been several different approaches towards Vectorial Harmonics, like <cit.> or <cit.>. All use a slightly different
setting and notation. For our purposes, we derive our methods from a very general theory of Tensorial Harmonics <cit.>,
which provides expansions for arbitrary real valued tensor
functions f on the 2-sphere:
f(Φ,Θ) := ∑_l=0^∞∑_k=-d^d∑_m=-(l+k)^(l+k) f^l_km Z^l_km(Φ,Θ),
where f^l_km is the expansion coefficient of the l-th band of tensor order d and harmonic order m.
The orthonormal Tensorial Harmonic base functions Z^l_km are given as:
Z^l_km := e^(l+k)_m ∘_1 Y^l,
with the Spherical Harmonic bands Y^l. The e^l_m are elements of the standard Euclidean base of
ℂ^{2l+1}, and ∘_d denotes a bilinear form connecting tensors V_{l_1} and V_{l_2} of different ranks:
∘_d: V_{l_1} × V_{l_2} → ℂ^{2d+1},
where l_1,l_2 ∈ ℕ have to satisfy |l_1-l_2| ≤ d ≤ l_1+l_2. ∘_d is computed as follows:
( e^d_m)^T ( v ∘_d u) := ∑_{m=m_1+m_2} ⟨ dm|l_1m_1,l_2m_2 ⟩ v_{m_1} u_{m_2}.
See <cit.> for details and proofs.
If we limit the general form to tensors of order one (d:=1) and use <ref> for the computation of the
base functions <ref>, we directly obtain Vectorial Harmonic expansions as in
<ref>.
§.§ Useful Properties of Vectorial Harmonics
Vectorial Harmonics inherit most of the favorable properties of the underlying Spherical Harmonics, such as orthonormality.
Orthonormality:
∫_Φ,Θ( Z^l_km(Φ,Θ))^T Z^l'_k'm'(Φ,Θ) sinΘdΦ dΘ =
4π/(1/3)(2l+1)(2(l+k)+1)δ_l,l'δ_k,k'δ_m,m'.
§ ROTATIONS IN VECTORIAL HARMONICS
RotationVectorial Harmonics
The analogy of Vectorial Harmonics to Spherical Harmonics continues also in the case of rotation in the harmonic domain.
Complex 3D vector valued signals f with Vectorial Harmonic coefficients f are rotated
<cit.> by:
R f = ∑_l=0^∞∑_k=-1^k=1∑_m=-(l+k)^l+k∑_n=-(l+k)^l+k D^l+k_mn( R)
f^l_km Z^l_kn,
which is a straightforward extension of (<ref>). One notable aspect is that we need to combine Wigner-D matrices
of the upper l+1 and lower l-1 bands in order to compute the still band-wise rotation of f^l_km.
Hence, we rotate f^l_km by R(ϕ,θ,ψ) via band-wise multiplications:
f'= R(ϕ,θ,ψ) f⇒ f'^l_km = ∑_n=-(l+k)^l+k D^l+k_mn(ϕ,θ,ψ) f^l_kn.
Due to the use of the zyz'-convention, we have to handle inverse rotations with some care:
f'= R^-1(ϕ,θ,ψ) f⇒ f'^l_km = ∑_n=-(l+k)^l+k D^l+k_mn(-ψ,-θ,-ϕ) f^l_kn.
§ FAST CORRELATION IN VECTORIAL HARMONICS
Correlation
We use local dot-products of vectors to define the correlation under a given rotation R in Euler angles ϕ, θ, ψ as:
( f# g)( R) := ∫_Φ,Θ⟨ f(Φ,Θ), R g(Φ,Θ)⟩ sinΘdΦ dΘ.
Using the rotational properties (<ref>) of the Vectorial Harmonics, we can extend the fast correlation approach (see section
<ref>) from SH to VH. Starting from (<ref>) we insert (<ref>) into (<ref>) and obtain:
VH_corr( R) = ∑_l=0^l=∞∑_k=-1^k=1∑_m,n=-(l+k)^(l+k)D^l+k_mn( R) f^l_km g^l_kn.
Analogous to (<ref>), substituting (<ref>) into (<ref>) provides the final
formulation for the correlation function regarding the new angles ξ, η and ω:
VH_corr(ξ, η, ω) = ∑_{l=0}^{∞}∑_{k=-1}^{1}∑_{m,h,m'=-(l+k)}^{(l+k)} d^{l+k}_mh(π/2)
d^l+k_hm'(π/2) f^l_km g^l_km'e^-i(mξ + hη + m'ω).
Following (<ref>) we obtain the Fourier transform of the correlation matrix C^# (<ref>)
to eliminate the missing angle parameters:
C^#(m, h, m') = ∑_l=0^l=∞∑_k=-1^k=1 d^l+k_mh(π/2) d^l+k_hm'(π/2) f^l_km g^l_km'.
Again, the correlation matrix C^#(ξ, η, ω) can be retrieved via inverse Fourier transform of C^#:
C^#(ξ, η, ω) = F^{-1}( C^#(m, h, m')),
revealing the correlation values in a three dimensional (ξ, η, ω)-space.
§ FAST CONVOLUTION IN VECTORIAL HARMONICS
Convolution
The fast convolution C^* in Vectorial Harmonics can be directly derived from sections <ref> and
<ref>:
C^*(m, h, m') = ∑_{l=0}^{∞}∑_{k=-1}^{1} d^{l+k}_mh(π/2) d^{l+k}_hm'(π/2) f^l_km g^l_km'.
Analog to equ. (<ref>), we reconstruct C^*(ξ, η, ω) from (<ref>) via
inverse Fourier transform:
C^*(ξ, η, ω) = F^-1( C^*(m, h, m')).
CHAPTER: IMPLEMENTATION
Implementation
So far, we derived the mathematical foundations for the computation of local features with a parameterization on the 2-sphere (see chapter
<ref>) in a setting with strong continuous preconditions: the input data in form of functions on 3D volumes
X: ℝ^3 →ℝ is
continuous, and the harmonic frequency spaces of the transformed neighborhoods S[r]( x) are infinitely large
because we assume to have no band limitations. This setting enables us to nicely derive sound and easy to handle methods,
however, it is obvious that these preconditions cannot be met in the case of real world applications where we have to deal with discrete
input data
on a sparse volume grid (X: ℤ^3 →ℝ) and we have to limit the harmonic transformations to an
upper frequency (band-limitation to b_max). Hence, we somehow have to close this gap when applying the theoretically derived feature
algorithms to real problems.
In general, we try to make this transition to the continuous setting as early as possible so that we can avoid discrete operations which
are usually causing additional problems, i.e. the need to interpolate. Since we derive all of our feature algorithms (chapters
<ref> - <ref>) in the locally expanded harmonic domain, we actually only have to worry about the
transition of the local neighborhoods S[r]( x) in X by SH(X|_S[r]( x))
(see section <ref>) and VH( X|_S[r]( x)) (see section <ref>).
Hence, we need sound Spherical and Vectorial Harmonic transformations for discrete input data which handle the arising sampling problems
and the needed band limitation. We derive these transformations in the next sections <ref>,
<ref> and discuss some relevant properties like complexity.
Another issue we frequently have to face in the context of an actual implementation of algorithms is the question of parallelization.
We tackle the basics of parallelization in section <ref>.
The introduction of the actual features in the next chapters always follows the same structure: first, we derive the theoretic
foundation of the feature in a continuous setting, and then we give details on the actual discrete implementation based on
the methods we derive in this chapter.
§ DISCRETE SPHERICAL HARMONIC TRANSFORM
Spherical Harmonics
Implementation
We are looking for a discrete version of the Spherical Harmonic transform, i.e. we want to obtain the frequency decomposition of local
discrete spherical neighborhoods S[r]( x) (<ref>) in X:ℤ^3 →ℝ.
If we disregard the sampling issues for a moment, the discrete implementation is rather straightforward: first, we pre-compute
discrete approximations of the orthonormal harmonic base functions Y^l_m[r, x] (<ref>) which are centered in x.
In their discrete version, the Y^l_m are parameterized in Euclidean coordinates x∈ℤ^3 rather than Euler angles:
Y^l_m: ℤ^3 →ℂ.
Next, we obtain the
transformation coefficients SH(X|_ S[r]( x))^l_m via the discrete dot-product:
SH(X|_ S[r]( x))^l_m := ∑_ x_i∈ S[r]( x) X( x_i)
Y^l_m[r, x]( x_i).
For most practical applications we have to compute the harmonic transformation of the neighborhoods around each voxel x,
which can be computed very efficiently: since (<ref>) is actually determined via convolution, we can apply the
standard convolution theorem “trick” and perform a fast convolution via FFT to obtain SH^l_m(X): ℝ^3
→ℂ^b (with b = b_max(b_max-1)):
SH[r](X)^l_m = X * Y^l_m[r].
This leaves us with the problems to construct correct base function templates Y^l_m[r], which is essentially a sampling issue, and to find an appropriate b_max.
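A minimal Python sketch of this convolution-based transform is given below. The dict of precomputed templates is a placeholder for the construction discussed in the next section, and whether the template additionally needs to be flipped and conjugated depends on how it is constructed; the sketch follows the convolution formulation above literally.

\begin{lstlisting}[language=Python]
import numpy as np
from scipy.signal import fftconvolve

def sh_volume_transform(X, templates):
    """Sketch: voxel-wise SH coefficients of X by one fast convolution per (l, m).

    X                 : 3D numpy array (the volume)
    templates[(l, m)] : precomputed discrete base function template Y^l_m[r]
                        as a small complex 3D array (see the next section)
    """
    coeffs = {}
    for (l, m), Y in templates.items():
        # X * Y^l_m[r]; depending on the template construction a flipped and
        # conjugated template may be required to realize the local dot product
        coeffs[(l, m)] = fftconvolve(X, Y, mode='same')
    return coeffs
\end{lstlisting}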
§.§ Correct Sampling
The key problem of obtaining discrete approximations of continuous signals is to avoid biased results due to false sampling. In the case of
the discrete harmonic transformations we have to handle two different sampling steps: first, the discretization of the input data, and
second the construction of the base function templates Y^l_m[r]. In both cases, we can rely on the Sampling Theorem <cit.>
<cit.> to obtain correct discretizations:
If a function x(t) contains no frequencies higher than B cycles per second, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart <cit.> (“cycles per second” is equivalent to the modern unit hertz).
The sampling rate during the discretization of the input data is usually bound by the imaging device. While most modern microscope systems
obey the sampling theorem (see part III), other data sources might be more problematic. Hence, we are forced to introduce an artificial
band-limitation, i.e. apply a low pass filtering on the input data whenever we face insufficient sampling.
The construction of correct discrete base function templates Y^l_m[r] is more challenging because, due to the dot-product nature of the
discrete transformation (<ref>), the sampling rate is fixed by the
resolution of the input data and dominantly by the radius r, i.e. we cannot simply increase the sampling for higher frequency bands
l (see figure <ref>).
This results in an insurmountable limitation for our discrete harmonic transformations: the maximum expansion band b_max is bound by the
radius: given small radii, the respective spherical neighborhood S[r] only provides a sufficient number of sampling points for low-frequency base functions.
Furthermore, the discretization of convex structures like spheres easily causes aliasing effects we have to avoid. We cope with this problem
by a Gaussian smoothing in radial direction. Figure <ref> shows an example of a discrete base function template.
§.§ Band Limitation b_max
Assuming that we obey the sampling theorem during the construction of Y^l_m[r] (see previous section), we still have to worry about
the effect of the band limitation of the harmonic expansion and reasonable choice of b_max below the theoretic limit.
The good news is that reconstructions from the harmonic domain are strictly band-wise
operations (e.g. see (<ref>)). Hence, the actual band limitation has no effect on the correctness of the lower
frequencies: the band limitation simply acts as a low-pass filter on the spherical signal. Figure <ref> shows the effects of the band-limitation in a synthetic example.
One should also keep in mind that a limitation of higher frequencies directly affects the angular resolution SH_res
of the fast correlation and convolution in the harmonic domain (see section <ref>).
In the end, the selection of b_max is always a tradeoff between computational speed and maximum resolution.
§.§ Invariance
Another practical aspect of the harmonic expansion is that we are able to obtain additional invariance or robustness properties directly
from the transformation implementation.
§.§.§ Gray-Scale Robustness
The most obvious example is the simple “trick” to become robust against gray-scale changes:
As mentioned before in section <ref>, one very convenient property of the spherical harmonic transformations is that analogous
to the Fourier transform, the constant component of the
expanded signal is given by the 0th coefficient SH[r](X)^0_0.
Hence, we can easily achieve invariance towards shift of the mean gray-value in scalar operations if
we simply normalize all coefficients by the 0th component.
Usually we denote this invariance only as “gray-scale robustness” since most practical applications include more complex
gray-scale changes than this approach can handle.
§.§.§ Scale Normalization
It is also very easy to normalize the SH coefficients to compensate known changes in the scale of the data. In case we need to compute
comparable features for data of different scale, we can normalize the coefficients
SH[r] by the surface of the base functions, which is 4π r^2 in a continuous setting. In the discrete case, we have to take the
Gaussian smoothing into account: we simply use the sum over Y_0^0 as normalization coefficient.
§.§.§ Resolution Robustness
A typical problem which arises in the context of “real world” volume data is that we sometimes have to deal with non-cubic voxels, i.e.
the input data is the result of a sampling of the real world which has not been equidistant in all spatial directions.
Such non-cubic voxels cause huge problems when we try to obtain rotation invariant features. Fortunately, we can cope with this problem
during the construction of the base function templates Y^l_m[r]: as figure <ref> shows, we simply adapt the voxel
resolution
of the input data to the templates. Usually, we can obtain the necessary voxel resolution information directly from the imaging device.
§.§ Complexity
Concerning the voxel-wise local transformation for a single radius SH[r](X) of a 3D volume X with m voxels,
we obtain the harmonic expansion to band b_max in O(m(b_max)^2 + (m log m)) if we follow the fast convolution approach (
<ref>) and assume the base function templates are given.
Since we have to extract n=b_max(b_max-1) coefficients, the memory consumption lies in O(m(b_max)^2).
§.§ Parallelization
Further speed-up can be achieved by parallelization (see section <ref>): the data can be transformed into
the harmonic domain by parallel computation of the coefficients.
For C CPU cores with C≤ (b_max)^2 and C≤ m we obtain:
O(m(b_max)^2/ C) +O( (m log m)/ C).
§.§ Fast Spherical Harmonic Transform
Recently, there has been an approach towards a fast Spherical Harmonic transform (fSHT) <cit.> for discrete signals.
The fSHT uses a similar approach as in the FFT speed-up of the DFT and performs the computation of the entire inverse transformation in
O(N log^2 N), where N is the number of sampling points.
Since we hardly need the inverse transformation and only a small set of different extraction radii throughout this work, we prefer
a simple caching of the pre-computed base functions to achieve faster transformations over the quite complex fSHT method. Additionally,
for real valued input data, we can exploit the symmetry properties (<ref>):
\overline{Y^l_m} = (-1)^m Y^l_{-m},
allowing us to actually compute only the positive half of the harmonic coefficients.
§ DISCRETE VECTORIAL HARMONIC TRANSFORM
Vectorial Harmonics
Implementation
For the extraction of features on 3D vector fields,
we need a discrete version of the Vectorial Harmonic transform (see section <ref>), i.e. we need to obtain the frequency
decomposition of 3D vectorial signals at discrete positions on the discrete spherical neighborhoods
S[r]( x) (<ref>) in X:ℤ^3 →ℝ^3.
As for the discrete Spherical Harmonic transform, we pre-compute
discrete approximations of the orthonormal harmonic base functions Z^l_k,m[r, x] (<ref>)
which are centered in x.
In their discrete version, the Z^l_k,m are parameterized in Euclidean coordinates x∈ℤ^3 rather than Euler
angles:
VH( X|_ S[r]( x))^l_k,m := ∑_ x_i∈ S[r]( x) X( x_i)
Z^l_k,m[r, x]( x_i).
For most practical applications we have to compute the harmonic transformation of the neighborhoods around each voxel x,
which can be computed very efficiently: since (<ref>) is actually determined via convolution, we can apply the
standard convolution theorem “trick” and perform a fast convolution via FFT to obtain VH^l_k,m( X): ℝ^3
→ℂ^b:
VH[r]( X)^l_k,m = X * Z^l_k,m[r].
The sampling and non-cubic voxel problems can be solved in the very same way as for the Spherical Harmonics. Figure
<ref> shows an artificial reconstruction example.
The complexity of a vectorial transformation grows by a factor of three compared to the Spherical Harmonics, but we are able to apply the
same parallelization techniques.
§.§ Gray-Scale Invariance
The notion of gray-scale invariance might appear a bit odd, since vector fields are not directly associated with scalar gray values.
But it is common practice to obtain the 3D vector fields by the gradient evaluation of 3D scalar data (see part III). Hence, it is
of major interest to know if and how a 3D gradient vector field changes under gray-scale changes of the underlying data.
<cit.> showed that the gradient direction is in fact invariant under additive and multiplicative gray-scale changes. Therefore,
we consider features based on Vectorial Harmonics to be gray-scale invariant - which is an important property for many applications.
§ PARALLELIZATION
Implementation
Parallelization
Modern computing architectures come with an increasing number of general computing units: standard PCs have multi-core CPUs
and more specialized computing servers combine several of these multi-core CPUs in a single system. This endorses the use of
parallel algorithms.
In this work, parallel computing is only a little side aspect - but one with great speed-up potential. We restrict ourselves
to very simple cases of parallelization algorithms: first, we only consider systems with shared memory where all computing units (we
refer to them as cores) share the same memory address space of a single system - hence, we explicitly disregard clusters.
Second, we only consider algorithmically very simple cases of parallelization where the individual threads run independently, i.e. we avoid
scenarios which would require mutual exclusion handling, while still going beyond the simplest cases of data parallelization.
We give more details on the actual parallelization at the individual description of each feature implementation.
CHAPTER: SH-FEATURES
SH-Features
In this chapter, we derive a set of local, rotation invariant features which are directly motivated by the sound
mathematical foundation for operations on the 2-sphere introduced in chapter <ref>. We take advantage of the
nice properties of Spherical Harmonics (<ref>) which allow us to perform fast feature computations in the
frequency domain.
Given scalar 3D volume data X, the transformation SH(X|_S[r]( x))
(<ref>) of local data on a sphere with radius r around the
center point x in Spherical Harmonics is nothing more than a change of the base-functions representing the initial data.
So the new base might provide us with a nice framework to operate on spheres, but we still have to perform the actual feature construction.
Primarily, we want to obtain rotation and possibly gray-scale invariance.
First we introduce a simple method to obtain rotational invariance: In section <ref> we review SH_abs features,
which use the fact that the band-wise energies of an SH representation do not change under rotation. This method is well
known from literature (i.e. <cit.>), but has its limitations.
To cope with some of the problems with SH_abs features, we introduced a novel rotation and gray-scale
invariant feature based on the SH
phase information <cit.>. We derive the SH_phase feature in section <ref>.
The third member of the SH-Feature class is a fast and also rotation invariant auto-correlation feature SH_autocorr
(section <ref>) which
is based on the fast correlation in Spherical Harmonics from section <ref>.
Finally, in section <ref>, we derive a complete local rotation invariant 3D feature from a global 2D image
feature introduced in <cit.>: the SH_bispectrum feature.
§ SH_ABS
SH-FeaturesSH_abs
The feature we chose to call SH_abs throughout this work is also known as “Spherical Harmonic Descriptor” and has been
used by several previous publications e.g. for 3D shape retrieval in <cit.>.
We use SH_abs as one of our reference features to evaluate the
properties and performance of our methods (see chapter <ref>).
§.§ Feature Design
SH_abs achieves rotation invariance by exploiting some basic principles of the Spherical Harmonic (<ref>)
formulation. Analogous to the Fourier transformation, where we can use the power spectrum as a feature, we use the absolute values of each
harmonic expansion band l as power of the l-th frequency in the Spherical Harmonic power spectrum:
( SH_abs[r]( x))^l := √(∑_{m=-l}^{l}|( SH(X|_S[r]( x)))^l_m|^2).
Rotation Invariance
Rotations R(ϕ,θ, ψ)∈ SO(3) (see section <ref>) are represented in the harmonic
domain in terms of band-wise multiplications of the expansions f^l with the orthonormal Wigner D-Matrices
D^l (<ref>).
The power spectrum of a signal f in Spherical Harmonics is given as (also see section <ref> for more details):
q(f,l) := \overline{(f^l)}^T f^l.
The D^l are unitary (<ref>), hence it is easy to show the rotation invariance of the band-wise
SH_abs entries of the power spectrum:
q(D^l( R)f^l) = \overline{(D^l( R)f^l)}^T D^l( R)f^l
= \overline{(f^l)}^T \overline{(D^l( R))}^T D^l( R) f^l
= \overline{(f^l)}^T f^l .
So, we note that a rotation has only a band-wise effect on the expansion but does not change the respective absolute values.
Hence, the approximation of the original data via harmonic expansion can be cut off at an
arbitrary band, encoding just the level of detail needed for the application.
Gray-Scale Robustness:
We can obtain invariance towards additive gray-scale changes by normalization by the 0th harmonic coefficient as described in section
<ref>.
§.§ Implementation
The implementation of the SH_abs is straightforward. We follow the implementation of the Spherical Harmonic transformation as
described in chapter <ref>.
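Given the local coefficients of a single voxel, the band-wise computation is only a few lines; the following Python sketch (an illustrative helper of our own, using the 0th-coefficient normalization mentioned above) shows the idea:

\begin{lstlisting}[language=Python]
import numpy as np

def sh_abs(coeffs, b_max, normalize=True):
    """Band-wise SH_abs feature from local SH coefficients of a single voxel.

    coeffs[(l, m)] : complex coefficient of the local expansion
    normalize      : divide by the 0th coefficient for gray-scale robustness
    """
    c00 = np.abs(coeffs[(0, 0)]) if normalize else 1.0
    feature = []
    for l in range(b_max + 1):
        energy = sum(np.abs(coeffs[(l, m)]) ** 2 for m in range(-l, l + 1))
        feature.append(np.sqrt(energy) / c00)
    return np.array(feature)
\end{lstlisting}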
Multi-Channel Data: SH_abs cannot directly combine data from several channels into a single feature. In case of
multi-channel data, we have to separately compute features for each channel.
§.§.§ Complexity
Following the implementation given in section <ref>, we obtain the harmonic expansion to band b_max at each
point of a volume with m voxels in O(m(b_max)^2 + (m log m)). The computation of the absolute values takes another O((b_max)^3).
Parallelization
Further speed-up can be achieved by parallelization (see section <ref>): the data can be transformed into
the harmonic domain by parallel computation of the coefficients and the computation of the absolute values can also be split into several
threads.
For C CPU cores with C≤ (b_max)^2 and C≤ m we obtain:
O(m(b_max)^3/ C) +O( m(b_max)^2 +(m log m)/ C)
§.§ Discussion
The SH-Features are a simple and straightforward approach towards local 3D rotation invariant features. They are computationally
efficient and easy to implement, however, the discriminative properties are quite limited. The band-wise absolute values only capture
the energy of the respective frequencies in the overall spectrum. Hence, we lose all the phase information, which leads to strong ambiguities
within the feature mappings. In many applications it is possible to reduce these ambiguities by the combination of SH-Features
which were extracted at different radii.
SH_abs Ambiguities: in theory, there is an infinite number of input patterns which are mapped on the same
SH-Feature just as there is an infinite number of possible phase shifts in harmonic expansions. However, one might argue that
this does not prevent a practical usage of the SH-Feature since we generally do not need completeness (see section <ref>).
But we still need discriminative features, and there are practical relevant problems where SH_abs is not powerful enough, as
figure <ref> shows.
§ SH_PHASE
SH-FeaturesSH_phase
Motivated by the ambiguity problems caused by neglecting the phase information in the SH_abs-Features
(see discussion in section <ref>) we presented an
oppositional approach in <cit.>. SH_phase-Features preserve only the phase information of the Spherical Harmonic
representation and disregard the amplitudes.
This approach is
further motivated by results known from Fourier transform, which showed that the characteristic information is dominant in the phase of
a signal's spectrum rather than in the pure magnitude of its coefficients <cit.>.
Following a phase-only strategy has the nice side-effect that since the overall gray-value
intensity is only encoded in the amplitude, the SH_phase method is gray-scale invariant.
Like the SH_abs-Features (from section <ref>) SH_phase-Features are computed band-wise, but instead
of a single radius SH_phase combines expansions at different radii r_1, r_2 into a feature.
§.§ Feature Design
The phase of a local harmonic expansion in band l at radius r is given by the orientation of the vector p^l[r], which contains
the 2l+1 harmonic coefficient
components of the band-wise local expansion (<ref>). Since the coefficients are changing when the underlying data is rotated,
the phase itself is not a rotational invariant feature.
p^l_m[r]( x) := ( SH(X|_S[r]( x)))^l_m /( SH_abs[r]( x))^l
Since we are often interested in encoding the neighborhood at several concentric radii, we can take advantage
of this additional information and construct a phase-only rotational invariant feature based on the band-wise relations of phases
between the different concentric harmonic series.
Fig. (<ref>) illustrates the basic idea: for a fixed band l, the relation (angle) between phases of harmonic expansions
at different radii is invariant towards rotation. Phases in the same harmonic band undergo the
same changes under rotation of the underlying data (see section <ref> for details), keeping the angle between the phases
of different radii constant. We encode this angle in terms of the dot-product of band-wise Spherical Harmonic expansions at radii
r_1, r_2:
( SH_phase[r_1,r_2]( x))^l := ⟨ p^l[r_1] , p^l[r_2] ⟩.
Rotation Invariance: the proof of the rotation invariance is rather straightforward basic linear algebra.
Rotations R X acting on (<ref>) give:
⟨ D^l p^l[r_1] , D^l p^l[r_2] ⟩ = \overline{(D^l p^l[r_1])}^T (D^l p^l[r_2])
= \overline{( p^l[r_1])}^T \overline{(D^l)}^T D^l p^l[r_2]
= \overline{( p^l[r_1])}^T (\overline{(D^l)}^T D^l) p^l[r_2]
= \overline{( p^l[r_1])}^T p^l[r_2]     (using the unitarity of D^l, i.e. \overline{(D^l)}^T D^l = I)
= ⟨ p^l[r_1], p^l[r_2]⟩ .
The rotation R of the underlying data can now be expressed
in terms of matrix multiplications with the same Wigner-D matrix D^l (<ref>).
Since the rotational invariance is achieved band-wise, the approximation of the original data via harmonic expansion can be cut off at an
arbitrary band, encoding just the level of detail needed for the application.
§.§ Implementation
The implementation of the SH_phase is straightforward. We follow the implementation of the Spherical Harmonic transformation as
described in section <ref> for the two radii r_1 and r_2. The band-wise computation of the phases and the evaluation
of the dot-product is also very simple.
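A corresponding sketch of this band-wise computation (an illustrative helper; coefficient dicts as in the SH_abs sketch) could look as follows:

\begin{lstlisting}[language=Python]
import numpy as np

def sh_phase(coeffs_r1, coeffs_r2, b_max):
    """SH_phase feature: band-wise dot products of the unit-norm (phase)
    coefficient vectors extracted at two concentric radii r1 and r2."""
    feature = []
    for l in range(b_max + 1):
        p1 = np.array([coeffs_r1[(l, m)] for m in range(-l, l + 1)])
        p2 = np.array([coeffs_r2[(l, m)] for m in range(-l, l + 1)])
        p1 = p1 / (np.linalg.norm(p1) + 1e-12)   # divide by SH_abs of the band
        p2 = p2 / (np.linalg.norm(p2) + 1e-12)
        # <p1, p2> (conjugating p1); one may take the real part for real features
        feature.append(np.vdot(p1, p2))
    return np.array(feature)
\end{lstlisting}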
Multi-Channel Data: SH_phase-Features can also directly combine data from several channels into a single feature:
we simply extract the harmonic expansions for the different radii from different data channels.
§.§.§ Complexity
Following the implementation given in section <ref>, we obtain the harmonic expansion to band b_max at each
point of a volume with m voxels in O(m(b_max)^2 + (m log m)). The computation of the dot-products and the phase vectors takes
another O((b_max)^3).
Parallelization
Further speed-up can be achieved by parallelization (see section <ref>): the data can be transformed into
the harmonic domain by parallel computation of the coefficients and the computation of the absolute values can also be split into several
threads.
For C CPU cores with C≤ (b_max)^2 and C≤ m we obtain:
O(m(b_max)^3/ C) +O( m(b_max)^2 +(m log m)/ C)
§.§ Discussion
Even though the SH_phase-Features are not complete either, their discrimination abilities tend to be better than those
of the SH_abs-Features (see section <ref>). Also, the additional gray-scale invariance is very useful in many
applications.
Intuitively, SH_phase encodes local changes between the different radii. This property is especially applicable for texture
classification or to find 3D interest points (see part III).
§ SH_AUTOCORR
SH-FeaturesSH_autocorr
The next approach to compute invariant features directly from the harmonic representation is motivated by the introduction of the
fast normalized cross-correlation in the harmonic domain (see introduction of chapter <ref>).
The cross-correlation SH_corr(f,g)
on two signals f,g ∈ S^2 is a binary operation SH_corr: S^2× S^2 →ℝ. Hence, it cannot be used
directly as a feature, where we require a mapping of individual local signals f∈ S^2 → H into some feature space
H⊆ℝ^n (see section <ref>).
A general and widely known method to obtain features from correlations is to compute the auto-correlation, e.g. <cit.>. In our case,
we propose the local SH_autocorr-Feature, which performs a fast auto-correlation of f ∈ S^2.
The auto-correlation under a given rotation R in Euler angles ϕ, θ, ψ is defined as:
(f # f)( R) := ∫_S^2 f ( R f) sinΘdΦ dΘ.
§.§ Feature Design
As for most of our other features, we first expand the local neighborhood f at radius r around the point x in
Spherical Harmonics, f:= SH(X|_S[r]( x)).
Then we follow the fast correlation method which we introduced in section <ref> to obtain the full correlation
C^# from equation (<ref>).
Invariance:
In order to obtain rotation invariant features, we follow the Haar-Integration approach (see chapter
<ref>) and integrate
over the auto-correlations at all possible rotations R. C^# holds the necessary auto-correlation results in a 3D
(ϕ,θ,ψ)-space (<ref>), hence we simply integrate over C^#,
SH_autocorr := ∫_ϕ,θ,ψκ(C^#(ϕ,θ,ψ)) sinθdϕ dθ dψ
and obtain a scalar feature. Additionally, we insert a non-linear kernel function κ to increase the separability. Usually, very
simple non-linear functions, such as κ(x):=x^2, κ(x):=x^3 or κ(x):= √(x), are sufficient.
Like in the case of the SH_abs-Features, we can obtain invariance towards additive gray-scale changes by normalization by the
0th harmonic coefficient. If we additionally normalize C^# as in (<ref>), SH_autocorr
becomes completely gray-scale invariant.
§.§ Implementation
We follow the implementation of the Spherical Harmonic transformation as described in chapter <ref> and the
implementation of the fast correlation from (<ref>).
In practice, where the harmonic expansion is bound by a maximal expansion band b_max, the integral (<ref>)
is reduced to the sum over the then discrete angular space C^#:
SH_autocorr = ∑_ϕ,θ,ψκ(C^#(ϕ,θ,ψ)).
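As a small illustration, a numpy sketch of this reduction could look as follows; the correlation grid C^# is assumed to be given by the fast correlation of the earlier chapter, and its (ϕ,θ,ψ) axis layout is an assumption. The optional sin(θ) weighting mirrors the continuous Haar measure, which the discrete sum above omits.

import numpy as np

def sh_autocorr(C, kappa=np.square, weight_sin_theta=False):
    # C: auto-correlation grid C^# over the discretised (phi, theta, psi) space,
    # axes assumed in that order (hypothetical layout); kappa is the non-linear kernel.
    K = kappa(C)
    if weight_sin_theta:
        theta = np.linspace(0.0, np.pi, C.shape[1])
        K = K * np.sin(theta)[None, :, None]  # optional Haar-measure weighting
    return float(np.sum(K))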
Multi-Channel Data: It is straightforward to combine the information from several data channels into a single
SH_autocorr-Feature: We simply use the same approach as described in section <ref>, where we
correlated the information of several different radii.
§.§.§ Complexity
Following the implementation given in chapter <ref>, we obtain the harmonic expansion to band b_max at each
point of a volume with m voxels in O(m(b_max)^2 + (m log m)). The complexity of the auto-correlation depends on b_max and
the padding parameter p (<ref>) and can be computed in O(m (b_max+p)^3 log((b_max+p)^3)). The sum over
C^# takes another O((b_max+p)^3) at each point.
Parallelization:
Further speed-up can be achieved by parallelization (see section <ref>): the data can be transformed into
the harmonic domain by parallel computation of the coefficients, and the computation of the absolute values can also be split into several
threads.
For C CPU cores with C≤ (b_max)^2 and C≤ m we obtain:
O(m( (b_max+p)^3 + (b_max+p)^3 log (b_max+p)^3)/ C) +O( m(b_max)^2 +(m log m)/ C)
§.§ Discussion
Auto-correlation can be a very effective feature to encode texture properties. The discriminative power of SH_autocorr can
be further increased by combining the correlations at several different radii into a single
correlation result C^#, as described in section <ref>.
§ SH_BISPECTRUM
SH-FeaturesSH_bispectrum
The final member of the class of features which are directly derived from the Spherical Harmonic representation is the
so-called SH_bispectrum-Feature. The approach to obtain invariant features via the computation of the bispectrum
of the frequency representation is well known (e.g. see <cit.>), hence, we review the basic concept in a simple 1D
setting before we move on to derive it in Spherical Harmonics.
Given a discrete complex 1D signal f:{0,1,…,n-1}→ℂ and its DFT f, the power
spectrum q(f,ω) of f at frequency ω
is:
q(f,ω) := \overline{f(ω)}·f(ω).
The power spectrum is translation invariant since a translation z of f only affects the phases of the Fourier
coefficients which are canceled
out by \overline{f(ω)}·f(ω):
\overline{e^-i2π zω/nf(ω)}· e^-i2π zω/nf(ω) =
e^i2π zω/n\overline{f(ω)}· e^-i2π zω/nf(ω)
= \overline{f(ω)}·f(ω).
We use the same principle to construct the SH_abs-Features
(see section <ref>). As mentioned in the context of SH_abs, neglecting the valuable phase information makes
the power spectrum not a very discriminative feature.
The basic idea of the bispectrum is to couple two frequencies ω_1, ω_2 in order to implicitly preserve the phase information:
q(f,ω_1,ω_2) := \overline{f(ω_1)}·\overline{f(ω_2)}·f(ω_1 + ω_2).
While the invariance property is the same as for the power spectrum:
e^i2π zω_1/n\overline{f(ω_1)}· e^i2π zω_2/n\overline{f(ω_2)}· e^-i2π z(ω_1+ω_2)/nf(ω_1 + ω_2) =
\overline{f(ω_1)}·\overline{f(ω_2)}·f(ω_1 + ω_2),
it has been shown <cit.> that the phases at the frequencies ω_i can be reconstructed from the bispectra. Hence, the bispectrum is a complete
feature if f is band limited and we extract the bispectrum at all frequencies.
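To make the 1D case concrete, the following is a minimal numpy sketch of the discrete bispectrum with the conjugation convention reconstructed above, together with a check of its translation invariance. The convention (which factors carry the conjugate) is an assumption consistent with the equations above.

import numpy as np

def bispectrum_1d(f):
    # q(f, w1, w2) = conj(F[w1]) * conj(F[w2]) * F[(w1 + w2) mod n]
    F = np.fft.fft(f)
    n = len(f)
    w = np.arange(n)
    return np.conj(F)[:, None] * np.conj(F)[None, :] * F[(w[:, None] + w[None, :]) % n]

# translation invariance: the bispectra of a signal and its cyclic shift coincide
f = np.random.rand(16)
assert np.allclose(bispectrum_1d(f), bispectrum_1d(np.roll(f, 5)))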
Due to the analogy of the Spherical Harmonic and the Fourier domain, it is intuitive that the concept of the bispectrum is portable to signals
in S^2. This step was derived by <cit.> who constructed a global invariant feature for 2D images by projecting the images on
the 2-sphere and then computing features in the harmonic domain. We adapt the methods from <cit.> to construct local rotation
invariant features for 3D volume data.
§.§ Feature Design
In our case, we are interested in the extraction of invariant features of the local neighborhood f at radius r around the point
x. Just as in the 1D example, we transform f into the frequency space - i.e. in the Spherical Harmonic domain:
f:= SH(X|_S[r]( x)).
Now, the individual frequencies ω correspond to the harmonic bands f^l, and <cit.> showed that the bispectrum
can be computed from the tensor product (f)^l_1⊗ (f)^l_2.
Further, we want to obtain invariance towards rotation instead of translation: given rotations R∈ SO(3), the tensor product
is affected by R in terms of:
R((f)^l_1⊗ (f)^l_2) = (D^l_1( R) ⊗ D^l_2( R))
((f)^l_1⊗ (f)^l_2),
where D^l is the Wigner-D matrix for the l-th band (see section <ref>).
Just like in the 1D case, <cit.> proved that the bispectrum (<ref>) will cancel out the impact of the rotation
R. So, for the l-th band of expansion we can compute the bispectrum of the l_1-th and l_2-th band with l_1,l_2≤ l by:
( SH_bispectrum)^l_1,l_2,l := ∑_m=-l^l∑_m_1=-l_1^l_1⟨ lm|l_1m_1,l_2m_2⟩f^l_1_m_1·f^l_2_(m-m_1)·\overline{f^l_m},
where the Clebsch-Gordan coefficients (see section <ref>) ⟨ lm|l_1m_1,l_2m_2 ⟩ determine the impact of the
frequency couplings in the tensor product computing the bispectrum. Refer to <cit.> for full proof.
§.§ Implementation
As before, we follow the implementation of the Spherical Harmonic transformation as described in chapter <ref> and
stop the expansion at an arbitrary band b_max (depending on the application) which has no effect on the rotation invariance.
The actual computation of the bispectrum from (<ref>) can be optimized by moving the \overline{f^l_m} term
to the outer iteration and limiting the inner iteration to values which form possible Clebsch-Gordan combinations:
( SH_bispectrum)^l_1,l_2,l= ∑_m=-l^l \overline{f^l_m}×∑_m_1=max(-l_1,m-l_2)^min(l_1,m+l_2)⟨ lm|l_1m_1,l_2m_2 ⟩f^l_1_m_1·f^l_2_(m-m_1).
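The following Python sketch evaluates a single (l_1,l_2,l) bispectrum entry according to the optimized formula above. The coefficient container coeffs[l] (a complex array of length 2l+1, indexed by m+l) is a hypothetical layout, the slow symbolic Clebsch-Gordan evaluation via sympy is used purely for illustration, and the conjugation of f^l_m follows the reconstruction above.

import numpy as np
from sympy.physics.quantum.cg import CG

def sh_bispectrum_entry(coeffs, l1, l2, l):
    # coeffs[l] is assumed (hypothetically) to hold f^l_m for m = -l..l at index m+l
    out = 0.0 + 0.0j
    for m in range(-l, l + 1):
        inner = 0.0 + 0.0j
        for m1 in range(max(-l1, m - l2), min(l1, m + l2) + 1):
            m2 = m - m1
            cg = float(CG(l1, m1, l2, m2, l, m).doit())  # <l m | l1 m1, l2 m2>
            inner += cg * coeffs[l1][m1 + l1] * coeffs[l2][m2 + l2]
        out += np.conj(coeffs[l][m + l]) * inner
    return out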
Multi-Channel Data: It is straightforward to combine the information from two different data channels into a single
SH_bispectrum-Feature: we can simply choose the coefficients f^l_1 and f^l_2 from two separate
expansions of the data from two different channels.
§.§.§ Complexity
The computational complexity of a single ( SH_bispectrum)^l_1,l_2,l( x) feature lies in O(l^3). To obtain
completeness we need all O(b_max^2) features at all m positions of X. The harmonic expansion to band b_max at each
point takes another O(m(b_max)^2 + (m log m)).
Parallelization
It is straightforward to get further speed-up by parallelization (see chapter <ref>). Since the computation of each
single feature
( SH_bispectrum)^l_1,l_2,l( x) is independent from all others, we can split the overall process in
parallel computations.
§.§ Discussion
The basic concept of the SH_bispectrum-Features is quite similar to what we did for the SH_phase-Features (see section
<ref>): we try to obtain a better discrimination performance than SH_abs-Features by implicit preservation
of the phase information. In case of the SH_phase-Features we do this by considering the relation of phases over different radii of the expansion,
here we relate different frequencies of the expansion. In theory, the completeness property makes the SH_bispectrum approach very
competitive, but this comes at high computational costs.
CHAPTER: SCALAR HAAR-FEATURES
Haar-FeatureScalar Haar-Feature
In this chapter we derive several features operating on scalar data which obtain invariance via Haar-Integration. As discussed in
section <ref>, one canonical approach to construct invariant features is to perform a Haar-Integration
over the transformation group.
Before we turn to the specific feature design, we first review the general framework of Haar-Integration in section
<ref> and discuss some aspects of the construction of suitable feature kernels in
section <ref>. Then we introduce 2p-Haar features <ref> and 3p-Haar features <ref>
which are based on the class of separable kernel functions, before we derive the generalized np-Haar features.
It should be noted that we also use Haar-Integration methods for the computation of the auto-correlation features SH_autocorr
(see section <ref>) and VH_autocorr (see section <ref>).
Related Work:
Based on the general group-integration framework (<ref>) which was introduced by <cit.>, <cit.> and
<cit.>, several invariant features were introduced for scalar data in 2D <cit.> and in 3D volumetric data
<cit.> <cit.> <cit.> <cit.>.
We will discuss these methods in the next section when we take a closer look at the class of sparse and separable
kernels <cit.> <cit.> <cit.> which form the basis of our features.
§.§ Invariance via Group-Integration
Haar-FeatureScalar Haar-FeatureGeneral Haar-Framework
Following the general objectives of feature extraction (see <ref>), we apply the Haar-Integration approach to obtain
invariant features. This method is generally bound to the sub-class of compact group transformations (see <ref>),
where for a given transformation group G, the individual transformations g ∈ G differ only by their associated set of
parameters λ⃗, which cover the degrees of freedom under G.
In this chapter we derive features from the canonical group integration approach (see section <ref>) which generates invariant features
via Haar-Integration over all degrees of freedom of the transformation group G:
T(X) = ∫_ G (g_λ⃗X)dg_λ⃗,
eliminating the influence of λ⃗. Equation (<ref>) is also referred to as the “group-average”.
For the sake of simplicity, we denote the individual transformations g_λ⃗ just by g.
It has to be noted that even though the Haar-Integration approach (<ref>) meets the necessary
condition of invariance (<ref>), the sufficient condition (<ref>) is anything but guaranteed.
In fact, a simple group-averaging itself produces incomplete features which often tend to have a weak separability performance.
This can be overcome by embedding non-linear kernel functions κ
into the integral <cit.>: it cannot be stressed enough that the use of such non-linear mappings is essential for any feature design
<cit.> <cit.> <cit.> <cit.> <cit.>,
and is the key element of the group-integration framework. The resulting general framework for invariant
feature generation via group-integration (<ref>) then embeds an arbitrary
non-linear kernel function κ.
T(X) := ∫_ Gκ(gX)dg
G transformation group
g one element of the transformation group
dg Haar measure
κ nonlinear kernel function
X n-dim, multi-channel data set
gX the transformed n-dim data set
Within this framework, features can be generated for data of arbitrary dimensionality and from multiple input channels.
This reduces the key design issue to the selection of appropriate kernel functions.
§.§ Local, Sparse and Separable Kernel Functions
Since we are interested in the construction of local features, we restrict the kernel functions κ in the general group-integration
approach (<ref>) to functions of local support.
Further, following the approach in <cit.> <cit.>, we restrict these local kernels to sparse functions which
only depend on a few discrete points of the local continuous data. Hence, κ(X) can be rewritten as
κ(
X( x_1), X( x_2), X( x_3),
…) <cit.>.
This way, we can reformulate (<ref>) and perform
the group transformation only on the local kernel support,
instead of the whole data set X (see Fig. <ref>). This local transformation
is denoted as s_g( x_i) such that
(gX)( x_i) =: X(s_g( x_i)) ∀ g,
x_i.
For these local kernels, (<ref>) can be rewritten as
T(X) := ∫_Gκ(X(s_g( x_1)), X(s_g( x_2)), …)dg.
Fig. (<ref>) shows how a sparse kernel with a local support of three discrete points can be applied
to “sense” the local continuous data.
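As an illustration of the group-average over the local kernel support, the following is a deliberately brute-force Python/scipy sketch: it samples Euler angles, rotates a sparse set of kernel points around the extraction point, interpolates the data there and accumulates the kernel values with the Haar measure. The 'zyz' Euler convention and the angular sampling density are assumptions of this sketch; it is meant to make the integral concrete, not to replace the fast algorithms derived in the following sections.

import numpy as np
from scipy.ndimage import map_coordinates
from scipy.spatial.transform import Rotation

def haar_feature(X, x, kernel_points, kappa, n=8):
    # Brute-force group-average for one voxel x of a 3D volume X.
    phis = np.linspace(0, 2 * np.pi, n, endpoint=False)
    thetas = np.linspace(0, np.pi, n)
    psis = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.atleast_2d(kernel_points)            # (k, 3) offsets in the local frame
    acc, norm = 0.0, 0.0
    for phi in phis:
        for theta in thetas:
            w = np.sin(theta)                     # Haar measure weight
            for psi in psis:
                R = Rotation.from_euler("zyz", [phi, theta, psi]).as_matrix()
                rotated = np.asarray(x) + pts @ R.T
                vals = map_coordinates(X, rotated.T, order=1)  # sense the local data
                acc += w * kappa(*vals)
                norm += w
    return acc / norm

For example, haar_feature(X, (16, 16, 16), [[0, 0, 5], [0, 5, 0]], lambda a, b: a * b) approximates a two-point kernel of radius 5 at the voxel (16, 16, 16).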
For kernels with a larger support it does not make much sense to combine single data points over a certain distance. Instead we are interested
in combining larger structures, i.e. in having a kernel over regions rather than over single points. One very simple way to achieve this was
suggested in <cit.>: by applying a Gaussian smoothing of the input data which directly depends on the selected size of the
local support, we can define a “multi-scale” kernel which has different sizes of local support in each point.
This class of local sparse kernel functions provides a more structured but still very flexible framework for the design of local
invariant features.
However, even with a support reduced to n discrete points, naive kernel computation is still very expensive since the local support has to be
integrated over the entire transformation group.
<cit.><cit.><cit.> suggested to overcome this problem by the use of
Monte Carlo methods, but this approach is only effective
when features are computed via integration over the entire dataset (i.e. integration over the translation group). For the
computation of local features, i.e. integration over the rotation group, a Monte Carlo approach is not suitable.
To make group-integral features applicable to large data sets, <cit.> introduced
a sub-class of sparse kernel-functions. For these so called separable kernels, the kernel can be split into a
linear combination of non-linear sub-kernels such that:
κ(X(s_g( x_1)), X(s_g( x_2)), …) = κ_1(X(s_g( x_1)))·κ_2(X(s_g( x_2))) ·… .
This separability constraint is not a strong limitation of the original class of sparse kernel-functions since in many cases it is possible to find
approximative decompositions of non-separable kernels via Taylor series expansion.
Besides the non-linearity, the choice of the sub-kernels κ_i is theoretically not further constrained, but in most cases very simple
non-linear mappings such as κ(x) = x^2, κ(x) = x^3,… or κ(x) = √(x) are powerful enough (see experiments in part
III).
Based on these separable kernels, <cit.> derived a fast convolution method for the evaluation of kernels with a support of
only two sparse points on continuous 2D images - so called “2-point” kernels (see section <ref>).
§ 2-POINT HAAR-FEATURES (2P)
Haar-FeatureScalar Haar-Feature2p-Feature
Our first feature which makes use of the general group integration framework (<ref>) is the so-called
2-Point or 2p-Haar feature. It was first introduced as a global feature for 2D images in <cit.>. We later extended this
approach
to local features on scalar 3D volume data in <cit.> and <cit.> with an application to biomedical 3D image
analysis in <cit.> (see part III).
2p-Features use a sub-class of the previously introduced separable kernel functions (<ref>). The name 2-Point
derives from the constraint that we restrict kernels to have just two separable kernel points x_1, x_2.
This restriction allows a reformulation of the initial parameterization λ of the rotation group SO(3),
which drastically
reduces the computational complexity necessary to obtain rotation invariance. However, this comes at the price of reduced
discrimination power as we discuss at the end of this section.
§.§ Feature Design
The selection of the kernel points x_1 and x_2 is bound by the following design principle for the 2-Point features:
For the extraction of a local 2p-Feature at a given point x in X of the scalar (or possibly multi-channel) 3D input volume X,
x_1 is fixed at the center of the neighborhood, i.e. x_1 := X( x). The second kernel point is chosen from the local
neighborhood: x_2 ∈ S[r]( x) (see <ref> for the neighborhood definition).
Since x_1 is fixed, we only have to choose the parameters for x_2: the local neighborhood r and the spherical coordinates
Φ, Θ which can be neglected later on.
Using the scheme for separable kernels (<ref>), we can write the 2p-Kernels as:
κ(X( x_1), X( x_2)) =
κ_1(X( x_1))·κ_2(X( x_2)).
Figure <ref> shows a schematic example of a local 3D 2p kernel on volume data.
§.§.§ Rotation Invariance
As for all other local features, we want to obtain rotation invariance. If we plug the 2p kernel (<ref>) into
the general Haar framework (<ref>), we can achieve invariance regarding rotations
R(ϕ,θ, ψ)∈ SO(3) parameterized in Euler angles (see section <ref>) with local transformations
(<ref>) s_ R(ϕ,θ, ψ) ∈ SO(3). Since x_1 is by definition always in
the rotation center, it is not affected by any rotation. Hence we can simplify the Haar-Integration for the multiplicative and separable
2p-Kernel functions:
T[r, x_2]( x) := κ_1(X( x))·∫_ SO(3)κ_2(X( s_ R(ϕ,θ,ψ)( x_2))) sinθdϕ dθ dψ.
§.§.§ Fast Computation
In order to compute (<ref>) we have to evaluate the integral over all possible rotations at each point X( x), which
turns out to be quite expensive in terms of computational complexity. At this point, the restriction of
(<ref>) to two points provides us with a fast solution: due to the fact that we have to integrate only over
the position of a single point x_2 ∈ S[r](X( x)), the integral over ψ becomes a constant factor and we can
rewrite (<ref>) as:
T[r, x_2]( x) = κ_1(X( x))·∫_ϕ,θκ_2(X(s_ R(ϕ,θ,ψ)( x_2))) sinθdϕ dθ.
Since x_2 ∈ S[r]( x) is also parameterized in ϕ,θ, we can further reformulate the integral
and simply solve:
T[r, x_2]( x) = κ_1(X( x))·∫_ x_i∈ S[r]( x)κ_2(X|_ S[r]( x) ( x_i)).
Finally, the integration over a spherical neighborhood S[r]( x) can easily be formulated as convolution of
X|_ S[r]( x) with a
spherical template S_t[r] with S_t[r](Φ,Θ) = 1, ∀Φ∈ [0,…, 2π], Θ∈ [0,…, π]:
T[r]( x) = κ_1(X( x))·(κ_2(X|_ S[r]( x)) * S_t[r]( x)).
In the same way, we
can evaluate the 2p-Feature at all positions in X at once, using fast convolution in the Fourier domain:
T[r](X) = κ_1(X)·(κ_2(X) * S_t[r]).
§.§ Implementation
The implementation is straightforward: given discrete input data, we apply the convolution theorem to compute the convolution via
FFT:
T[r](X) = κ_1(X)· FFT^-1( FFT(κ_2(X)) · FFT(S_t[r])).
The only thing we have to handle with some care is the implementation of the spherical template S_t[r]. To avoid sampling issues, we apply
the same implementation strategies as in the case of the Spherical Harmonic base functions (see section <ref>
for details).
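A minimal numpy sketch of this FFT-based computation is given below. The naive binary rasterization of the shell template replaces the careful sampling strategy referenced above, and the default kernel mappings are just examples of the simple non-linearities discussed earlier; both are assumptions of this sketch.

import numpy as np

def spherical_shell_template(shape, r, thickness=1.0):
    # naive binary template S_t[r]: ones on a shell of radius r around the volume centre
    grid = np.indices(shape).astype(float)
    center = np.array([s // 2 for s in shape], dtype=float)
    dist = np.sqrt(((grid - center[:, None, None, None]) ** 2).sum(axis=0))
    return (np.abs(dist - r) <= thickness / 2.0).astype(float)

def feature_2p(X, r, kappa1=np.square, kappa2=np.abs):
    # T[r](X) = kappa1(X) * (kappa2(X) convolved with S_t[r]), via the convolution theorem
    S = spherical_shell_template(X.shape, r)
    conv = np.fft.ifftn(np.fft.fftn(kappa2(X)) * np.fft.fftn(np.fft.ifftshift(S))).real
    return kappa1(X) * conv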
Multi-Channel Data: Naturally, the application of 2p-Features to multi-channel data is limited to two channels per feature,
but this is straightforward: we can simply set the kernel points to be on different data channels c_i:
T[r](X) = κ_1(X[c_1])·(κ_2(X[c_2]|_ S[r]( x)) * S_t[r]).
Complexity: By reducing the feature computation to a fast convolution, we end up with a complexity of O( m log m) for
an input volume with m voxels.
Parallelization: Since there is no easy way to parallelize the Fourier Transformation, we do not further parallelize the computation
of 2p-Features. However, because 2p-Features can be computed so fast anyway, this is not a real drawback.
§.§ Discussion
The best property of 2p-Features is their computational speed: no other spherical local 3D feature, neither in the context of this work
nor in
the literature can be computed this fast. However, the speed comes at the price of a rather low discrimination power and the lack of
gray-scale robustness. While one might try to compensate for the missing gray-scale robustness by pre-normalization of the input data, the
discrimination power can hardly be improved.
The problem is caused by the fact that 2p-Features are not only invariant under rotations, but also under arbitrary permutations of
signals on the sphere.
This causes problematic ambiguities, as illustrated in figure <ref>.
§ 3-POINT HAAR-FEATURES (3P)
Haar-FeatureScalar Haar-Feature3p-Feature
3-Point Haar-Features (or 3p-Features) are a direct extension of separable kernels (<ref>)
from two (<ref>) to three kernel points. The main motivation for this extension derives from
the discussion of the 2p-Features (see section <ref>), where we pointed out that even though 2-point kernels
(<ref>)
provide computationally very efficient features, the resulting discrimination power is flawed by the fact that these kernels are also
invariant to arbitrary permutations.
To overcome this major drawback, we introduced the 3p-Features in <cit.> and <cit.>. The basic idea is to add a third kernel
point x_3 to the separable kernel function κ (<ref>) (see figure
<ref>), which cancels out the permutation ambiguities:
κ(X( x_1), X( x_2), X( x_3)) =
κ_1(X( x_1))·κ_2(X( x_2))·κ_3(X( x_3)).
§.§ Feature Design
As in the 2p case, we fix the first kernel point x_1 := X( x) at the point of the local feature extraction,
while the other two points are placed at the concentric spherical neighborhoods surrounding the first point:
x_2 ∈ S[r_2]( x), x_3 ∈ S[r_3]( x).
Of course, both kernel points x_2, x_3 can be on the same sphere, resulting in r_2 = r_3, and are parameterized in spherical
coordinates Φ_2,Φ_3 and Θ_2, Θ_3. Figure <ref> shows examples of such 3p-Kernels.
§.§.§ Rotation Invariance
If we plug the 3p kernel (<ref>) into
the general Haar framework (<ref>), we can achieve invariance regarding rotations
R(ϕ,θ, ψ)∈ SO(3) parameterized in Euler angles (see section <ref>) with local transformations
(<ref>) s_ R(ϕ,θ, ψ) ∈ SO(3). As in the 2p case, x_1 is by definition
always in the rotation center, hence it is not affected by any rotation. This way, we end up with the Haar-Integration approach for the
separable 3p-kernel functions:
T[r_1,r_2, x_2, x_3]( x) := κ_1(X( x))·
∫_ SO(3)κ_2(X(s_ R(ϕ,θ,ψ)( x_2))) ·κ_3(X(s_ R(ϕ,θ,ψ)( x_3)))
sinθdϕ dθ dψ.
We can further simplify this integral by the same considerations we made in (<ref>): since the kernel points
x_2, x_3 are not rotated independently, we express (without loss of generality) x_3 in dependency of x_2
(see Figure <ref>).
The integral over ψ is a constant factor in x_2 (as shown in (<ref>)), but for each position of x_2
the dependency of x_3 is expressed in terms of the angle ψ. Hence we have to integrate over all ψ in x_3:
T[r_1,r_2, x_2, x_3]( x) := κ_1(X( x))·
∫_ϕ,θκ_2(X(s_ R(ϕ,θ)( x_2))) ∫_ψκ_3(X(s_ R(ϕ,θ,ψ)( x_3)))
sinθdϕ dθ dψ.
§.§.§ Fast Computation
It is obvious that the introduction of the 3rd kernel point makes it impossible to solve (<ref>) by the same convolution
approach as in (<ref>). But the formulation of (<ref>) leads us to an intuitive
re-parameterization of the original problem. Without loss of generality, we consider the case where both kernel points x_2, x_3
are located on the same sphere, i.e. r_2 = r_3. Further we can fix x_2 at the “north pole” x_N and reduce its parameterization
to the radius r_2.
Since x_3 is bound to x_2 by the angle ψ, we can express the possible positions of x_3 in terms of the points
on the circle which lies on the same sphere as x_2 and is centered in x_2. As figure <ref> shows, this way we can
reduce the parameterization of x_3 to the radius r_c of this circle (Note: if we assume r_2 ≠ r_3, the circle simply lies on a
sphere with radius r_3).
Given this re-parameterization, we can give a fast algorithm for the evaluation of (<ref>): the integral over ψ
can be expressed as a convolution of a circular template on a sphere (analogous to (<ref>)) in spherical coordinates
(we denote this operation by *):
T[r,r_c]( x) = κ_1(X( x))·∫_S^2κ_2(X(s_ R(ϕ,θ)( x_2))) ·(κ_3(X|_S[r]( x)) * C_t[r_c]) sinθ dϕ dθ.
The key step towards a fast algorithm is to transfer the evaluation of (<ref>) to the Spherical Harmonic domain:
we expand the kernelized spherical neighborhoods
x_2 := SH[r](κ_2(X|_S[r]( x))), x_3:= SH[r](κ_3(X|_S[r]( x)))
and the circle template
C_t:= SH[r](C_t[r_c]) into the harmonic domain. Hence, we can apply the methods for fast convolution
(see section <ref>), or “left-convolution” (see section <ref>)
in case of the convolution with the circle template, in order to evaluate (<ref>).
Using these techniques and exploiting the orthonormality of the harmonic base functions, we can directly derive a fast algorithm
for the computation of the 3p integral <cit.>:
T[r,r_c]( x) = κ_1(X( x))·∑_l=0^∞∑_m=-l^l
( x_2)^l_m ·( x_3 * C_t)^l_m.
§.§ Implementation
The transformation into the harmonic domain is implemented as described in section <ref>. Hence,
we can also obtain the expansions at all points in X at once using the convolution approach (<ref>).
The implementation of the circular template C_t[r_c] has to be handled with some care: to avoid sampling issues, we apply
the same implementation strategies as in the case of the Spherical Harmonic base functions (see section <ref>
for details).
Finally, we can even further simplify the computation of the “left convolution” (<ref>),
( x_3 * C_t)^l_m = 2π√(4π/2l+1)( x_3)^l_m C_t^l_0.
Since the 0th order of the harmonic base functions Y^l_0 always has constant values for a fixed
latitude Θ (<ref>), given by the Legendre Polynomials P^l_0(sinΘ) (<ref>),
and C_t only holds ones on a fixed latitude, we can compute (<ref>) by a simple multiplication with a scalar value.
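To illustrate the final combination in the harmonic domain, the following Python sketch evaluates the 3p-Feature for one voxel from given coefficient containers. The container names (x2_coeffs[l], x3_coeffs[l] indexed by m+l, and Ct_l0[l] holding (C_t)^l_0) are hypothetical; the band-wise product follows the formula above, without any additional conjugation.

import numpy as np

def feature_3p(kappa1_val, x2_coeffs, x3_coeffs, Ct_l0, b_max):
    # combines the left-convolution shortcut with the band-wise dot-product
    total = 0.0 + 0.0j
    for l in range(b_max + 1):
        scale = 2.0 * np.pi * np.sqrt(4.0 * np.pi / (2 * l + 1)) * Ct_l0[l]
        for m in range(-l, l + 1):
            total += x2_coeffs[l][m + l] * (scale * x3_coeffs[l][m + l])
    return kappa1_val * total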
Figure <ref> gives a schematic overview of the implementation of 3p-Features:
Multi-Channel Data:
Naturally, the application of 3p-Features to multi-channel data is limited to three channels per feature
but straightforward: we can simply set the kernel points to be on different data channels as shown in the 2p case.
Complexity:
Given input data X with m voxels, we need to compute the Spherical Harmonic transformation three times, to obtain x_2
,x_3 and C_t. Depending on the maximum expansion band b_max, this lies in O(b_max· m log m)
(see section <ref>). The convolution with the circular template and the dot-product take another O(m· b_max^2),
followed by the voxel-wise multiplication with κ_1(X) in O(m).
Parallelization:
As stated in section <ref>, we can gain linear speed-up in the number of cores for the parallelization of the
harmonic transformation. Further, we could also split the computation of the convolution and the dot-product into several threads, but
in practice this speed-up hardly falls into account.
§.§ Discussion
The 3-Point Haar-Features solve the permutation invariance problem of the 2-Point Features. However, this comes at the price of increased
computational complexity, where the transformation to the harmonic domain makes up most of the additional cost.
Another issue is the growing parameter set: for 3p kernels we have to set κ_1, κ_2, κ_3, r and r_c, which raises the
question of an appropriate feature selection.
§ N-POINT HAAR-FEATURES (NP)
Haar-FeatureScalar Haar-Featurenp-Feature
In this section, we introduce a generic algorithm for the implementation of the general scheme for separable kernels
(<ref>) which can handle an arbitrary number of kernel points x_1,…, x_n.
Just as we obtain an increase in discrimination power by going from two to three kernel points (see section <ref>),
we motivate the strategy to add further points to the kernel by the goal of deriving even more selective features.
The actual number of needed kernel points depends on the application: e.g. for a single feature, the use of four points might deliver more
discriminative texture
features than 3p kernels, while one might use kernels with eight or more points to locate very specific structures in an object detection task
(see part III).
As in (<ref>) and (<ref>), we formalize the n-Point kernels as given by (<ref>):
κ :=
κ_1(X(s_g( x_1))) ·κ_2(X(s_g( x_2))) ·…·κ_n(X(s_g( x_n))).
§.§ Feature Design
As in the case of local 2- and 3-Point features, the primary goal is to achieve rotation invariance.
Hence, the transformation group G is
given by the group of 3D rotations SO(3). If we parameterize these global rotations R∈ SO(3) as local rotations
of the kernel points in Euler angles s_g(ϕ,θ,ψ) (see Fig. <ref>), we can rewrite
(<ref>) as:
T[Λ](X) := ∫_ SO(3)κ_1( s_g_(ϕ,θ,ψ)X( x_1)) ·κ_2( s_g_(ϕ,θ,ψ)X( x_2)) ·…
·κ_n( s_g_(ϕ,θ,ψ)X( x_n))
sinθdϕ dθ dψ.
where Λ is the set of parameters, i.e. including κ_1,…,κ_n - we define Λ in detail when we present
the parameterization of the kernel in the next section (<ref>).
It is obvious that a direct and naive computation of these n-Point features is hardly tractable in terms of computational costs. For the
computation of every single (voxel-wise) feature, we would have to evaluate the kernel at all possible combinations of ϕ,θ,ψ
while transforming the n kernel points respectively.
To cope with this massive computational complexity, we generalize the methods for the fast computation of 3D 2- and 3-Point features
<cit.> via fast convolution in the harmonic domain. The main challenge for this generalization is that we need to be able to couple
the n sparse kernel points during the rotation in order to meet the separability criteria (<ref>)
in (<ref>).
Previously, we were able to avoid the coupling problem:
in the case of “2-point” kernels no coupling is needed, and the 3-Point kernels take advantage of
the exception that the third point always lies on a circle centered in the second point (see section <ref>).
For the general np case, we need to derive a new approach which actually solves the coupling problem.
As in the previous sections,
we will first derive the theory in a continuous setting before we deal with the implementation issues for actual applications in a discrete
world (see section <ref>).
§.§.§ Parameterization
As in the 2p and 3p case, we fix the first kernel point x_1 := X( x) at the point of the local feature extraction,
while the other points x_i, i∈{2,…,n} are placed at concentric spherical neighborhoods with radii r_i:
x_i ∈ S[r_i]( x). Hence, each x_i is parameterized by the spherical angles
Φ_i ∈ [0,…, 2π], Θ_i ∈ [0,…, π], the input data channel c_i and the radius r_i ∈ℝ
(also see figure <ref>).
Overall, we end up with set of kernel parameters:
Λ := {κ_1, {κ_2,r_2,c_2,Φ_2,Θ_2},…,
{κ_n,r_n,c_n,Φ_n,Θ_n}}.
Given this spherical parameterization, we first treat each x_i independently and perform the angular coupling of all points later on.
We represent the x_i by a spherical delta-function T_i[r_i] ∈ S^2 with radius r_i:
T_i[r_i](Φ,Θ) := δ(Φ-Φ_i)δ(Θ-Θ_i).
In its harmonic representation, T_i[r_i] is given by the according Spherical Harmonic base functions:
T_i[r_i](Φ,Θ) = ∑_l=0^∞∑_m=-l^l Y_m^l(Φ,Θ)Y_m^l(Φ_i,Θ_i).
Hence, we can obtain the Spherical Harmonic transformation of T_i[r_i] directly from the harmonic base functions:
( T_i[r_i, Φ_i,Θ_i])_m^l = Y_m^l(Φ_i,Θ_i).
In the next step, we evaluate the contribution of the kernel points at the constellation of the T_i[r_i]
given the local support of each feature extraction point x.
Due to the separability of our kernels (<ref>), each kernel point is associated with a potentially different
non-linear sub-kernel κ_i and might operate on a different data channel c_i. For each feature evaluation,
we perform Spherical Harmonic expansions around the center voxel at the radii r_i (associated with the respective kernel points)
of the non-linearly transformed input data κ_i(X[c_i]):
X[r_i,c_i]( x) = SH[r_i](κ_i(X[c_i]|_ S[r_i]( x))).
With the data and the kernel points represented in the harmonic domain, we can now apply a fast correlation to evaluate the contribution
of each kernel point on the local data and perform this evaluation over all rotations.
Given a point at position x, we compute the result C^#_i of this fast correlation over all spherical angles for the i-th kernel
point as shown in (<ref>):
C^#_i = X[r_i,c_i]( x) # T_i.
§.§.§ Rotation Invariance
The key issue regarding the construction of “n-Point” kernels is that we need to couple the contributions of the individual kernel
points in such
a way that only the chosen kernel constellation (given by the Φ_i,Θ_i, r_i) has a contribution when we rotate over all possible
angles, i.e. the kernel points must not rotate independently.
Since the correlation matrices C^#_i hold the contribution at each possible angle in a 3D Euclidean space with a (ϕ,θ,ψ)
coordinate-system
(see section <ref>), we can perform the multiplicative
coupling of the separate sub-kernels (<ref>) by an angle-wise multiplication of the point-wise
correlation results: ∏_i=2^n C^#_i.
Finally, by integrating over the resulting Euclidean space of this coupling, we easily obtain rotation invariance as in (<ref>):
∫_ SO(3)( ∏_i=2^n C^#_i ) sinθdϕ dθ dψ.
With the additional coupling of x_1, we are now able to compute the n-Point Haar-Feature as shown in figure (<ref>):
T[Λ]( x) := κ_1(X( x)) ·∫_ SO(3)( ∏_i=2^n C^#_i ) sinθ dϕ dθ dψ.
§.§.§ Gray-Scale Invariance
A nice side effect of the kernel point coupling via fast correlation (<ref>) is the fact that we can obtain
real invariance towards additive and multiplicative gray-value changes: we simply use the normalized cross-correlation (<ref>) to compute the
C^#_i = X[r_i,c_i]( x) # T_i
where the individually normalized correlations are independent of gray-scale changes.
§.§ Implementation
The transformation into the harmonic domain is implemented as described in section <ref>. Hence,
we can also obtain the expansions at all points in X at once using the convolution approach (<ref>).
The implementation of the template T_t has to be handled with some care: to avoid sampling issues, we apply
the same implementation strategies as in the case of the Spherical Harmonic base functions (see section <ref>
for details).
The computation of the correlation matrices C^# follows the algorithm given in section <ref>. The
size of the padding p we need to apply strongly depends on the angular resolution necessary to resolve the given configuration of the kernel
points.
Finally, the evaluation of the Haar-Integration over all possible rotations is approximated by the sum over the combined
(ϕ,θ,ψ)-space:
T[Λ]( x) ≈ κ_1(X( x)) ·∑( ∏_i=2^n C^#_i ).
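The following Python sketch assembles these pieces for one voxel. It assumes that the SH expansions of the kernelized data channels and a routine sh_correlate(f, t) returning the correlation grid C^#_i (the fast correlation of the earlier chapter; the name is hypothetical) are available; only the template construction and the coupling are spelled out. Note scipy's argument order sph_harm(m, l, azimuth, polar).

import numpy as np
from scipy.special import sph_harm

def kernel_point_template(phi_i, theta_i, b_max):
    # harmonic representation of one kernel point, (T_i)^l_m = Y^l_m(Phi_i, Theta_i)
    return [np.array([sph_harm(m, l, phi_i, theta_i) for m in range(-l, l + 1)])
            for l in range(b_max + 1)]

def np_feature(center_val, data_coeffs, templates, kappa1, sh_correlate):
    # data_coeffs[i]: SH expansion of kappa_i(X[c_i]) on the sphere of radius r_i (assumed given)
    C = [sh_correlate(f, t) for f, t in zip(data_coeffs, templates)]
    coupled = np.prod(np.stack(C), axis=0)        # angle-wise coupling of the kernel points
    return kappa1(center_val) * coupled.sum()     # discrete integration over all rotations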
Multi-Channel Data:
As in the other cases of scalar Haar-Features, the application of np-Features to multi-channel data is limited to n channels per feature,
but straightforward: we can simply set the kernel points to be on different data channels as shown in the 2p case.
Complexity
The computational complexity of the np-Feature is dominated by the n Spherical Harmonic expansions needed to transform the kernelized
input data into the harmonic domain which takes O(n · b_max· m log m) for input data of size m. The costs for the
correlation and multiplication of the correlation matrices are negligible.
Parallelization
As stated in section <ref>, we can gain linear speed-up in the number of cores for the parallelization of the
harmonic transformation. Further, we could also split the computation of the correlation matrices into several threads, but
as mentioned before, this speed-up hardly falls into account.
§.§ Further Speed-up
Concerning computational complexity, the main bottleneck of the np-Feature is actually the transformation to the Spherical Harmonic domain.
Due to the non-linear mappings κ_i of the separable kernel, we have to compute the harmonic expansion at all points x in X for
each kernel point independently (<ref>). Without the κ_i, we would only need a single transformation
for all kernel points which lie on the same radius and the same data channel (a setting which is very common in practice). However,
we cannot simply neglect the non-linear kernel mappings.
On the other hand, we are not bound to the class of separable kernels, which were only introduced to support the development of fast
algorithms. Hence, we construct a new kernel which separates the kernel point x_1 = X( x) in the center from the points
x_i ∈ S[r_i]( x) in the local spherical neighborhood of x:
κ :=
κ_1(X(s_g( x_1))) ·κ_s(X(s_g( x_2)),
…,
X(s_g( x_n))),
where κ_s is some non-linear mapping of (n-1) arguments (just like in (<ref>)).
Instead of a non-linear weighting of the underlying data sensed by the kernel points (as before), we choose κ_s to provide a
non-linear weighting of the combination of the kernel points. Technically this is only a small change, but it enables us to
move the κ_i into the harmonic domain, weighting the contribution of the kernel points to the integral:
T[Λ]( x) := κ_1(X( x)) ·∫_ SO(3)( ∏_i=2^n κ_i(
C^#_i) ) sinθ dϕ dθ dψ.
Figure <ref> shows the changes in the overall computation scheme. It should be noted that this optimized
approach is similar but not equivalent to the original np formulation.
§.§ Discussion
The np-Features provide a powerful framework for the implementation of local features which are able to obtain invariance towards
rotations and multiplicative gray-scale changes via Haar-Integration.
In practice, np-Features are especially suitable for the design of highly specific features with a strong discriminative power used in
challenging image analysis tasks justifying the higher computational costs. For less complex problems, we are better off using some
of the less complex feature methods.
A major problem concerning the application of np-Features is the huge set of kernel parameters Λ (<ref>)
we have to choose. In practice, it is infeasible to try all possible parameter combinations in a feature selection process, as we suggest
for other features. Nor is it practically possible to select the best parameter settings by hand.
CHAPTER: VH-FEATURES
VH-Features
In this chapter, we derive a set of local, rotation invariant features which are directly motivated by the
mathematical formulation of the Vectorial Harmonics (see section <ref>). Analogous to the SH-Features, we take
advantage of the nice properties of the harmonic representation which allow us to perform fast feature computations in the
frequency domain.
Given 3D vector fields X, the transformation VH[r]( x)
(<ref>) of local vectors on a sphere with radius r around the
center x in X in Vectorial Harmonics is nothing more than a change of the base-functions representing the initial data.
So the new base might provide us with a nice framework to operate on spheres, but we still have to perform the actual feature construction.
Primarily, we want to obtain rotation invariance.
First we introduce a method to obtain rotational invariance which is the simple extension of SH_abs-Features
(see section <ref>) to vector fields: In section <ref> we introduce VH_abs-Features,
which use the fact that the band-wise energies of a VH representation do not change under rotation.
The second member of the VH-Feature class is also derived from its SH counter part: the fast and also rotation invariant
auto-correlation feature VH_autocorr (section <ref>)
is based on the fast correlation in Vectorial Harmonics introduced in section <ref>.
Finally, since we transfer all VH-Features directly from the class of SH-Features, one might ask if the other two
SH-Features, SH_phase and SH_bispectrum could also be extended to the VH domain. And in fact,
theoretically both extensions could be done without much effort, but practically, neither of them makes much sense: the bispectrum features
(see section <ref>) simply become exceptionally expensive when we have to add additional couplings over the
sub-bands k. For the vectorial phase, we could certainly define some notion of a phase in VH; however, it is not evident how such a
phase should be chosen and what it actually represents with respect to the mapping of the input data.
§ VH_ABS
VH-FeaturesVH_abs
VH_abs-Features are the direct extension of SH_abs-Features (see section <ref>) to vector fields.
Again, we use the fact that the band-wise energies of a VH representation do not change under rotation.
§.§ Feature Design
Rotations R(ϕ,θ, ψ)∈ SO(3) on 3D vector fields ℝ^3→ℝ^3 (see section <ref>)
are represented in the Vectorial Harmonic
domain in terms of band-wise multiplications of the expansions f^l with Wigner D-Matrices D^l (<ref>).
Hence, we can directly follow the very same power spectrum approach as for the SH_abs-Features.
This way we easily obtain a rotation invariant scalar entry for the l-th frequency in the power spectrum:
( VH_abs[r]( x))^l := √(∑_k=-1^1∑_m=-(l+k)^(l+k)(( VH[r]( x))^l_k,m)^2).
Since the rotation invariance is achieved band wise, the approximation of the original data via harmonic expansion can be cut off at an
arbitrary band, encoding just the level of detail needed for the application.
§.§ Implementation
The implementation of the VH_abs is straightforward. We follow the implementation of the Vectorial Harmonic transformation as
described in section <ref>.
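A small Python sketch of the band-wise energy computation is given below. The coefficient container (coeffs[l][k+1] holding the complex sub-band coefficients (VH)^l_{k,m}) is a hypothetical layout, and |·|^2 is used as the squared magnitude of the generally complex entries, which the formula above writes as (·)^2.

import numpy as np

def vh_abs(coeffs, b_max):
    feat = np.zeros(b_max + 1)
    for l in range(b_max + 1):
        energy = 0.0
        for k in (-1, 0, 1):
            if l + k < 0:
                continue  # the sub-band k = -1 does not exist for l = 0
            energy += float(np.sum(np.abs(coeffs[l][k + 1]) ** 2))
        feat[l] = np.sqrt(energy)
    return feat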
Multi-Channel Data: VH_abs-Features cannot directly combine data from several channels into a single feature. In case of
multi-channel data, we would have to compute features for each channel separately.
§.§.§ Complexity
Following the implementation given in section <ref>, we obtain the harmonic expansion to band b_max at each
point of a volume with m voxels in O(m(b_max)^2 + (m log m)). The computation of the absolute values takes another O((b_max)^3).
The additional loop over k does not affect the O-complexity, but in practice, VH_abs takes about a factor of three longer
to compute than SH_abs.
Parallelization
Further speed-up can be achieved by parallelization (see section <ref>): the data can be transformed into
the harmonic domain by parallel computation of the coefficients and the computation of the absolute values can also be split into several
threads.
For C CPU cores with C≤ (b_max)^2 and C≤ m we obtain:
O(m(b_max)^3/ C) +O( m(b_max)^2 +(m log m)/ C).
§.§ Discussion
The VH_abs-Features are a simple and straightforward extension of SH_abs to 3D vector fields. They are computationally
efficient and easy to implement. However, the discriminative properties are even more limited than those of the SH_abs-Features.
The band-wise absolute values capture only
the energy of the respective frequencies in the overall spectrum. Hence, we lose all the phase information, which leads to strong ambiguities
within the feature mappings. The additional sub-bands k further increase this problem compared to SH_abs.
In many applications it is possible to reduce these ambiguities by combining VH-Features
extracted at different radii.
§ VH_AUTOCORR
VH-FeaturesVH_autocorr
The second member of the VH-Feature class is also derived from its SH counter part: based
on the auto-correlation feature SH_autocorr (section <ref>)
we compute invariant features directly from the Vectorial Harmonic representation. Again, this is motivated by the introduction of the
fast normalized cross-correlation in the Vectorial Harmonic domain (see introduction of chapter <ref>).
The cross-correlation VH_corr(f,g)
of two vectorial signals f,g∈ S^2 is a binary operation VH_corr: S^2× S^2
→ℝ. Hence, it cannot be used
directly as a feature, where we require a mapping of individual local signals f∈ S^2 → H into some feature space
H⊆ℝ^n.
A general and widely known method for obtaining features from correlations is to compute the auto-correlation, e.g. <cit.>. In our case, we
propose the local VH_autocorr-Feature, which performs a fast auto-correlation of f∈ (S^2×ℝ^3) with itself.
We use local dot-products of vectors to define the auto-correlation under a given rotation R in Euler angles ϕ, θ, ψ as:
( f# f)( R) := ∫_Φ,Θ⟨ f(Φ,Θ), R f(Φ,Θ)⟩ sinΘdΦ dΘ.
§.§ Feature Design
We first expand the local neighborhood f at radius r around the point x∈ X in
Vectorial Harmonics, f:= VH[r]( X( x)).
Then we follow the fast correlation method which we introduced in section <ref> to obtain the full correlation
C^# from equation (<ref>).
Invariance:
In order to obtain rotation invariant features, we follow the Haar-Integration approach (see section
<ref>) and integrate
over the auto-correlations at all possible rotations R. C^# holds the necessary auto-correlation results in a 3D
(ϕ,θ,ψ)-space (<ref>), hence we simply integrate over C^#,
VH_autocorr := ∫_ϕ,θ,ψκ(C^#(ϕ,θ,ψ)) sinθdϕ dθ dψ
and obtain a scalar feature. Additionally, we insert a non-linear kernel function κ to increase the separability. Usually, very
simple non-linear functions, such as κ(x):=x^2, κ(x):=x^3 or κ(x):= √(x), are sufficient.
§.§ Implementation
We follow the implementation of the Vectorial Harmonic transformation as described in section <ref> and the
implementation of the fast correlation from (<ref>).
In practice, where the harmonic expansion is bound by a maximal expansion band b_max, the integral (<ref>)
is reduced to the sum over the then discrete angular space C^#:
VH_autocorr = ∑_ϕ,θ,ψκ(C^#(ϕ,θ,ψ)).
Multi-Channel Data: VH_autocorr cannot directly combine data from several channels into a single feature. In case of
multi-channel data, we would have to compute features for each channel separately.
§.§.§ Complexity
Following the implementation given in section <ref>, we obtain the harmonic expansion to band b_max at each
point of a volume with m voxels in O(m(b_max)^2 + (m log m)). The complexity of the auto-correlation depends on b_max and
the padding parameter p (<ref>) and can be computed in O(m (b_max+p)^3 log((b_max+p)^3)). The sum over
C^# takes another O((b_max+p)^3) at each point.
Parallelization:
Further speed-up can be achieved by parallelization (see section <ref>): the data can be transformed into
the harmonic domain by parallel computation of the coefficients and the computation of the absolute values can also be split into several
threads.
For C CPU cores with C≤ (b_max)^2 and C≤ m we obtain:
O(m( (b_max+p)^3 + (b_max+p)^3 log (b_max+p)^3)/ C) +O( m(b_max)^2 +(m log m)/ C)
§.§ Discussion
Auto-correlation can be a very effective feature to encode texture properties. The discriminative power of VH_autocorr can
be further increased by combining the correlation at several different radii into a correlation result C^# as described in section <ref>.
CHAPTER: VECTORIAL HAAR-FEATURES
Haar-FeatureVectorial Haar-Feature
In this chapter we derive several features operating on vectorial data which obtain invariance via Haar-Integration. All of the methods
are strongly related to the features presented in the chapter <ref> and are based on the Haar-Integration framework
<ref>.
In the case of vectorial data, we take advantage of the Vectorial Harmonic (see section <ref>) representation of local
spherical neighborhoods S[r]( x) of radii r at position x∈ℝ^3 of the 3D vector fields X:
ℝ^3→ℝ^3 with vectorial elements X( x) ∈ℝ^3.
Please refer to the sections <ref> and <ref> for an in-depth introduction
of the Haar approach. It also might be useful to take a look at the scalar kernels in <ref>, <ref> and
<ref> first.
Analogous to the 2p, 3p and np kernels, where the name indicated the number of scalar kernel points in a local, sparse and separable
kernel (<ref>), we also denote the local, sparse and separable vectorial kernels by 1v, 2v and nv:
The 1v-Feature (section <ref>) uses a kernel with a single vector component and acts as vectorial extension of
the 2p-Feature. Basically, it integrates
the local similarities of the data vectors with the normal vectors of a spherical neighborhood template. The 1v kernel is especially
suitable for the detection of sphere-like convex structures and is primarily a shape feature, not a texture feature.
The 2v-Feature (section <ref>) applies a variation of the 1v kernel: instead of using the normal vectors of a spherical
neighborhood template,
the 2v kernel integrates over the similarities of the data vectors with the centering vector X( x). 2v kernels return
more texture based and less shape based features.
Finally, we introduce the nv-Feature (section <ref>) where we apply the direct extension of the np kernel
(section <ref>) to 3D vector fields in order to derive highly specific local features.
Related Work:
In general, there have not been many publications on local invariant features for 3D vector fields. One exception is the work of
<cit.>, which uses a voting scheme in a 3D gradient vector field to detect spherical structures. The results of this
feature are practically identical to those of our 1v-Features - both just follow different approaches to implement a detector which
could be considered as Hough-Transform for spheres.
§ 1-VECTOR FEATURES (1V)
Haar-FeatureVectorial Haar-Feature1v-Feature
The 1v-Feature uses a kernel with a single vector component and acts as vectorial extension of the 2p-Feature. Basically, it integrates
the local similarities of the data vectors with the normal vectors of a spherical neighborhood template.
§.§ Feature Design
Given a 3D vector field X:ℝ^3→ℝ^3, we extract local features from the spherical neighborhoods
S[r]( x) at positions x. We integrate over the dot-products between the vectorial data X( x_i) and
the normal vectors x_i^ at all positions x_i ∈ S[r]( x) on the sphere around x.
The normal vectors are defined as:
x_i^ := α( x - x_i ),
where the α∈{-1,1} factor determines whether the normal vector points towards or away from the feature extraction point.
Figure <ref> illustrates the basic kernel design.
§.§.§ Rotation Invariance
If we plug the dot-product into the general Haar framework (<ref>), we can achieve invariance regarding
rotations R(ϕ,θ, ψ)∈ SO(3) parameterized in Euler angles (see section <ref>).
It is obvious that all possible positions x_i lie on the spherical neighborhood
S[r]( x) with the radius
r=| x - x_i|,
whereas the normal vector x_i^ changes with the position according to (<ref>). Because we are considering
a single kernel vector, we can reduce the integral over all rotations to an integral over all points of the spherical neighborhood
parameterized by the angles ϕ and θ (see <ref> for a detailed justification).
The final formulation of the 1v-Feature is then:
T[r, α]( x) := ∫_ x_i ∈ S[r]( x)⟨ x_i^ , X( x_i)
⟩sinθ dϕ dθ.
§.§ Implementation
The evaluation of (<ref>) could be implemented in a straightforward manner. However, usually we want to compute features at
all voxels x simultaneously. Therefore, we propose an optimized algorithm: we pre-compute a vectorial template
T[r,α], which simply holds the normal vectors of the spherical neighborhood S[r]( x) weighted by α.
Figure <ref> shows such a template.
We then reformulate the dot-product in (<ref>) as component-wise convolution of
T[r,α] with S[r]( x):
T[r, α]( x) := ∑_c=0^2 X[c]|_ S[r]( x) * T[r,α][c],
where X( x)[c] returns the cth directional component of X( x). Hence, we can apply a fast convolution to
simultaneously evaluate (<ref>) at all voxels x:
T[r, α]( X) := ∑_c=0^2 FFT^-1( FFT( X[c]) · FFT( T[r,α][c])).
For discrete input data we have to handle the implementation of the spherical template T[r,α] with some care. To avoid sampling issues,
we apply the same implementation strategies as in the case of the Spherical Harmonic base functions (see section <ref>
for details).
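As an illustration, the following numpy sketch builds such a vectorial template with a naive binary rasterization of the shell (the careful sampling strategy referenced above is omitted) and evaluates the 1v-Feature at all voxels via component-wise FFT convolution, following the convolution formulation above; the orientation of the normals is controlled by alpha.

import numpy as np

def normal_template(shape, r, alpha=1, thickness=1.0):
    # unit normal vectors of the spherical neighborhood of radius r, oriented by alpha
    grid = np.indices(shape).astype(float)
    center = np.array([s // 2 for s in shape], dtype=float)
    d = grid - center[:, None, None, None]
    dist = np.sqrt((d ** 2).sum(axis=0))
    shell = np.abs(dist - r) <= thickness / 2.0
    T = np.zeros((3,) + tuple(shape))
    T[:, shell] = -alpha * d[:, shell] / dist[shell]   # alpha * (x - x_i), normalised
    return T

def feature_1v(X, r, alpha=1):
    # X: vector field of shape (3, Z, Y, X); component-wise convolution with the template
    T = normal_template(X.shape[1:], r, alpha)
    out = np.zeros(X.shape[1:])
    for c in range(3):
        out += np.fft.ifftn(np.fft.fftn(X[c]) * np.fft.fftn(np.fft.ifftshift(T[c]))).real
    return out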
Multi-Channel Data:
1v-Features cannot directly combine data from several channels into a single feature. In case of
multi-channel data, we would have to compute features for each channel separately.
§.§.§ Complexity
Using the convolution approach, we end up with a complexity of O(m log m) for an input volume of size m.
Parallelization:
Since there is no easy way to parallelize the Fourier Transform, we do not further parallelize the computation
of 1v-Features. But since 1v-Features can be computed so fast anyway, this is not a real drawback.
§.§ Discussion
The 1v-Feature provides a very fast and rotation invariant method for the extraction of local features from 3D vector fields.
The nature of the kernel vectors given as normals of the spherical neighborhood makes the 1v kernel an optimal detector for spherical
structures which relies mostly on shape and hardly on texture properties of the underlying data. The α factor then indicates if
we detect the inner or the outer surface of a spherical shape.
In an alternative interpretation, the 1v approach could be seen as Hough-Transform <cit.> for spheres. This could be reinforced by
an additional integration over several radii.
§ 2-VECTOR FEATURES (2V)
Haar-FeatureVectorial Haar-Feature2v-Feature
The 2v-Feature uses a variation of the 1v kernel: instead of using the normal vectors of a spherical
neighborhood template, the 2v kernel integrates over the similarities of the data vectors with the centering vector X( x).
§.§ Feature Design
Given a 3D vector field X: ℝ^3→ℝ^3, we extract local features from the spherical neighborhoods
S[r]( x) at positions x.
The basic idea of the 2v kernel is to compute the similarity (in terms of the dot-product) of the direction of the data vectors
X( x_i), ∀ x_i ∈ S[r]( x) with the direction of the center vector X( x).
Rotation Invariance:
If we plug the 2v kernel into the general Haar framework (<ref>), we can achieve invariance regarding
rotations R(ϕ,θ, ψ)∈ SO(3) which are parameterized in Euler angles (see section <ref>).
Just as in the 1v case, we use the fact that all possible positions of the x_i lie on the spherical neighborhood
S[r]( x) with the radius:
r=| x - x_i|,
and again, since we are considering
only a single kernel vector, we can reduce the integral over all rotations to an integral over all points of the spherical neighborhood
parameterized by the angles ϕ and θ (see <ref> for a detailed justification).
The final formulation of the 2v-Feature is then:
T[r]( x) := ∫_ x_i ∈ S[r]( x)⟨ X( x) , X( x_i) ⟩sinθ dϕ dθ.
§.§ Implementation
The implementation strictly follows the convolution based algorithm introduced for the 1v case
(see section <ref>). The only difference
is that the vectors in the template T are oriented in the same direction as X(x).
Multi-Channel Data:
2v-Features can combine data from two channels into a single feature: we can simply extract the kernel direction X[c_1]( x)
and the neighborhood data X[c_2]( x_i) from different channels.
§.§.§ Complexity
Using the convolution approach, we end up with a complexity of O(m log m) for an input volume of size m.
Parallelization:
Since there is no easy way to parallelize the Fourier Transformation, we do not further parallelize the computation
of 2v-Features. But as 2v-Features can be computed so fast anyway, this is not a real drawback.
§.§ Discussion
The fast 2v kernels return more texture based and less shape based features. Intuitively, 2v-Features are an indicator for the
local homogeneity of the vector field. The name 2v-Feature might be misleading to some degree, since we only consider a single kernel vector.
But in contrast to the 1v-Feature approach, we actually combine two vectors from the input data X( x) and X( x_i).
§ N-VECTOR FEATURES (NV)
Haar-FeatureVectorial Haar-Featurenv-Feature
The nv-Features are the direct extension of the np-Features (see section <ref>) to 3D vector fields. Analogous to the
properties of np kernels on scalar (multi-channel) data, the goal is to derive highly specific local features for the detection of local
structures (objects) in 3D vector fields.
To obtain a strong discrimination power, we introduce a vectorial kernel which is able to handle an arbitrary number of kernel vectors
v_1,…, v_n instead of only one or two (as for 1v,2v-Features).
Since the entire derivation of the nv kernel strongly relies on the very same methods and algorithms that were introduced for the derivation
of the np kernel, the reader may refer to section <ref> for some technical details.
Given a 3D vector field X: {ℝ^3→ℝ^3}, we extract local features from the spherical neighborhoods
S[r]( x) at positions x. For the kernel vectors v_i ∈{ℝ^3×ℝ^3},
we write v̇_i ∈ℝ^3 and v_i∈ℝ^3 to address their position and direction.
There are two major differences in the basic formulation between general sparse and local scalar (np) and vectorial (nv) kernels:
first, we do not explicitly consider a center vector for nv kernels (even though the framework would
allow such a constellation). The main reason to do so is given by the second difference: since non-linear mappings (like the κ_i
in the np kernel) are not well defined on vectorial data, we do not use a separable kernel approach (<ref>)
for the construction of the nv kernel.
Instead, we are following the alternative (fast) approach (<ref>), which allows us to formalize the n-Vector
kernels in a more abstract way:
(<ref>):
κ(s_g( v_1),
… ,
s_g( v_n)).
Figure <ref> shows an example nv kernel. Later on, we give the actual kernel mapping κ, which is still
non-linear, but does not operate directly on vectorial data.
§.§ Feature Design
The primary goal is to achieve rotation invariance.
Hence, the transformation group G is
given by the group of 3D rotations SO(3). If we parameterize these global rotations R∈ SO(3) as local rotations
of the kernel vectors in Euler angles s_g(ϕ,θ,ψ) (see Fig. <ref>), we can rewrite
(<ref>) as:
T[Λ]( x) := ∫_ SO(3)κ( s_g_(ϕ,θ,ψ)( v_1),…,
s_g_(ϕ,θ,ψ)( v_n))
sinθdϕ dθ dψ,
where Λ is the set of parameters, i.e. including κ - we define Λ in detail when we present
the parameterization of the kernel in the next sub-section (<ref>).
§.§.§ Parameterization
The key for a fast computational evaluation of (<ref>) is the smart parameterization of the kernel. Following the
approach for the np kernels, we parameterize the position of the kernel vectors as points v̇_i with i∈{1,…,n} located
at concentric spherical neighborhoods S[r_i]( x) with radii r_i surrounding the point of the feature extraction x.
Hence, each v̇_i is parameterized by the spherical angles
Φ_i ∈ [0,…, 2π], Θ_i ∈ [0,…, π] and r_i ∈ℝ (also see figure <ref>).
The overall parameter set Λ thus includes the parameterized position v̇_i, the direction v_i
(which is normalized to | v_i |=1) and the non-linear mapping κ which will be split into κ_1,…,κ_n
later on:
Λ := {{κ_1,r_1,Φ_1,Θ_1, v_1},…,
{κ_n,r_n,Φ_n,Θ_n, v_n}}.
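To make this parameterization concrete, the following minimal Python sketch (a hypothetical illustration, not the original implementation) models Λ as a list of per-vector records and converts the spherical position parameters (r_i, Φ_i, Θ_i) into Cartesian offsets around the extraction point x:

from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class KernelVector:
    """One entry of the parameter set Lambda: non-linear mapping kappa_i,
    spherical position (r_i, Phi_i, Theta_i) and unit direction v_i (illustrative)."""
    kappa: Callable[[np.ndarray], np.ndarray]
    r: float
    phi: float      # Phi_i in [0, 2*pi]
    theta: float    # Theta_i in [0, pi]
    v: np.ndarray   # direction, normalized to |v| = 1

    def position_offset(self) -> np.ndarray:
        """Cartesian offset of the kernel point relative to the extraction point x."""
        return self.r * np.array([
            np.sin(self.theta) * np.cos(self.phi),
            np.sin(self.theta) * np.sin(self.phi),
            np.cos(self.theta),
        ])

# A hypothetical 3-vector kernel (n = 3) with simple choices for the kappa_i:
Lambda = [
    KernelVector(kappa=lambda c: c,    r=4.0, phi=0.0,       theta=np.pi / 2, v=np.array([0.0, 0.0, 1.0])),
    KernelVector(kappa=lambda c: c**2, r=6.0, phi=np.pi / 2, theta=np.pi / 2, v=np.array([0.0, 1.0, 0.0])),
    KernelVector(kappa=np.abs,         r=6.0, phi=np.pi,     theta=np.pi / 3, v=np.array([1.0, 0.0, 0.0])),
]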
Given this parameterization, we further follow the approach from the np derivation and introduce “delta like” vectorial template
functions T_i which represent the kernel vectors T_i[r_i] in the harmonic domain:
( T_i[r_i, Φ_i,Θ_i, v_i])_km^l = v_i^T Z^l_km(Φ,Θ).
Now we have a frequency representation of the individual kernel vectors. In the next step, we evaluate the contribution of the input data at
these kernels.
For each feature evaluation,
we perform Vectorial Harmonic expansions around x at the radii r_i (associated with the position of the respective kernel vectors)
of the input vector field X:
S[r_i]( x) = VH[r_i]( x).
With the data and the kernel vectors represented in the harmonic domain, we can apply a fast correlation to evaluate the contribution
of each kernel point on the local data and perform this evaluation over all rotations.
Given a vector at position x, we compute the result C^#_i of this fast correlation over all spherical angles for
the i-th kernel vector as shown in (<ref>):
C^#_i = S[r_i]( x) # T_i.
§.§.§ Rotation Invariance
As in the case of n-Point kernels, we need to couple the contributions of the individual kernel vectors in such
a way that only the chosen kernel constellation (given by the Φ_i,Θ_i, r_i) has a contribution to the feature while rotating over
all possible angles, i.e. the positions of the kernel vectors must not rotate independently.
Note that the correct orientation of the kernel vectors
under the rotation is guaranteed by the Vectorial Harmonic formulation.
Since the C^#_i hold the contribution at each possible angle in a 3D Euclidean space with a (ϕ,θ,ψ) coordinate-system
(see section <ref>), we can perform the multiplicative
coupling of the separate sub-kernels (<ref>) by an angle-wise multiplication of the point-wise
correlation results: ∏_i=2^n C^#_i.
By integrating over the resulting Euclidean space of this coupling, we easily obtain rotation invariance as in
(<ref>):
∫_ SO(3)( ∏_i=2^n C^#_i ) sinθdϕ dθ dψ.
Finally, we still have to introduce the non-linear mapping into (<ref>) to satisfy (<ref>).
We follow the fast approach from (<ref>), where we split κ into n non-linear mappings κ_1,…,
κ_n which act directly on the correlation matrices. This leads to the final formulation of the nv-Feature:
T[Λ]( x) := ∫_ SO(3)( ∏_i=1^n κ_i(
C^#_i) ) sinθ dϕ dθ dψ.
Figure <ref> shows a schematic overview of the computation of nv-Features.
§.§ Implementation
The transformation into the harmonic domain is implemented as described in section <ref>. Hence, we
can also obtain the expansions at all points in X at once using the convolution approach analogous to (<ref>).
The implementation of the templates T_i has to be handled with some care: to avoid sampling issues, we apply
the same implementation strategies as in the case of the Spherical Harmonic base functions (see section <ref>
for details).
The computation of the correlation matrices C^# follows the algorithm given in section <ref>. The
size of the padding p we need to apply strongly depends on the angular resolution necessary to resolve the given configuration of the kernel
points.
Finally, the evaluation of the Haar-Integration over all possible rotations is approximated by the sum over the combined
(ϕ,θ,ψ)-space:
T[Λ]( x) ≈ ∑( ∏_i=2^n C^#_i ).
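A minimal numpy sketch of this discretized evaluation, assuming the per-kernel-vector correlation grids C_i^# have already been computed on a common (ϕ,θ,ψ) grid (all function and variable names are illustrative, not taken from the actual implementation):

import numpy as np

def nv_feature_from_correlations(C_grids, kappas):
    # C_grids: list of per-kernel-vector correlation results C_i^# sampled on a
    # common (phi, theta, psi) grid; kappas: element-wise non-linear mappings kappa_i.
    coupled = np.ones_like(C_grids[0])
    for C, kappa in zip(C_grids, kappas):
        coupled = coupled * kappa(C)      # angle-wise multiplicative coupling
    # plain sum over the combined (phi, theta, psi)-space; a sin(theta) weight from
    # the continuous Haar integral could additionally be folded in here
    return float(np.sum(coupled))

# toy usage with three random "correlation grids" on a 16 x 8 x 16 angle grid
rng = np.random.default_rng(0)
C = [rng.standard_normal((16, 8, 16)) for _ in range(3)]
value = nv_feature_from_correlations(C, [lambda c: c, np.abs, lambda c: c ** 2])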
Multi-Channel Data:
nv-Features cannot directly combine data from several channels into a single feature. In case of
multi-channel data, we have to compute features for each channel separately.
§.§.§ Complexity
The computational complexity of the nv-Feature is dominated by the Vectorial Harmonic expansions needed to transform the
input data and the kernel vector templates into the harmonic domain. This takes O( b_max· m log m) for input data of size
m and O(n · b_max· m' log m') for a template size of m'. The costs for the correlation and multiplication of the
correlation matrices are negligible.
Parallelization:
As stated in section <ref>, we can gain linear speed-up in the number of cores for the parallelization of the
harmonic transformation. Further, we could also split the computation of the correlation matrices into several threads, but
as mentioned before, this speed-up is hardly noticeable.
§.§ Discussion
The nv-Features provide a powerful framework for the implementation of local features which are able to obtain invariance towards
rotation via Haar-Integration.
In practice, nv-Features are especially suitable for the design of highly specific features with a strong discriminative power used in
challenging image analysis tasks justifying the higher computational costs. For less complex problems, we are better off using some
of the less complex feature methods.
A major problem concerning the application of nv-Features is the huge set of kernel parameters Λ (<ref>)
we have to choose. In practice, it is infeasible to try all possible parameter combinations in a feature selection process, as we suggest
for other features. Neither is it practically possible to select the best parameter settings by hand.
CHAPTER: EXPERIMENTS
In the final chapter of the first part, we evaluate the feature methods which were introduced in the previous chapters. We start with the
evaluation of the speed and accuracy of our fast correlation in Spherical Harmonics in section <ref> and
the correlation in Vectorial Harmonics in section <ref>.
Section <ref> evaluates the computational complexity of our features on real world data. Then we use a database of
semi-artificial 3D textures (see Appendix <ref>) to perform a series of 3D texture classification (see section
<ref>).
§ EVALUATING SH-CORRELATION
Unlike previous publications <cit.><cit.><cit.>, which only performed a small set of experiments with a fixed
number of predefined example rotations, we evaluate our methods with a series of large-scale experiments on real-world data.
If not mentioned otherwise, all experiments have the same basic setup: for each parameter set, we evaluate the error statistics of 100
random rotations of random objects. We generate the rotations over all possible angles ϕ,ψ∈ [0, 2π[ and θ∈
[0, π[ with a resolution of 0.001 ≈ 0.1^∘. Note that an error of 1^∘ corresponds to ≈ 0.017 rad. All given error rates are
the sums over the errors of all three angles.
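A sketch of this evaluation protocol in Python (the rotation estimator itself is replaced by a placeholder; all names are illustrative):

import numpy as np

rng = np.random.default_rng(42)

def random_euler_angles():
    # phi, psi in [0, 2*pi[, theta in [0, pi[, as in the experiments above
    return np.array([rng.uniform(0, 2 * np.pi),
                     rng.uniform(0, np.pi),
                     rng.uniform(0, 2 * np.pi)])

def summed_angle_error(true_angles, estimated_angles):
    # absolute per-angle error, wrapped into [0, pi], summed over all three angles
    diff = np.abs(true_angles - estimated_angles) % (2 * np.pi)
    return float(np.minimum(diff, 2 * np.pi - diff).sum())

errors = []
for _ in range(100):
    angles = random_euler_angles()
    # placeholder for the SH-correlation based estimate of the applied rotation
    estimated = angles + rng.normal(0.0, 0.01, size=3)
    errors.append(summed_angle_error(angles, estimated))

print(f"mean summed error: {np.mean(errors):.4f} rad (1 degree is about 0.017 rad)")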
§.§.§ Rotating Objects in the Harmonic Domain
In this first series of experiments, we extract a harmonic expansion with a fixed radius around the object center and then
rotate this expansion using (<ref>).
Pad Size: In a first experiment, we are able to show the effect of our padding method on the estimation accuracy.
Figure (<ref>) clearly shows the correlation of the pad size and the expected error.
It is also evident that we are able to achieve a high precision with errors below 1 degree. Hence, the experimental errors are found to be
well within the theoretical bounds given in (<ref>).
Maximum Band: The next two experiments investigate the practical influence of the maximum expansion band on the
estimation errors.
Figure (<ref>) strongly supports our initial assumption that the original formulation is not able to achieve
accurate estimates for
low expansions. Our method on the other hand achieves very low error rates even for extremely low expansions with b=2.
Rotational Invariance and Computational Costs:
Rotational Invariance and Computational Costs are investigated in the last two experiments
(figure (<ref>)) of
the first series. We rotate the object in π/8 steps in every angle to show that the correlation maximum is stable and
indeed independent of the rotation.
The computational complexity is largely dominated by the costs for the inverse FFT, hence growing with the pad size. So accuracy comes at
some cost but reasonable accuracy can still be achieved well within 1 second.
§.§.§ Rotating Objects in ℝ^3
The results of figure (<ref>) suggest that the maximum expansion band has no influence on the quality
of the rotation
estimation - of course, this is only true if we are considering input signals that are limited to the very same maximum band. This is very
unlikely for very low bands in the case of real data.
In order to evaluate the actual influence of the maximum expansion band, we need to rotate the objects in ℝ^3 and extract
a second harmonic expansion after the rotation.
As mentioned before, the usability of our sinc interpolation approach is limited to correctly sampled (concerning the Sampling Theorem)
input signals (also see section <ref> for more details on sampling issues). Hence, one must not expect to obtain precise rotation
estimates when low-band expansions, which act as a low-pass filter, are applied to high-frequency input signals.
Luckily, for most input data, we do not depend on the high-frequency components in order to find the maximum correlation. Hence, we
can apply a low-pass filter (Gaussian) to the input data prior to the harmonic expansion.
Figure (<ref>) shows the impact of the maximum band and smoothing for rotations in ℝ^3. Overall,
the estimation results
are slightly worse than before, but are still quite reasonable.
§ EVALUATING THE FEATURE COMPLEXITY
We evaluated the computational complexity on dummy volume data. All experiments were conducted on a 3 GHz machine with 16 CPU cores and 128 GB RAM.
However, only a single CPU core was used unless noted otherwise.
§.§ Complexity of the Spherical Harmonic Transformation
We conducted two experiments to show the complexity of the voxel-wise Spherical Harmonic transformation of 3D volume data. The complexity
is independent of the actual data and is only influenced by the maximum expansion band (b_max) and the data size as shown in
figure <ref>.
§ EVALUATING VH-CORRELATION
We use a sample 3D
vector field (see figure <ref>) which is rotated around the center of one
spherical patch parameterized by a single radius of r=10.
For each experiment, we evaluate the error statistics of 100
random rotations of this vector field. We generate the rotations over all possible angles
φ,ψ∈ [0, 2π[ and θ∈ [0, π[ with a resolution of 0.001
≈ 0.1^∘. Note that an error of 1^∘ corresponds to ≈ 0.017 rad. All given error rates
are the accumulated errors of all three angles.
Figure <ref> shows the direct effect of the maximum expansion band b_max on the
rotation estimate. But even for expensive “higher band” expansions, we encounter strong outliers
and a rather poor average accuracy.
This can be compensated by our Sinc interpolation approach (<ref>): Figure
<ref> shows how we can reduce the rotation estimation error well below 1^∘,
just by increasing the pad size p. The additional computational costs caused by the padding
are also given in figure <ref>.
Summarizing these first experiments, we are able to show that our proposed method is able
to provide a fast and accurate rotation estimation even for rather low band expansions, e.g.
if we choose p=64 and b_max=5, we can expect an average estimation error below 1^∘
at a computation time of less than 25ms.
Key Point Detection.
In a second series of experiments, we evaluate the performance of
our methods in a key point (or object) detection problem on artificial data. Figure <ref>
shows the 3D vector fields of two of our target structures. Our goal is to detect the center
of such X- and Y-like shaped bifurcations under arbitrary rotations in larger vector fields.
For each target structure, we extract a single patch, parameterized in four different radii with
b_max=3, at the center of the bifurcations.
Using (<ref>), we extract patches with the same parameterization at each point of the
test samples and apply our fast, combined (see section <ref>) and normalized
(<ref>) cross-correlation to detect the target structures in the test vector fields.
Figures <ref> and <ref> show some example test data together with the correlation results.
It should be noted that the test bifurcations are only similar in terms of an X or Y shape,
but not identical to the given target structures. We also rotate the test data in a randomized
procedure over all angles.
Applying a threshold of 0.9 to the correlation results, we were able to detect the correct
target structures in all of our test samples without false positives.
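A minimal sketch of this final detection step, assuming a normalized correlation volume has already been computed for the whole test field (the names and the scipy-based local-maximum filtering are our illustrative choices, not necessarily the original implementation):

import numpy as np
from scipy import ndimage

def detect_targets(corr, threshold=0.9, neighborhood=11):
    # keep voxels that exceed the threshold and are local maxima of the
    # normalized correlation volume within a cubic neighborhood
    local_max = corr == ndimage.maximum_filter(corr, size=neighborhood)
    return [tuple(p) for p in np.argwhere(local_max & (corr > threshold))]

# toy usage: a synthetic correlation volume with a single strong response
corr = np.zeros((64, 64, 64))
corr[30, 30, 30] = 0.95
print(detect_targets(corr))   # [(30, 30, 30)]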
§.§ Complexity of the Vectorial Harmonic Transformation
We also performed the experiment measuring the complexity as a function of the maximum expansion band (b_max) for the voxel-wise
Vectorial Harmonic transformation of 3D volume data. Figure <ref> clearly shows that the complexity of the
transformation in the vectorial case is much higher than in the scalar case. This can only be compensated by the parallelization of
the transformation.
§.§ Complexity of a voxel-wise Feature Extraction
In the final experiment regarding the computational complexity, we evaluated all features on a (250× 250 × 250) volume texture
sample. We extracted voxel-wise features simultaneously at all voxels. We used a fixed radius of r=10 and evaluated the computation time
on a single core with b_max={3,5,8}.
Figure <ref> illustrates the computation time for all features which is also given in table
<ref>. The complexity of the individual features has a wide range: from about 3 seconds for the
computation of the simple 2p-Feature (which is not based on a Spherical Harmonic transformation), to almost 4 hours needed to compute
a 4v-Feature with b_max=8 at every voxel of the (250× 250 × 250) volume.
It is obvious that some of the features are too complex to be of practical use in a setting such as the one presented here. In particular, the computation
of the highly specialized np and
vp-Features at all voxels and at a high expansion band b_max appears to be practically intractable.
However, it turns out that this is
not a major drawback in practice: First of all, as figure <ref> shows, the features are well suited for
parallelization, and second, it is usually not necessary to compute such specific features at all 256^3 voxels. Typically, it is very easy
to reduce the number of candidate voxels drastically, if one uses the response of “cheap” features to perform a rough
pre-segmentation.
Multicore Speed-up: We examined the potential speed-up of a parallelization of the feature computation using the example of the
5p feature (n=5; see table <ref>).
Using 8 instead of a single CPU core, the complexity drops from 6350s (almost
2 hours) to 1700s (≈ 30min). Figure <ref> shows how the parallelization affects the different computation steps
like the SH transformation, the correlation step or the non-linear transformations and multiplications.
§ EVALUATING 3D TEXTURE DISCRIMINATION
In a final experiment, we evaluated the texture discrimination performance of our proposed features. The experiments were conducted
on our artificial 3D volume texture database (see appendix <ref> for details on this database).
Using the SIMBA feature selection algorithm, we extracted the top 10 parameter combinations
for each of our features. Scalar features were expanded to the 5th band, vectorial features were computed on the gradient field of the scalar
input data and expanded to the 3rd band.
Given these feature vectors, we used a voxel-wise SVM classification to evaluate the 3D texture segmentation
performance of the individual features.
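A minimal scikit-learn sketch of such a voxel-wise SVM evaluation, assuming per-voxel feature vectors and texture labels have already been extracted (the arrays below are random placeholders standing in for the real features):

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# placeholders for voxel-wise feature vectors (rows) and texture labels (0..5)
X_train, y_train = rng.standard_normal((500, 10)), rng.integers(0, 6, 500)
X_test, y_test = rng.standard_normal((200, 10)), rng.integers(0, 6, 200)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("voxel-wise accuracy:", accuracy_score(y_test, clf.predict(X_test)))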
Our evaluation clearly shows that those features that are not invariant towards gray-scale changes strongly suffer in the case of such changes.
The vectorial features appear to be very stable, however this comes at the cost of higher computational complexity (see table
<ref>).
The highly specific np and vp-Features are not able to outperform the other approaches. These features are probably too selective to be able to
describe the large variations in the textures with just 10 parameter settings. However, these features have been designed for key point and object
detection (see section <ref>) rather than texture description anyway.
CHAPTER: ARTIFICIAL 3D VOLUME TEXTURE DATABASE
The following tables show a few sample images of xy-slices taken from the training samples of our artificial 3D volume texture database.
§.§ Texture Generation
The volume textures were generated from 2D texture samples which were taken from the BTF texture database provided by the University
of Bonn (http://btf.cs.uni-bonn.de/download.html). Figure <ref> gives an overview of our very simple volume texture generation process.
The number of linear combinations n, as well as the rotations R_i and factors α_i ∈ [0,1] are chosen randomly.
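The generation step can be sketched as follows (a toy 2D version with in-plane rotations; the parameter ranges are illustrative assumptions, not the ones actually used):

import numpy as np
from scipy import ndimage

def generate_texture(base, rng=np.random.default_rng()):
    # random number of linear combinations n, random rotations R_i and
    # random mixing factors alpha_i in [0, 1], as described above
    n = int(rng.integers(2, 6))
    result = np.zeros_like(base, dtype=float)
    for _ in range(n):
        alpha = rng.uniform(0.0, 1.0)
        angle = rng.uniform(0.0, 360.0)          # toy 2D (in-plane) rotation
        result += alpha * ndimage.rotate(base, angle, reshape=False, mode="reflect")
    return result / n

sample = generate_texture(np.random.default_rng(1).random((64, 64)))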
§.§ Base Textures
The database contains 10 “base samples” for each of the six different textures (texture 1-6), all of which have a normalized average
gray-value. These “base samples” are used to generate separate training and test sets using arbitrary rotations and additive
gray-value changes.
§.§ Texture Segmentation
Given the “base samples” of the 3D volume textures, we generated a simple texture segmentation benchmark. One half of the “base samples”
was used to build 60 labeled training samples (see figure <ref>) and the other half was used for the test samples.
The 200 test samples consist of random combinations of two textures with a ground-truth labeling, where each of the textures was
rotated randomly and subject to an additive gray-value change (see figure <ref>).
|
http://arxiv.org/abs/2307.02675v1
|
20230705220732
|
Superopers revisited
|
[
"Anton M. Zeitlin"
] |
math.AG
|
[
"math.AG",
"hep-th",
"math-ph",
"math.MP",
"math.QA",
"math.RT"
] |
Superopers revisited
Anton M. Zeitlin
Department of Mathematics,
Louisiana State University,
Baton Rouge, LA 70803, USA. Email: [email protected], http://math.lsu.edu/∼zeitlin
The relation between special connections on the projective line, called Miura opers, and the spectra of integrable models of Gaudin type provides an important example of the geometric Langlands correspondence. The possible generalization of that correspondence to simple Lie superalgebras is much less studied. Recently some progress has been made in understanding the spectra of Gaudin models and the corresponding Bethe ansatz equations for some simple Lie superalgebras. At the same time, the original example was reformulated in terms of an intermediate object: Miura-Plücker oper. It has a direct relation to the so-called qq-systems, the functional form of Bethe ansatz, which, in particular, allows q-deformation. In this note, we discuss the notion of superoper and relate it to the examples of qq-systems for Lie superalgebras, which were recently studied in the context of Bethe ansatz equations. We also briefly discuss the q-deformation of these constructions.
August 1, 2023
§ INTRODUCTION
One of the most well-understood examples of geometric Langlands correspondence, studied by E. Frenkel and collaborators <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, is the relation between the spectrum of Gaudin models for simple Lie algebra 𝔤 and Miura ^LG-opers, namely certain meromorphic connections for the principal ^LG-bundles over ℙ^1.
Here ^LG is a simple Lie group of adjoint type with the Langlands dual Lie algebra ^L𝔤. In particular, the set of algebraic equations, known as Bethe equations, which describes the spectrum of the Gaudin model, provides the relation between the “moduli" parameters determining the corresponding oper connections.
Recently a q-deformation of this correspondence was studied <cit.>, <cit.>, which led to various generalizations <cit.>, <cit.> and applications <cit.>, <cit.>.
In particular, the concepts of Z-twisted Miura oper and Z-twisted Miura-Plücker oper were introduced, which have analogues in the differential case <cit.>, <cit.>.
The first notion means that a meromorphic gauge transformation produces a constant connection Z out of a given Miura oper.
The second, Z-twisted Miura-Plücker oper, is a less restrictive notion, which one can view as a first approximation to a Z-twisted condition. Namely, one applies Z-twisted condition only to the induced GL(2)-oper connections in the 2-dimensional subbundle of the associated bundle for each fundamental representation, corresponding to the vectors of fundamental weight and its elementary Weyl reflection. As a result, one obtains one-to-one correspondence between the data of Z-twisted Miura-Plücker opers and the functional system of equations called qq-system (QQ-system in q-deformed case). Such functional relations previously emerged in the study of the Gaudin model and other spin chain models. In that case, parameter Z is a twist parameter for the boundary condition. Under certain nondegeneracy conditions, these functional relations lead to the Bethe equations describing the spectrum of the related integrable model.
It was also shown in <cit.> that upon these nondegeneracy conditions, Z-twisted Miura-Plücker opers turn into Z-twisted Miura opers, where Z is an element of Cartan.
An important notion in proving that is the notion of Bäcklund transformations, the gauge transformations, which produce w(Z)-twisted Miura oper from a given one, where w is an element of the Weyl group of ^LG.
Miura oper connections are specializations of more general oper connections: they are required to preserve a specific reduction of the principal bundle to the Borel subgroup. In other words, the Bäcklund transformations allow one to “travel" between various Miura oper connections corresponding to one single oper connection. In the context of Bethe ansatz on the level of qq-systems, these transformations are known as the reproduction procedure and populations, and were tied to the transformations of scalar differential operators, see, e.g., <cit.>, <cit.>.
One would expect to repeat that story in the context of simple superalgebras and supergroups: that, in particular, would give an example of Langlands duality in the super-context. The quest for finding a notion of oper in the context of simple supergroups started in <cit.>, where the notion of oper on a super Riemann surface, called superoper, was introduced, which uses the concept of flat superholomorphic connection. We considered the case of supergroups that allow a purely fermionic system of simple roots. In that case, there is an explicit representation of the space of opers in terms of sections of super projective connections and superconformal vector bundles, known as superconformal fields. Also, as a particular example, we have shown in <cit.> the correspondence between the OSP(1|2)-superoper on a super Riemann sphere and 𝔬𝔰𝔭(1|2)-Gaudin model studied in <cit.>, <cit.>.
Here we consider the superopers for general simple simply-connected Lie supergroups on a super Riemann sphere. We associate with them a purely even connection on a projective line, which is an oper connection for the reductive purely even subgroup. We define the notion of Z-twisted Miura-Plücker oper in this case and formulate more general notions of
pq-system and qq-systems describing those and their specializations.
The difference between purely even cases and supergroup cases is that there are several Dynkin diagrams for simple superalgebras, each generally producing a gauge inequivalent oper connection. At the same time, one can “travel" between various Dynkin diagrams for a given Lie supergroup if one adds odd Weyl reflections generated by odd roots, which are not automorphisms of Lie superalgebra and certainly do not lift to the Lie supergroup.
In this note, we propose to unify the Miura oper connections corresponding to all Dynkin diagrams by introducing formal Bäcklund transformations, which are no longer produced by the gauge transformations of a given connection, which follow the generalized Weyl transformation property on the level of qq-systems. We conjecture that satisfying these Bäcklund transformations under certain nondegeneracy conditions will put enough constraints on the qq-system to be in 1-to-1 correspondence with Bethe equations for the corresponding Gaudin model.
Recently, in <cit.>, the authors looked at the qq-system, which describes the Bethe ansatz equations for 𝔤𝔩(n|m) Gaudin model, and studied their populations along the lines of <cit.>. That work was continued in paper <cit.>, where the authors aimed to do the same for 𝔬𝔰𝔭(n|2m) Gaudin Bethe ansatz using the same technique of the transformations of the factorizations of pseudo-differential operators.
The resulting transformations between data of the populations of these qq-systems bring remarkable transformations for the data which characterize the Miura-Plücker oper. We hope this more geometric approach can illuminate these reproduction procedures.
The structure of the paper is as follows. In Section 2 we discuss, following <cit.>, the notion of Miura superoper, which is a superholomorphic connection <cit.> on a super Riemann surface <cit.>, <cit.>. We also define its reduction to the oper connection on a Riemann surface. In Section 3, we introduce the updated notion of Miura-Plücker opers and the pq-system, which generalizes the qq-system for opers with regular singularities in the purely even case. Section 4 is mainly devoted to the discussion of Bäcklund transformations for Miura opers related to the extended Weyl group and the relation to the results of <cit.>, <cit.>. In the end, we also speculate about the q-deformation of these constructions, which should be related to the recent work <cit.>.
§.§ Acknowledgements
A.M.Z. is partially supported by Simons
Collaboration Grant 578501 and NSF grant
DMS-2203823.
§ SUPEROPERS ON SUPERCURVES AND THEIR REDUCTION
§.§ A reminder on super Riemann surfaces and super holomorphic
connections
For the general information on supermanifolds and superschemes one can consult <cit.>, <cit.>, <cit.>. For supercurves and Riemann surfaces one could follow <cit.>, <cit.>; for connections for vector bundles over super-Riemann surfaces we refer to <cit.>.
A supercurve of dimension (1|1) over some fixed Grassmann algebra S (which is fixed throughout this paper) is a pair (X,𝒪_X), where X is a topological space and 𝒪_X is a sheaf of supercommutative S-algebras over X such that (X,𝒪^red_X) is an algebraic curve: 𝒪^red_X is obtained from 𝒪_X by getting rid of nilpotents.
Locally, for some open sets U_i⊂ X and some linearly independent elements {θ_i} we have 𝒪_U_i=𝒪^red_U_i⊗ S[θ_i].
Such collection of open sets {U_i} serve as coordinate neighborhoods for supercurves with coordinates (z_i, θ_i).
The coordinate transformations on the overlaps U_i∪ U_j
are given by the following formulas: z_i=F_ij(z_j, θ_j), θ_i=Φ_ij(z_j, θ_j), where {F_ij}, {Φ_ij} are even and odd functions correspondingly.
A super Riemann surface over some Grassmann algebra S is a supercurve of dimension 1|1 over S, with one more extra structure: there is
a subbundle 𝒟 of TΣ of dimension 0|1, such that for any nonzero section D of 𝒟 on an
open subset U of Σ, D^2 is nowhere proportional to D, i.e. one obtains the exact sequence:
0→𝒟→ T→𝒟^2→ 0.
One can pick the holomorphic local coordinates in such a way that this odd vector field
will have the form f(z,θ)D_θ for non-vanishing function f(z,θ) and
D_θ=∂_θ+θ∂_z, D_θ^2=∂_z.
Such coordinates are called superconformal. The transformation between two superconformal coordinate systems
(z, θ), (z', θ') is determined by the condition that 𝒟 should be preserved:
D_θ=(D_θθ') D_θ',
so that the constraint on the transformation coming from the local change of coordinates is
D_θ z'-θ'D_θθ'=0.
An important example of a super Riemann surface is the super Riemann sphere SC^*: there are two
charts (z, θ), (z', θ') so that
z'=-1/z, θ'=θ/z.
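As a quick check, which is immediate from the definitions but not spelled out here, these charts indeed satisfy the superconformal condition (<ref>):
D_θ z'=∂_θ(-1/z)+θ∂_z(-1/z)=θ/z^2, D_θθ'=∂_θ(θ/z)+θ∂_z(θ/z)=1/z,
so that θ' D_θθ'=(θ/z)(1/z)=θ/z^2 and D_θ z'-θ'D_θθ'=0, using θ^2=0.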
We call the sections of
𝒟^n the superconformal fields of dimension -n/2 following <cit.>.
In particular, taking the dual of the exact sequence (<ref>),
we find that a bundle of superconformal fields of dimension 1, namely 𝒟^-2, is a subbundle in T^*. Considering the superconformal coordinate system, a nonzero section of
this bundle is generated by η=dz-θ dθ, which is orthogonal to D_θ under the standard pairing.
Let us consider a principal bundle ℱ_G over the super Riemann surface with the Lie supergroup G over Grassmann algebra S. As
usual, locally one can associate to the connection a differential operator, so that in the chart (z,θ) the connection has the following form:
d_A=d+A=d+(η A_z+dθ A_θ)+(η̅A_z̅+dθ̅A_θ̅)=
(∂+η A_z+dθ A_θ)+(∂̅+η̅A_z̅+dθ̅A_θ̅)=(η D^A_z+dθ D_θ^A)+(η̅D^A_z̅+dθ̅D_θ̅^A).
Here A takes values in 𝔤_S, the Lie algebra of G. We note that we used here the fact that d=∂+∂̅ and ∂=η∂_z+dθ D_θ.
The expression for
the curvature is:
F=d_A^2=dθ dθ F_θθ+η dθ F_zθ+ dθ̅dθ̅F_θ̅θ̅+ η̅dθ̅F_z̅θ̅+ηη̅F_zz̅+ η dθ̅F_zθ̅+η̅dθ F_z̅θ+dθ dθ̅F_θθ̅,
where
F_θθ=- D^A_θ^2+D^A_z, F_zθ=[D^A_z, D^A_θ], F_z, z̅=[D^A_z, D^A_z̅], F_zθ̅=
[D^A_z, D^A_θ̅], F_θθ̅=-[D^A_θ, D^A_θ̅].
It appears that if the connection d_A offers partial flatness, which implies F_θθ=F_zθ=F_θ̅θ̅=F_z̅θ̅=0, then there is a superholomorphic structure on any associated bundle (i.e. transition functions of the bundle can be
made superholomorphic) <cit.>. We are interested in the flat superholomorphic connections. In this case, since
F_θθ=0, the connection is fully determined by the D^A_θ locally. In other words it is determined by the
following odd differential operator, which from now on we will denote by ∇̂:
∇̂=D_θ+A_θ(z, θ),
so that the gauge transformation properties for A_θ are: A_θ→ gA_θg^-1-D_θg g^-1, where g is a superholomorphic function providing change of trivialization.
§.§ Miura Superopers
§.§.§ Notations
We refer to <cit.>, <cit.>, <cit.>, <cit.>, <cit.> for further information regarding simple Lie supergroups, superalgebras, and their representations. Let G be a simple simply connected Lie supergroup of rank r
over some Grassmann algebra S, and let B_- be its fixed Borel subgroup associated to a given Dynkin diagram, with unipotent radical N_-=[B_-, B_-].
Let B_+ be the opposite Borel subgroup, containing the maximal torus H, and
N_+=[B_+,B_+].
Note that the Lie algebra 𝔤_S of G is a module over S, namely 𝔤_S=S⊗𝔤, where 𝔤 is a simple Lie superalgebra over ℂ.
Let {α_1,… ,α_r } be the set of
positive simple roots for the pair H⊂ B_+. For a given Dynkin diagram,
we divide the index set of simple roots
I={1,… , r} into the union
I=I_w ⊔ I_g ⊔ I_b corresponding to the index sets of white, grey, and black roots. W stands for the Weyl group generated by the Weyl reflections corresponding to {α_i}_i∈ I_w.
Let {e_i, f_i, α̌_i}_i=1, …, r be the Chevalley generators of 𝔤, a_ji=⟨α̌_j, α_i⟩ is the Cartan matrix. We note also that for grey roots a_ii=0 for i∈ I_g, and,
in addition to the standard Serre relations for {e_i}, {f_i}, there are extra Serre relations related to the grey root generators.
The Lie superalgebra _-,S=_-⊗ S=(B_-) is generated by the f_i's and the
α̌_i's, and _+, S=_+⊗ S=(B_+) is generated by the e_i's and the α̌_i's.
Let π: S→ℂ be the natural projection. It can be extended to π: G→G̅, where G̅ is the underlying reductive simply connected group over ℂ. We denote by
𝔤̅,
𝔟̅_±,
𝔥̅,
𝔫̅_± the corresponding pure even reductive Lie algebra, and the pure even versions of its Borel, Cartan and the maximal nilpotent subalgebras, while G̅,
B̅_±,
H̅,
N̅_± the corresponding subgroups.
§.§.§ Definition of superopers
Now we are ready to define the notion of superoper following a similar definition in the pure even case <cit.>, <cit.> and inspired by the study of integrable hierarchies of Drinfeld-Sokolov type and related integrable models <cit.>, <cit.>, <cit.>, <cit.>.
Let us consider a principal G-bundle ℱ_G over a super Riemann surface
X and its reduction ℱ_B_- to the Borel subgroup B_-.
We assume that it has a flat superholomorphic connection determined by ∇̂.
Suppose ∇̂' is another superholomorphic connection, which preserves ℱ_B_-. Then the
difference ∇̂'-∇̂ has a structure of superconformal field of dimension 1/2 with values in the associated bundle
𝔤_ℱ_B_-.
Following the purely even case we define an open B_--orbit
O_S⊂[𝔫_-,S, 𝔫_-,S]^⊥/𝔟_-,S, consisting of vectors stabilized by N_- and such that all the simple root components of these vectors with respect to the adjoint action of H are non-zero. Here the orthogonal complement is taken with respect to the nondegenerate form for a given simple Lie superalgebra.
<cit.>
A G-superoper on a super Riemann surface is the triple (ℱ, ℱ_B_-, ∇), where ℱ is a principal
G-bundle, ℱ_B_- is its B_--reduction and ∇ is a long superderivative on ℱ, such that
∇/ℱ_B_- takes values in O_ℱ_B.
Therefore, locally on the open subset U, with coordinates (z, θ), with respect to the
trivialization of ℱ_B, the structure of the
superholomorphic connection is:
∇̂=D_θ+∑^r_i=1a_i(z, θ)e_i+b(z,θ),
where each a_i(z, θ) is a nonzero function of opposite parity to e_i and b(z, θ) is an odd 𝔟_-,S-valued function.
§.§.§ Miura superopers and their pure even counterparts
Let us start with the definition of a Miura superoper.
A Miura G-superoper on X is a quadruple
(_G,∇,_B_-,_B_+), where (_G,∇̂,_B_-) is a
meromorphic G-oper on SC^* and _B_+ is a reduction of
the G-bundle _G to B_+ that is preserved by the
connection ∇̂.
From now on we set X=SC^*. Then one can put the Miura superoper in the following canonical form.
For any Miura G-oper on SC^*, there exists a
trivialization of the underlying G-bundle _G on an open
dense subset of SC^* for which the superoper connection has the form
∇̂=D_θ+∑^r_i=1g_i(z,θ)α̌_i+∑^r_i=1a_i(z,θ)e_i,
where g_i(z,θ), a_i(z,θ)∈ S(z)[θ], so that g_i(z,θ) are all even and a_i(z,θ) are opposite in parity to e_i.
The proof of this proposition is similar to the one in <cit.> with the use of the cell partition via super extension of Weyl group in <cit.>, see also <cit.>.
From now on we assume that {D_θa_i(z,θ)}_i∈ I_w, as well as {a_i(z,θ)}_i∈ I_g∪ I_b, are invertible.
A Ẑ-twisted G-superoper on SC^* is a G-superoper
that is equivalent to the constant element A(θ, z)=θẐ, where Ẑ∈𝔤_S ⊂𝔤(z,θ) under the gauge action of G(z, θ).
For simplicity from now on we will assume that Ẑ∈𝔥_S is regular semisimple. One can generalize most of the results to Z∈𝔟_+,S as it was done in <cit.>.
Note that instead one could consider any element Z'=ξ+θ Z, where ξ is an odd element of 𝔤_S, instead of θ Z in the above definition, but one can remove ξ by the gauge transformation given by exp(θξ)∈ G(θ).
Now we want to get rid of extra odd variables to see the relation of superopers to opers on ℙ^1 for a certain reductive group.
First, we will get rid of the θ variable. Let us represent the superoper connection as
∇̂=D_θ+θ M(z) +N(z)
making the dependence on θ explicit. The gauge transformations can be factorized this way:
g(z,θ)=(1-θ R(z))U(z),
where R(z)∈𝔤(z), U(z)∈ G(z). There is a unique R(z), namely R(z)=N(z), such that
g(z,θ)^-1∇̂g(z, θ)=∂_θ+θ U^-1(z) ∇̃U(z),
where the meromorphic connection ∇̃ is given locally by a differential operator
∇̃=∂_z+1/2[N(z),N(z)]+M(z).
Thus we obtain the following Proposition.
The Z-twisted condition for a Miura G-superoper just implies that the connection
∇̃ on ℙ^1 is gauge equivalent to a constant connection Ẑ∈𝔤.
Now let us proceed to the Miura G-superoper, so that the resulting connection is as in (<ref>), and let us construct such a connection
∇̃. This way, we obtain a G-connection on ℙ^1.
Now, let us apply the map π: S→ℂ, which strips dependence on all the odd parameters. This way, we obtain the connection ∇=π (∇̃) on a principal G̅-bundle, which locally has the following form:
∇≡π(∇̃)=∂_z+u(z)+∑_i∈ I_w L_i(z)e_i+1/2∑_i,j∈ I_b∪ I_g L_i(z)L_j(z)[e_i, e_j],
where {L_i(z)}_i=1, …, r are nonzero rational functions,
so that in the original connection (<ref>):
π (a_i(z,θ))=
{[ L_i(z) if i∈ I_w; θ L_i(z) if i∈ I_b∪ I_g. ].
That gives rise to the following definition.
We say that the quadruple
(ℱ_G̅, ℱ_B̅_+, ℱ_B̅_-,∇),
where ∇ is a connection on a principal bundle
ℱ_G̅ over
ℙ^1 together with reductions ℱ_B̅_± to Borel subgroups B̅_± is a Miura oper associated to Miura G-superoper
(ℱ_G, ℱ_B_+, ℱ_B_-,∇̂).
In particular, we notice that the differential operator which isolates the Cartan part of (<ref>):
∇^H=∂_z+u(z)
defines an H-connection on ℙ^1, which we call a Cartan connection
∇^H associated to Miura oper ∇. We say that H-connection is Z-twisted if it is gauge equivalent to the constant connection ∂_z+Z, where Z∈𝔥.
i) We note here that in general, even for distinguished Dynkin diagrams, the collection
{α̌_i}_i=1, …, r, [e_i,e_j] for all i,j∈ I_g∪ I_b and e_i for all i∈ I_w does not give Chevalley generators producing B̅_+, but instead a Borel subgroup of a smaller reductive group.
ii) If the superoper was Z-twisted, the resulting connection ∇ is gauge equivalent to π(Z).
§ MIURA-PLÜCKER OPERS, QP-SYSTEMS, AND QQ-SYSTEMS
§.§ Z-twisted Cartan connections
Let us fix the component notation for the Cartan connection, associated to a given superoper and the corresponding twist element Z:
u(z)=∑_i∈ I_w∪ I_gu^i(z)α̌_i+∑_i∈ I_bu^i(z)α̌_i/2; Z=∑_i∈ I_w∪ I_gζ_iα̌_i+∑_i∈ I_bζ_iα̌_i/2
This leads to the following proposition.
If the Cartan connection parametrized by u(z) in (<ref>) is Z-twisted, then there exist rational functions {p^i(z)}_i=1, …, r such that
u^i(z)=ζ_i+ ln'[p^i(z)], i=1, …, r.
One can view it as a first approximation to the Z-twisted condition for the Miura G̅-oper connection (<ref>).
Now let us consider in detail the rank r=1 examples of Z-twisted Miura opers.
§.§ Low rank cases
§.§.§ Z-twisted Miura SL(2)-opers
In this case we are dealing with a meromorphic SL(2)-connection on an SL(2)-bundle
which has the following form:
∇=∂_z+u(z)α̌+L(z)e
where e, f, α̌ are the Chevalley generators of 𝔰𝔩(2). The Z-twisted condition states that there exists U(z)∈ B_+(z), such that
U(z)∇ U^-1(z)=∂_z+Z.
where Z=ζα̌.
We can represent the resulting group element as
U(z)=e^q(z)ep(z)^α̌,
where p(z), q(z)∈ℂ(z). Looking at the coefficient of α̌ in (<ref>), we obtain
u(z)=ζ+ln '[p(z)]
The e-coefficient gives the equation:
q'(z)+2ζ q(z)=p^2(z)L(z)
Representing
q(z)=q_-(z)/q_+(z),
so that
q_±(z)∈ℂ[z], q_+(z) is monic, we obtain the following equations:
W(q_-,q_+)(z)+2ζ q_-(z)q_+(z)=Λ(z),
so that
Λ(z)=q_+^2(z)p^2(z)L(z)
is a polynomial.
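For the reader's convenience we spell out the intermediate step: writing q(z)=q_-(z)/q_+(z) gives q'(z)=W(q_-,q_+)(z)/q_+^2(z), so multiplying (<ref>) by q_+^2(z) turns q'(z)+2ζ q(z)=p^2(z)L(z) into W(q_-,q_+)(z)+2ζ q_-(z)q_+(z)=q_+^2(z)p^2(z)L(z)=Λ(z), which is precisely (<ref>).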
Expanding u(z)=ũ(z)/Λ̃(z) and
p(z)=p_-(z)/p_+(z), where p_±(z) do not have common roots and p_-(z) is chosen to be monic, we obtain:
Λ̃(z)=p_+(z)p_-(z).
Note, that given the factorization of denominator Λ̃, the numerator
is determined uniquely:
ũ(z)=W(p_-,p_+)(z)+2ζ p_-(z)p_+(z).
Thus let us call the following system of equations:
W(q_-,q_+)(z)+2ζ q_-(z)q_+(z)=q_+^2(z)p_-^2(z)L(z)/p_+(z)^2,
p_+(z)p_-(z)=Λ̃(z)
the pq-system for 𝔰𝔩(2).
There is a one-to-one correspondence between Z-twisted Miura SL(2)-opers with the connection (<ref>) and solutions of the pq-system (<ref>), where p(z)=p_-(z)/p_+(z) as well as q(z)=q_-(z)/q_+(z) are irreducible fractions, so that ũ(z)=W(p_-,p_+)(z)+2ζ p_-(z)p_+(z).
A simplification of the 𝔰𝔩(2) pq-system is given by the following identification
Λ̃(z)=p_+(z)=q_+(z),
which leaves just one equation:
W(q_-,q_+)(z)+2ζ q_-(z)q_+(z)=Λ(z),
so that
u(z)=ζ-ln'(q_+(z)) and L(z)=Λ(z) is a polynomial. That is known as the 𝔰𝔩(2) qq-system, which is in one-to-one correspondence with the SL(2)-opers with regular singularities: the positions of the singularities on ℙ^1 are given by the roots of the polynomial Λ(z).
Under nondegeneracy conditions, namely that q_+(z) has distinct roots and has no common roots with Λ(z), a simple calculation shows that there is a
bijection between such nondegenerate solutions of the 𝔰𝔩(2) qq-system and the Bethe equations of the 𝔰𝔩(2) Gaudin model:
2ζ+∂_zlog[Λ(z)(z-w_ℓ)^2]|_z=w_ℓ=0,
ℓ=1, …, deg(q_+(z)).
§.§.§ Z-twisted Miura SL(1|1) opers and abelian connections
Although 𝔰𝔩(1|1) is not a simple superalgebra because of its nontrivial center, one can apply the notion of oper to the SL(1|1)-group, which technically corresponds to the grey node of the Dynkin diagram, and it is still useful to consider it.
We see that in this case the connections ∇̅ and ∇ reduce to the abelian connection corresponding to the central element α̌ of SL(1|1):
∇=∂_z+u(z)α̌.
The Z-twisted condition leads to the equation (<ref>). Expressing u(z)=ũ(z)/Λ̃(z),
one can say that this is a particular case of the first example when L(z)=0, so one can call the equation
p_+(z)p_-(z)=Λ̃(z)
the 𝔰𝔩(1|1) pq-system, although the q-part is absent here.
There is a one-to-one correspondence between SL(1|1)-opers (<ref>) and the solutions to the equation (<ref>), so that p_±(z)
have no common roots and
u(z)=Λ̃(z)^-1[W(p_-,p_+)(z)+2ζ p_-(z)p_+(z)].
The equation (<ref>) actually gives the Bethe ansatz solutions for 𝔤𝔩(1|1) Gaudin model. Namely, let's suppose we can re-express
Λ̃(z)=ln'(Λ(z))π(z),
where π(z)=∏^n_k=1(z-z_k) and z_k are the distinct roots of Λ(z)=∏^n_k=1(z-z_k)^d_k.
In this case Λ determines the weights for the appropriate 𝔤𝔩(1|1) representation and the equation (<ref>) is equivalent to the Bethe ansatz equations for the 𝔤𝔩(1|1) Gaudin model <cit.>, <cit.>:
∑^n_k=1 d_k/(w_j-z_k)=0, j=1, …, deg(p_+(z)),
where {w_j} are the roots of p_+(z).
§.§.§ Z-twisted Miura OSP(1|2)-opers
Consider an 𝔬𝔰𝔭(1|2)-triple: e, f, α̌, where e, f are odd Chevalley generators. The corresponding OSP(1|2)-Miura oper connection is:
∇=∂_z + α̌/2u(z)+1/2L^2(z)[e,e].
Notice that this is exactly the Miura SL(2)-oper we considered above, since α̌/2, [e,e] are Chevalley generators of 𝔰𝔩(2) subalgebra. The only difference is that we have a square as a coefficient of e^2.
Thus we have the following Proposition.
There is a one-to-one correspondence between Z-twisted OSP(1|2)-opers and Z-twisted Miura SL(2) opers where Λ(z)=L^2(z), L(z)∈ℂ[z].
That correspondence was first noted in <cit.>: in particular, it was shown that the resulting Bethe ansatz equations coincide with the Bethe ansatz equations for the 𝔬𝔰𝔭(1|2) Gaudin model <cit.>, <cit.>.
§.§ Miura-Plücker opers
The Z-twisted condition is a complicated one to solve. Instead, one can look at an intermediate object. We have already seen the first approximation to that condition given by Proposition <ref>. Now, we introduce a useful object, known as the Z-twisted Miura-Plücker G̅-oper, which is the next iteration approximating the Z-twisted condition. In the purely even case, for Miura G-opers with regular singularities, it was introduced in <cit.> following the q-deformed version in <cit.>.
Let us consider the induced Miura G̅-oper B̅_+-bundle connections on 𝒱_i: the associated bundles, corresponding to highest weight irreducible modules of 𝔤̅: i) V_ω_i if i∈ I_w∪ I_g; ii) V_2ω_i when i∈ I_b.
Let us define a B_+-subbundle 𝒲_i, the rank of which will depend on i.
* If i∈ I_w, W_i is spanned by the line subbundles
ℒ_i, ℒ̃_i, which correspond to the vectors of weight ω_i, ω_i-α_i.
* If i∈ I_b, let 𝒲_i be spanned by
ℒ_i, ℒ̃_i corresponding to the vectors of weight 2ω_i, 2ω_i-2α_i.
* If i∈ I_g we take W_i to be a line bundle which correspond to the vector of highest weight ω_i.
Let ∇_i be the induced connection on W_i.
We say that Miura G̅-oper is Z-twisted Miura-Plücker if there exists v(z)∈ B_+(z) such that
∇_i=v(z)(∂_z+Z)v(z)^-1|_W_i=v_i(z)(∂_z +Z_i) v_i(z)^-1,
where v_i(z) = v(z)|_W_i and Z_i = Z|_W_i.
The element v(z) is not uniquely determined by the Miura-Plücker oper. Let Ñ_+(z) be the subgroup generated by all even commutators [e_i, e_j], i≠ j.
We have the following proposition, which gives equivalence classes of such v(z).
For a given v(z) from the Definition <ref> any element of coset
v(z)HÑ_+(z)
also satisfies (<ref>).
Following <cit.>, we call such a coset a framing of Miura-Plücker oper.
The Miura-Plücker datum is a pair (∇, v(z)Ñ_+(z)) consisting of Miura-Plücker oper and related framing.
One can fix the corresponding representative in the coset as follows:
v(z)=∏_i∈ I_w∪ I_gp^i(z)^-α̌_i∏_j∈ I_bp^j(z)^-α̌_j/2∏_i∈ I_w
e^-q^i(z)e_i∏_j∈ I_be^-1/2q^j(z)[e_j,e_j].
Now we will explore the condition (<ref>).
For the Cartan part of ∇ we obtain the following equations:
u^i(z)=ζ_i+ln'(p^i(z)).
Now we will show how it works off-diagonal in each of the cases:
* Let i∈ I_w. Then we have the following condition:
We first compute the matrix of v(z) and Z acting on the
two-dimensional subspace W_i. The following calculation gives
v(z)|_W^i=
[ p^i(z)^-1 0; 0 p^i(z)∏_j≠ i, j∈ I_w∪ I_g p^j(z)^a_ji∏_j≠ i, j∈ I_b p^j(z)^a_ji/2 ][ 1 - q^i_-(z)/q^i_+(z); 0 1 ]
and
Z^H|_W_i=[ ζ_i 0; 0 -ζ_i-∑_j≠ ia_jiζ_j ].
That implies the following equation from the top right corner of 2× 2 block:
∂_z q^i(z)+⟨ Z, α_i⟩ q^i(z)=L_i(z)[p^i(z)]^2∏_j≠ i, j∈ I_w∪ I_g p^j(z)^a_ji∏_j≠ i, j∈ I_b p^j(z)^a_ji/2.
* If i∈ I_g, we do not have any extra equations in addition to (<ref>).
* If i∈ I_b, we are dealing with the same 2× 2 block as in i∈ I_w. Thus we have the following equation:
∂_z q^i(z)+⟨ Z, α_i⟩ q^i(z)=[L_i(z)p^i(z)]^2∏_j≠ i, j∈ I_w∪ I_g p^j(z)^2a_ji∏_j≠ i, j∈ I_b p^j(z)^a_ji.
§.§ pq- and qq-systems for the Lie superalgebra 𝔤
§.§.§ Definition of the pq-system and relation to Z-twisted Miura-Plücker opers.
Let us choose the simple root system of the Lie superalgebra 𝔤 with the index set I=I_w∪ I_b∪ I_g and
the datum of rational functions {L_i(z)}_i∈ I, polynomial functions {Λ̃_i(z)}_i∈ I and rational functions {p^i(z), q^i(z)}_i∈ I, which are irreducible fractions:
p^i(z)=p^i_-(z)/p^i_+(z), q^i(z)=q^i_-(z)/q^i_+(z), i∈ I.
We call the following system of equations:
∂_z q^i(z)+⟨ Z, α_i⟩ q^i(z)=F_i(z), i∈ I_w∪ I_b,
F_i(z)={[ L_i(z)[p^i(z)]^2∏_j≠ i, j∈ I_w∪ I_g p^j(z)^a_ji∏_ j∈ I_b p^j(z)^a_ji/2 if i∈ I_w; [L_i(z)p^i(z)]^2∏_ j∈ I_w∪ I_g p^j(z)^2a_ji∏_j≠ i, j∈ I_b p^j(z)^a_ji if i∈ I_b ].
p^i_+(z)p^i_-(z)=Λ̃_i(z), i=1, …, r,
the pq-system associated to superalgebra 𝔤.
Using this definition we can restate the result of the previous section as follows.
There is a bijection between the data of Z-twisted Miura-Plücker opers and the solutions of the generalized pq-system, so that
q^i_+(z),q^i_-(z) as well as p^i_+(z),p^i_-(z) for all i=1,…, r have no common roots and
u^i(z)=ũ^i(z)/Λ̃_i(z)=ζ_i+ln'[p^i(z)].
We remark here that we do not make any assumptions/nondegeneracy conditions so far on the roots/poles of the data {Λ̃_i}, {L_i(z)}.
In the following we will see examples when {Λ̃_i(z)}_i∈ I_g depends on q^i(z). Notice also that there are no equations on {L_i(z)}_i∈ I_g. Those will be taken into account when the full Z-twisted condition is implemented.
§.§.§ Reduction to the qq-system, nondegeneracy conditions and Bethe equations
To make contact with the Bethe ansatz equations for the Gaudin model, we will impose the following condition on the pq-system: we require that the reduction of the pq-system to the simple even subgroups of 𝔤̅ reproduces the qq-system for Miura opers with regular singularities <cit.>. Namely, we assume:
Λ̃_i(z)=p_+^i(z)=q_+^i(z) for i∈ I_w∪ I_b.
Also, we redefine:
p^i_±(z)≡ q^i_±(z), for i∈ I_g.
We call the resulting system of equations on the pq-system data, the qq-system associated to 𝔤̅.
W(q^i_-, q^i_+)(z)+⟨ Z, α_i⟩ q_+^i(z)q_-^i(z)=F_i(z), i∈ I_w∪ I_b,
F_i(z) = {[ L_i(z)
∏_j≠ i, j∈ I_w q_+^j(z)^-a_ji∏_ j∈ I_b q_+^j(z)^-a_ji/2∏_ j∈ I_g q^j(z)^a_ji if i∈ I_w; [L_i(z)]^2∏_ j∈ I_w q_+^j(z)^-2a_ji∏_j≠ i, j∈ I_b q_+^j(z)^-a_ji∏_ j∈ I_g q^j(z)^2a_ji if i∈ I_b ].
q^i_+(z)q^i_-(z)=Λ̃_i(z), i∈ I_g.
This way we reduced the number of independent functions to {L_i(z)}_i∈ I_w∪ I_b and {Λ̃_i(z)}_i∈ I_g.
Under certain conditions, one can write solutions to (<ref>) in terms of algebraic equations. First, we introduce the nondegeneracy conditions:
A solution to the qq-system is called nondegenerate if F_i(z) has no common roots with q^i_+(z) and all the roots
of q^i_+(z) are distinct for all i. Moreover, p_+^i(z) and p_-^i(z) have no common roots.
Then the following Proposition is true.
If Z is regular semisimple, there is a bijection between nondegenerate solutions to (<ref>) and the following algebraic equations
for the roots {w^i_ℓ}_ℓ=1, …, deg(q^i_+(z)) of q^i_+(z):
⟨α_i,Z⟩+∂_zlog[F_i(z)(z-w^i_ℓ)^2]|_z=w^i_ℓ=0,
i=1,…, r; ℓ=1, …, deg(q^i_+(z)).
The equations (<ref>) are known in particular cases as Bethe equations for the Gaudin model associated with simple Lie algebras; we will refer to them as Bethe equations of even type.
The algebraic equations emerging from (<ref>), which are solved just by division, are Bethe equations of odd type. Of course, in the case when 𝔤 is a simple Lie algebra, one has only equations of even type. The following proposition follows.
In the case when 𝔤 is a simple Lie algebra, i.e. I=I_w, the qq-system (<ref>),(<ref>) reduces to the well-known qq-system from <cit.>, <cit.>, imposing the condition that L_i(z)=Λ_i(z) is a polynomial:
W(q^i_-, q^i_+)(z)+⟨ Z, α_i⟩ q_+^i(z)q_-^i(z)=Λ_i(z)
∏_j≠ i q_+^j(z)^-a_ji, i=1, …, r,
where a_ji is a Cartan matrix of 𝔤.
The corresponding G-oper connections are called Miura G-opers with regular singularities, where the positions of the singularities are regulated by the polynomials {Λ_i(z)}_i=1,…, r <cit.>. Locally the connection has the form
∇=∂_z+Z-∑^r_i=1ln'[q^i_+(z)]α̌_i +∑^r_i=1Λ_i(z)e_i.
§ Z-TWISTED MIURA OPERS AND THE EXTENDED WEYL GROUP
§.§ Overview of the pure even case
In <cit.>, in the case of simple Lie algebras for opers with regular singularities, we have shown that under the nondegeneracy conditions Z-twisted Miura-Plücker opers turn out to be Z-twisted Miura opers, i.e., the solutions of the qq-system completely determine Z-twisted Miura opers.
Let us look at the related Z-twisted oper for regular semisimple Z. We find that there are precisely |W| (W being the Weyl group of 𝔤) related Z-twisted Miura opers, each described by the solutions of the corresponding qq-systems.
The action of the Weyl group on the space of solutions of such qq-systems is given using the following transformations, corresponding to elementary Weyl reflections w_i:
Z→ w_i(Z) , q_±^j(z)→{[ q^i_∓(z) if j=i; q^j_±(z) if j≠ i ]..
On the level of Miura G-oper connections that can be achieved by special gauge transformations from B_-(z):
∇→ e^μ_i(z)f_i ∇ e^-μ_i(z)f_i, μ_i(z)=Λ_i(z)^-1[∂_zlog(q^i_-(z)/q^i_+(z))+⟨α_i,Z⟩],
which we called Bäcklund transformations in <cit.>, <cit.>.
These transformations were previously discussed in the context of Bethe ansatz equations <cit.>, <cit.> leading to the so-called “populations" of Bethe ansatz equations.
§.§ Conjectures for simple superalgebras
In the case of a superalgebra 𝔤, Weyl reflections of even roots generate the Weyl group W. One can construct a larger group W̃ by adding the reflections corresponding to the odd roots, which change the Dynkin diagram for 𝔤 <cit.>, <cit.>. In particular, applying such reflections to a given system of simple roots, one can generate all systems of simple roots for a given superalgebra 𝔤. However, these are not automorphisms of 𝔤; of course, one cannot lift these transformations to G.
The relation between Z-twisted Miura opers corresponding to a given oper in the pure even case, as discussed in the previous subsection, motivates introducing the following notion, generalizing the one from <cit.>.
Consider two qq-systems based on Dynkin diagrams whose
simple root systems are related by a simple reflection s_i∈W̃.
We say that two solutions of such qq-systems are i-composable if they are obtained from each other by
the transformation (<ref>) accompanied by certain transformations
{L_i}, {Λ̃_i}→{L^w_i_i}, {Λ̃^w_i_i}. We call two solutions of qq-systems w-composable, where w∈W̃, if a sequence of such transformations relates them. We call the related Z-twisted Miura-Plücker opers w-composable if their datum expressed via the solution of the qq-system is w-composable.
Notice that we have still not specified the transformations of
{L_i(z)}_i∈ I_b∪ I_w, {Λ̃_i(z)}_i=1, …, r, for the w-composable qq-systems because they may vary from one qq-system to another, unlike in the purely even case. Also, those may depend on {q^j_±(z)}_j=1,…, r if j and i are adjacent on the Dynkin diagram.
In fact, we will see below such an example of w-composable family of qq-systems associated with 𝔰𝔩(n|m).
Let us formulate the following conjecture, which is the analogue of the main results of <cit.>.
i) Under certain nondegeneracy conditions, Z-twisted Miura-Plücker opers are Z-twisted Miura opers for certain choices of {L_i(z)}_i∈ I_g.
ii) There exist data {L^w_i(z)}_i∈ I and {Λ̃^w_i(z)}_i∈ I for the qq-systems associated to 𝔤 such that there exists a family of w-composable Z-twisted Miura opers, which are described by the Bethe equations of the Gaudin model for a certain simple superalgebra ^L𝔤.
Indeed, this way we relate various Z-twisted opers for different Dynkin diagrams, making an object which one may call a generalized Z-twisted superoper, unifying all w-composable Z-twisted Miura opers.
We do not know what ^L𝔤 could be for the cases beyond 𝔤= 𝔰𝔩(m|n): in the next subsection we will discuss the discovered class of such w-composable qq-systems.
§.§ What is known: qq-systems for 𝔰𝔩(n|m) and Gaudin models
In <cit.> a certain version of qq-system was considered in the case of 𝔰𝔩(n|m) and Z=0. For a given Dynkin diagram for 𝔤 one has
Wr(q^i_-,q^i_+)(z)=Λ_i(z)q_+^i+1(z)q^i-1_+(z) if i∈ I_w,
q^i_-(z)q^i_+(z)=Λ̃_i(z), if i∈ I_g.
Here
Λ_i(z)=T_i(z)/T_i+1(z); Λ̃_i(z)=ln'(T_i(z)T_i+1(z)q^i-1_+(z)/q^i+1_+(z))π_i(z)q^i+1_+(z)q^i-1_+(z),
so that {Λ_i(z)}, {T_i(z)} are polynomials and π_i(z)=∏_k(z-z_k), where z_k are distinct roots of T_i(z)T_i+1(z).
The solution of this qq-system under nondegeneracy condition is in 1-to-1 correspondence with Bethe ansatz equations for 𝔤𝔩(n|m) Gaudin model.
The authors of <cit.> produced the w_i-composable solutions, which they call populations as in original papers <cit.>.
We note that in this case the extended Weyl group W̃ can be identified with S_n+m, and in the defining representation it can be realized in the standard way using permutation matrices. The architecture of the reproduction procedure is linked to a pseudo-differential operator, which for the distinguished Dynkin diagram (where one grey root separates the Dynkin diagrams of 𝔰𝔩(n) and 𝔰𝔩(m)) looks as follows:
R(z)=∏^n_i=1(∂_z-log'[T_i(z)q^i-1_+(z)/q^i_+(z)])∏^m+n_i=n+1(∂_z+log'[T_i(z)q^i-1_+(z)/q^i_+(z)])^-1,
where q_+^0(z)=q_+^m+n(z)=1.
R^w(z)=
∏^n+m_i=1(∂_z-s_i(w)log'[T^w_i(z)q^w, i-1_+(z)/q^w,i_+(z)])^s_i(w).
Here s_i(w)=± 1 corresponds to the permutation w applied to the original s_i in R(z) for the distinguished Dynkin diagram. By the extra index w we denote the qq-system corresponding to the w-transformed Dynkin diagram. It turns out that w-composability of the related solutions implies the identification of R^w and R <cit.>, using the following transformations corresponding to elementary Weyl reflections w_i (this also implies the proper formulas for the T^w_i(z)):
(∂_z-s_i(w)log'[T^w_i(z)q^w, i-1_+(z)/q^w,i_+(z)])^s_i(w)(∂_z-s_i+1(w)log'[T^w_i+1(z)q^w, i_+(z)/q^w,i+1_+(z)])^s_i+1(w)=
(∂_z-s_i+1(w)log'[T^s_iw_i(z)q^w, i-1_+(z)/q^w,i_-(z)])^s_i+1(w)(∂_z-s_i(w)log'[T^s_iw_i+1(z)q^w, i_-(z)/q^w,i+1_+(z)])^s_i(w).
This is a generalization of the original work <cit.>, where it was done for the case of 𝔰𝔩(n) and differential operators.
At the same time, in that purely even case the shortcut (see <cit.>, <cit.>, <cit.>) between the Gaudin model and opers was achieved by taking a certain analogue of the determinant of the KZ connection:
D^KZ_k,l=δ_k,l∂_z-∑_ν=1,…, NΦ^ν_k,l/z-z_ν, k,l=1, …, n,
where Φ^ν_k,l∈ U(𝔤𝔩(n))^⊗ N is the operator acting as the 𝔤𝔩(n) generator e_k,l in the ν-th place in that tensor product. This determinant is understood in a formal way, via the sum over permutations. The coefficients in the expansion of the resulting differential operator are the Gaudin Hamiltonians. The factorization of that differential operator of type R(z) corresponds to bringing that operator to the normal form in the sense of <cit.>, <cit.>, which coincides with the operator-valued Miura oper connection; in this way the determinant, when applied to a Bethe vector, gives exactly the factorization of R(z).
It is natural to expect that a similar formula should exist in the super case, since (<ref>) is very similar to a Berezinian of the corresponding differential operator. To obtain the R^w factorization one can apply the Weyl transformation w∈W̃ represented as a permutation w̃ on the (m|n) superspace:
Ber(w̃[D^KZ_k,l]w̃^-1).
One has to emphasize that w here is not an element of the supergroup SL(m|n), but only a formal permutation. Bringing the D^KZ-operator to various versions of the analogue of the normal form for an (m|n)× (m|n) supermatrix should produce the oper connection which we discussed: after applying the formal Berezinian one obtains the R^w(z) operator.
We mention here a related paper <cit.> studying the center of the affine superalgebra 𝔤𝔩(n|m), where Berezinian formulas naturally emerged; the relation between the center of affine algebras and opers in the purely even case was thoroughly investigated (see e.g. <cit.> for a review).
We note that the generalization of such a qq-system to the case of 𝔤=𝔬𝔰𝔭(m|2n) was introduced in <cit.>, based on the appropriate Bethe equations:
Wr(q^i_-,q^i_+)(z)=Λ_i(z)∏_j≠ i [q^j_+(z)]^-c_ij, if i∈ I_w∪ I_b
q^i_-(z)q^i_+(z)=Λ̃_i(z), if i∈ I_g.
Here
Λ̃_i(z)=log'[Λ_i(z)∏_jq^j_+(z)^-c_ij]π_i(z)∏_j, c_ij≠ 0q^j_+(z),
where {c_ij} is the corresponding Cartan matrix, Λ_i(z)∈ℂ[z] for all i=1, …, r, so that π_i is the denominator of the fraction log'(Λ_i(z)) of minimal possible degree.
The construction of the w-composability (or reproduction procedure) can be formulated in the language of pseudo-differential operators, similarly to the 𝔰𝔩(n|m) case, but it is much more involved.
§.§ (G,q)-opers for supergroups
There is a generalization of the notion of oper for simple simply-connected Lie groups to the (G,q)-oper: the difference analogue of the G-oper connection, which uses the natural multiplicative action of ℂ^×_q on ℙ^1, i.e. z→ qz (such a difference connection could be defined for the additive action as well).
That (G,q)-oper is again a triple
(ℱ_G, ℱ_B_-, A), where ℱ_G, ℱ_B_- are the principal G-bundle on ℙ^1 and its B_--reduction correspondingly. The q-oper connection A is an element of Hom(ℱ_G, ℱ^𝓆_G), where ℱ^𝓆_G is the pull-back bundle with respect to the ℂ^×_q-action. Here A belongs to the Coxeter cell B_-(z)c(z)B_-(z) in the Bruhat decomposition of G(z), where c(z) is a lift of the Coxeter element to G(z), which as usual is a product of the lifts of simple Weyl reflections to G(z).
The Miura condition adds the requirement that there exists another reduction ℱ_B_+ to the Borel subgroup which A preserves.
The Z-twist condition means, as in the differential case, gauge equivalence of the q-connection A to the constant q-connection Z∈ H.
In <cit.> we used the notion of a Z-twisted Miura-Plücker (G,q)-oper to find a correspondence with the q-deformation of the qq-system, known as the QQ-system. Following that path we constructed the explicit correspondence between Z-twisted (G,q)-opers and the XXZ spin chain models (XXX spin chains if we use the additive action), which are deformations of the Gaudin model, thereby constructing an example of the q-Langlands correspondence <cit.>, <cit.>.
Following the same principles, one can build a q-oper analogue of ∇̅, ∇ for a given simple supergroup and a chosen Dynkin diagram, as an element of:
B_-(z)[∏_i∈ I_w w̃_i(z)∏_j≤ k, j,k∈ I_g∪ I_bw̃_jk(z)]B_-(z),
where B_- stands for the Borel subgroup for a chosen Dynkin diagram either for
supergroup G or the reductive group G̅, and
w̃_i(z), w̃_jk(z)
stand for the lifts of Weyl reflections corresponding to the roots α_i, α_j+α_k correspondingly.
The explicit expression for a Miura (G,q)-oper may be quite complicated because of the absence of the usual Bruhat decomposition for simple supergroups. At the same time, following the results of <cit.>, one can obtain that the corresponding
(G̅,q)-oper connection has the following form:
∏^r_i=1 [r_i(z)]^α̌_i∏_i∈ I_we^R_i(z) e_i∏_j,k∈ I_g∪ I_be^R_ij(z) [e_j,e_k],
where r_i(z), R_jk(z) are rational functions.
Using the expression (<ref>), one can relate the notion of a Z-twisted Miura-Plücker (G̅,q)-oper and the QQ-systems for simple superalgebras, as is done in the differential case. One may expect that, using the analogue of the notion of w-composable QQ-systems, this should lead to Bethe ansatz equations for XXX and XXZ models related to simple superalgebras. In the case of 𝔤𝔩(n|m) such examples are already known <cit.>; they are based on the study of rational difference operators, a deformation of the constructions of <cit.>.
|
http://arxiv.org/abs/2307.02211v1
|
20230705113717
|
Object Recognition System on a Tactile Device for Visually Impaired
|
[
"Souayah Abdelkader",
"Mokretar Kraroubi Abderrahmene",
"Slimane Larabi"
] |
cs.CV
|
[
"cs.CV"
] |
organization=USTHB University,
addressline=BP 32 El Alia,
city=Algiers,
postcode=16111,
country=Algeria
People with visual impairments face numerous challenges when interacting with their environment. Our objective is to develop a device that facilitates communication between individuals with visual impairments and their surroundings. The device will convert visual information into auditory feedback, enabling users to understand their environment in a way that suits their sensory needs.
Initially, an object detection model is selected from existing machine learning models based on its accuracy and cost considerations, including time and power consumption. The chosen model is then implemented on a Raspberry Pi, which is connected to a specifically designed tactile device. When the device is touched at a specific position, it provides an audio signal that communicates the identification of the object present in the scene at that corresponding position to the visually impaired individual.
Conducted tests have demonstrated the effectiveness of this device in scene understanding, encompassing static or dynamic objects, as well as screen contents such as TVs, computers, and mobile phones.
§ INTRODUCTION
People with visual impairments face numerous challenges in their daily lives. They are unable to perceive the world in the same way as those with sight and encounter multiple difficulties, including orientation, obstacle detection and avoidance, limited mobility, and an inability to recognize shapes and colors of objects in their surroundings. In addition to these challenges, they are completely excluded from understanding and interacting with the real world scene.
Numerous technological advancements have been made to assist people with visual impairments. Among the different technological solutions deployed to address this specific need, computer vision-based solutions appear as one of the most promising options due to their affordability and accessibility.
Systems with human-scene interaction generate outputs after processing the captured scene. They consist of a set of computer vision and machine learning techniques aimed at improving the user's life in various activities such as content interpretation, navigation, etc. Generally, these systems process the data received from the real world using depth or RGB sensors and transform them into instructions and signals <cit.> <cit.>.
The goal of this work is to assist individuals with visual impairments in perceiving the information contained in an image by displaying the coded scene on a tactile device. They can explore the image by touching the pins on the device, with each pin representing a corresponding object in the scene.
The developed system prototype is illustrated in Figure <ref>.
This paper is organized as follows. In the next section, we review the main deep learning methods for object detection and compare these methods to select the most accurate model in terms of time and precision.
The third section is devoted to the proposed system and includes the design of the device, which takes the identities of recognized objects as input and outputs a signal for human-machine interaction.
In Section 4, we present the implementation of the system on a Raspberry Pi board and the conducted tests. The results are presented and discussed.
We conclude with some future perspectives.
§ RELATED WORKS
Object detection has been studied extensively, and recent models achieve high accuracy. In <cit.> <cit.>, two major categories of object detection networks have been identified <cit.>:
Two-stage networks were historically the first ones used for object detection <cit.>. They employ two successive neural networks, called stages: the first one is a Region Proposal Network (RPN), which proposes potential bounding boxes; the second stage regresses the position and label of the bounding boxes. Networks in this category include R-CNN <cit.>, which was later improved by Fast R-CNN <cit.> and Faster R-CNN <cit.>. These networks perform better in terms of precision, but at the expense of inference speed.
One-stage networks have a single stage responsible for generating both bounding boxes and labels. Networks in this category include RetinaNet <cit.>, SSD (Single-Shot Multibox Detector) <cit.>, and the YOLO (You Only Look Once) family of networks <cit.>. These networks are less accurate in terms of precision, but are faster.
Object detection models based on Transformers use neural network architectures that rely on attention mechanisms, enabling them to consider different parts of the image at various levels of abstraction <cit.>.
These methods have recently demonstrated impressive performance on object detection tasks, with results comparable to or surpassing those of traditional object detection methods based on CNNs <cit.>.
Once objects are detected, they serve to build systems to assist visually impaired individuals in various ways including understanding the scene from depth images <cit.> <cit.> <cit.>, identifying objects <cit.>, navigating their environment and avoiding obstacles <cit.>, visual positioning from depth images <cit.> <cit.> <cit.>, image captioning for visually impaired <cit.> <cit.> and Human Action Recognition and Coding based on Skeleton <cit.> <cit.>.
In general, human-scene interaction aims to facilitate understanding and exploration of the environment for visually impaired individuals, and it can be categorized as Tactile-Sound Interaction and Tactile-Tactile Interaction.
In <cit.>, the authors proposed a method for semantic scene labeling using RGB-D images to facilitate human-scene interaction. The obtained objects are converted into semantic codes inspired by the Braille system and the Japanese Kanji writing system.
Additionally, work has been done based on the sonification of images. In <cit.>, a software tool was developed to assist visually impaired individuals in identifying the color and brightness of an image through sonification. This software tool extracts color information from an image or video using HSV (hue, saturation, value) information, which is then converted into audio attributes such as pitch, timbre, and loudness. This tool can be used to gather information about the range of colors present in images, the presence or absence of light sources, as well as the location and shape of objects in the images.
Image sonification has also been proposed in <cit.> <cit.>, where individuals actively explore an image on a touchscreen and receive auditory feedback on the content of the image at the current position. In this system, features are extracted and classified, objects are detected and recognized, and they are acoustically represented using drum sounds.
Although such solutions can help individuals understand the scene content, in most cases users do not need to explore the details of the image but rather the content of the scene in terms of objects; in the dynamic case, image exploration becomes increasingly cumbersome.
In this direction, we propose to use the latest deep learning technologies to build a system that helps individuals stay informed about the surrounding scene, even while moving.
§ PROPOSED SYSTEM
Our system aims to assist visually impaired individuals in identifying objects and their locations from images. A tactile device has been developed to provide auditory feedback corresponding to the identity of the detected object, thereby helping these individuals obtain information about the scene. The proposed system is capable of identifying 17 types of objects in the observed scene.
This system is divided into three processes as illustrated in Figure <ref>.
The first two processes cooperate in interpreting and detecting objects in the observed scene. Since the system is embedded on a Raspberry Pi, it has limited resources (low RAM and processing power). Therefore, we have developed three object detection models, each responsible for detecting objects in a specific environment: Office, Kitchen, and Bedroom.
The main process is responsible for recognizing the appropriate environment in order to load the corresponding model. This process is reactivated at each new camera location and when the detection rate falls below a predefined threshold.
The second process is responsible for detecting objects and their locations in the image, and it transfers the coordinates of the object locations to the next process.
The third process involves associating the detected objects with a location on the tactile device and interacting with the user to produce corresponding sound feedback for the detected object.
§.§ The Detection Processes
The acquired image is input into the main process, which determines the appropriate environment or scene category (e.g., office, kitchen, bedroom) based on the visual cues and characteristics present in the image.
These three models are based on YOLOv5 and have been retrained on a dataset consisting of seven specific object classes. The goal of each model is to detect and recognize objects belonging to these seven classes in their corresponding environment.
The system operates using an object detection model that is responsible for detecting characteristic objects in each environment. Then, the k-nearest neighbors algorithm is executed to recognize the observed environment. In this algorithm, objects represent the features, and environments represent the target classes.
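As a concrete (hypothetical) sketch of this environment-recognition step — the object list, count vectors, and helper names below are illustrative and not taken from the paper — each frame is summarized by a vector of per-class detection counts, and a k-NN classifier maps that vector to an environment label:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative feature layout: counts of characteristic objects per frame.
OBJECTS = ["keyboard", "monitor", "cup", "fork", "bed", "pillow"]

# Hypothetical training vectors: object-count features -> environment label.
X_train = np.array([
    [2, 1, 1, 0, 0, 0],   # office-like scene
    [0, 0, 2, 3, 0, 0],   # kitchen-like scene
    [0, 0, 0, 0, 1, 2],   # bedroom-like scene
])
y_train = ["office", "kitchen", "bedroom"]

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)

def recognize_environment(detected_labels):
    """detected_labels: list of class names detected in the current frame."""
    counts = np.array([[detected_labels.count(obj) for obj in OBJECTS]])
    return knn.predict(counts)[0]

print(recognize_environment(["monitor", "keyboard", "keyboard"]))  # -> office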
Once the appropriate environment is determined, the second process takes over for object detection and location, while the first process is paused. It transfers the coordinates of each detected object to the final process. This process also has the responsibility of reactivating the main process when the detection rate falls below a predefined threshold (e.g., when the object detection model fails to detect more than 20% of the objects in the observed environment).
§.§ Mapping process
This process is responsible for converting the coordinates of objects in the image into relative coordinates on the tactile device. In cases where there is overlap between two objects, where both objects may appear in the same grid cell of the tactile device, or when a relatively large object occupies multiple grid cells, we have developed an algorithm to determine the order of objects belonging to the same grid cell.
Additionally, this process is responsible for interacting with the user through the tactile device. It produces sound feedback corresponding to each detected object. When the user touches or interacts with a specific pin on the tactile device, a specific sound is emitted to provide feedback to the user.
§.§ Model selection
In order to select the appropriate model for the specific task of integrating an object detection model into a Raspberry Pi, we conducted a comparative study of object detection algorithms based on Convolutional Neural Networks (CNNs).
Considering our objective of achieving acceptable precision and recall values while working with embedded systems, we conducted the comparison while considering the following constraints:
We focused on using the latest reduced versions of each model, commonly referred to as "tiny models," such as YOLOv5, Faster R-CNN, and SSD.
The object detection models used in the comparison were trained on the same image dataset and shared the same backbone architecture. This ensured that we could make meaningful observations regarding the advantages and disadvantages of these methods.
Initially, we conducted a comparison using the publicly available MS-COCO dataset <cit.>. We selected 20 classes with over 200 images per class, covering a range of sizes: small classes such as spoons, mice, and remote controls; medium-sized classes such as televisions and laptops; and large classes such as people, beds, and dining tables. These classes represent three environments (office, kitchen, and bedroom) to ensure a comprehensive and representative comparison.
To further strengthen the comparison, we also incorporated hundreds of additional images collected from the internet and captured with a Raspberry Pi device. These images were chosen to cover the selected classes and the three environments, and to account for the specific characteristics and challenges that arise when running the object detection models on an embedded system.
Tables <ref> and <ref> present the results obtained on the MS-COCO benchmark and a collected image dataset in terms of mAP0.5 (mean Average Precision at IoU threshold of 0.5). These results validate the effectiveness of YOLOv5 in comparison to Faster R-CNN and SSD. The tables demonstrate that the YOLOv5 structure is better suited for real-time applications due to its faster processing speed compared to the other structures.
The selected model, determined by the main process, utilizes a YOLOv5-based object detection method to identify and locate objects in the image. It generates bounding boxes that enclose each detected object, accompanied by confidence scores that must exceed 0.5 to be deemed valid.
The coordinates of the detected objects, represented by the bounding boxes, are extracted from the object detection model and transmitted to the final process for encoding them on the tactile device.
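The exact inference code is not given in the paper; a plausible sketch using the public ultralytics/yolov5 hub interface (the weight file name, image name, and the placement of the 0.5 threshold are our assumptions) is:

import torch

# Load the environment-specific fine-tuned weights (path is hypothetical).
model = torch.hub.load("ultralytics/yolov5", "custom", path="office_best.pt")
model.conf = 0.5                      # keep detections with confidence above 0.5

results = model("frame.jpg")          # one image captured by the piCamera
# Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index
boxes = results.xyxy[0].tolist()
labels = [results.names[int(cls)] for *_, cls in boxes]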
§ EXPERIMENTAL RESULTS
In Figure <ref>, we present the components of our interactive device designed for visually impaired individuals.
This compact and portable device is a Raspberry Pi equipped with a high-definition camera, a CPU, and 2 GB of RAM. The device analyzes the user's environment and detects objects in real time. The gathered information is then transmitted to the user through a haptic feedback system.
The haptic feedback system utilizes a device with 16 photoresistor sensors, enabling visually impaired individuals to comprehend their environment using their fingers. These sensors detect the presence of fingers and convert this information into audio feedback.
§.§ Making the device
Our main goal is to enable tactile-audio interaction with visually impaired users. To accomplish this, we have opted to utilize photoresistors: electronic components whose electrical resistance varies with incident light, the resistance being inversely proportional to the intensity of the light received. In tactile interaction, this technology can be employed to detect the changes in light caused by the user's touch on a light-sensitive surface.
By arranging multiple photoresistors as pins on a surface, we can detect which resistors are touched by the user. This enables tactile interaction where the user can interact with different pins and trigger actions, such as audio feedback through an audio output module connected to the Raspberry Pi. Each touched resistor corresponds to a specific sound based on the detected object's position in the image relative to the pin.
Figure <ref> illustrates the circuit connecting 16 photoresistors, with each photoresistor connected to one of the Raspberry Pi's pins. The associated code utilizes the RPi.GPIO library to manage GPIO pins on the Raspberry Pi. It configures the port for the photoresistor as an input. In the main loop, it checks the state of the photoresistor. If it is triggered (HIGH), it displays a message indicating that the user has touched the photoresistor.
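A minimal sketch of that polling loop (the BCM pin numbers, the pin-to-cell mapping, and the audio helper are illustrative assumptions, not the actual wiring):

import time
import RPi.GPIO as GPIO

PIN_TO_CELL = {4: "cell_0", 17: "cell_1"}    # GPIO pin -> grid cell (hypothetical wiring)

GPIO.setmode(GPIO.BCM)
for pin in PIN_TO_CELL:
    GPIO.setup(pin, GPIO.IN)                 # each photoresistor read as a digital input

try:
    while True:
        for pin, cell in PIN_TO_CELL.items():
            if GPIO.input(pin) == GPIO.HIGH: # finger detected over this pin
                print("Touched", cell)
                # play_sound_for(cell)       # assumed helper, e.g. via pygame or espeak
        time.sleep(0.05)
finally:
    GPIO.cleanup()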
§.§ Object Detection
YOLOv5 is an extremely efficient object detection algorithm based on deep learning. It has been trained on a dataset that includes images acquired at our faculty (refer to Fig <ref>), as well as a collection of publicly available images. The training process is conducted using PyTorch and the TensorFlow Lite platform.
By utilizing connected modules on the tactile device, the video images captured by the piCamera module are analyzed to search for objects that have been learned by the embedded model on the Raspberry Pi. Subsequently, a corresponding sound associated with each detected object is played.
The primary objective of this project is to identify and detect 17 different classes distributed across three categories: office, kitchen, and bedroom. During the data collection phase, we obtained a total of 2677 images specifically for the office environment.
The dataset consists of a total of 2677 samples, which are divided into 7 classes representing office environments. The smallest class contains approximately 290 samples. Each class has an adequate number of images distributed across the training, validation, and test sets. The image dataset is organized into three files: train (70%), validation (20%), and test (10%).
It is important to note that a lack of data can lead to overfitting or underfitting during the training process. To address these issues, data augmentation techniques are employed. Among the commonly used data augmentation methods, geometric transformations are particularly effective.
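As an illustration of box-aware geometric augmentation (the concrete transform set below is our assumption, and albumentations is one common choice for YOLO-format labels rather than necessarily the library used in this work):

import albumentations as A

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=10, p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# image: HxWx3 array; bboxes: [[x_center, y_center, w, h], ...] in normalized YOLO format
# augmented = augment(image=image, bboxes=bboxes, class_labels=class_labels)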
§.§ Transfer Learning
In the COCO dataset, there are 80 object categories, resulting in an output tensor dimension of 3 x (5 + 80) = 255. Here, 3 represents the number of anchor boxes per grid cell, 5 indicates the box coordinates (x, y, w, h) and the confidence score of each prediction, and 80 denotes the number of classes in the COCO dataset. In our specific dataset, such as the office environment, we have 7 classes; hence, the output dimension of the classifier is 3 x (5 + 7) = 36.
To address the object detection challenge, we employed YOLOv5 for both detection and classification tasks. We divided the 20 classes into 3 categories, and each category was trained independently using its own model. This approach was adopted to ensure that a relatively small number of classes per category were trained, as the model would be deployed on an embedded system.
We fine-tuned and configured the YOLOv5 architecture specifically for our dataset. To achieve this, we employed transfer learning, adapting the YOLOv5 framework to be compatible with our dataset. We utilized pre-trained weights from a different model that had been trained on the extensive COCO dataset.
For training our model (yolov5s.pt), we utilized the standard Colab VM with 12GB of GPU memory. To enhance the robustness of the trained model and better utilize the available GPU resources, we set the batch size to 4. Additionally, we conducted training for a total of 100 epochs, observing that the trained model reached stability.
Throughout the experiments, we incorporated various hyperparameters. Some of these included weight decay = 0.0005, initial learning rate = 0.0042, final learning rate = 0.1, and momentum = 0.937. These parameters were maintained at their default values. Ultimately, we trained and tested YOLOv5 on the Colab VM using our dataset.
§.§ Results of objects detection
Figure <ref> depicts the performance of the fine-tuned YOLOv5 model during the transfer learning process. The top row of images represents the model's performance on the training set, while the bottom row represents its performance on the validation set.
Upon examining these images, it is evident that the object detection loss for our classes in the training set decreased to a value below 0.02 after 100 epochs. A similar trend can be observed in the validation set, where the object detection loss also decreased over the course of training.
To provide a more detailed analysis of the model's training process and performance, Figure <ref>(top) displays a plot showcasing the precision and recall mapping for detecting the seven classes during training. From the figure, it is evident that the model achieved a mean Average Precision (mAP) of 86.3%. This mAP value represents the area under the curve, indicating the trained model's ability to accurately detect objects with high precision and recall values.
To highlight the superiority of our selected object detection model for the desk environment (comprising the previously mentioned 7 classes), we conducted a comparison with the detection model prior to transfer learning, namely yolov5s. Figure <ref> showcases the mAP results obtained by both models on a training image set.
It is evident from the results that our model surpasses yolov5s in terms of average precision for the 7 classes. It is important to highlight that the object labeled as dining table in yolov5 is distinct from the desk object. Based on the conducted experiments, our transfer learning model derived from yolov5s exhibits superior performance. Therefore, we can confidently utilize our model for the project.
§.§.§ Mapping
The pin grid provides an organized structure and spatial reference for each object based on its position and size. This facilitates further processing or interaction with the detected objects within the project's context.
Figure <ref> shows how the detected objects in the image are associated with their corresponding cells. Each object's bounding box is associated with a grid cell as long as the majority of its surface lies within that cell. A bounding box can be associated with multiple cells, and likewise, a cell can contain multiple bounding boxes.
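One plausible reading of this assignment rule, covering both small objects (majority of the box inside one cell) and large objects (box covering several cells), is sketched below; the 4x4 grid matches the 16-pin device, and the 0.5 thresholds are our assumptions:

def assign_boxes_to_cells(boxes, img_w, img_h, rows=4, cols=4):
    """boxes: list of (x1, y1, x2, y2, label). Returns {(row, col): [labels]}."""
    cell_w, cell_h = img_w / cols, img_h / rows
    grid = {}
    for x1, y1, x2, y2, label in boxes:
        box_area = max(1e-6, (x2 - x1) * (y2 - y1))
        for r in range(rows):
            for c in range(cols):
                cx1, cy1 = c * cell_w, r * cell_h
                overlap = (max(0.0, min(x2, cx1 + cell_w) - max(x1, cx1)) *
                           max(0.0, min(y2, cy1 + cell_h) - max(y1, cy1)))
                # attach the box if most of it lies in this cell, or it covers most of the cell
                if overlap >= 0.5 * box_area or overlap >= 0.5 * cell_w * cell_h:
                    grid.setdefault((r, c), []).append(label)
    return grid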
§ CONCLUSION
The designed and developed device aims to assist visually impaired individuals by providing information about the objects present in their surroundings, whether they are static or dynamic. Additionally, it enables users to determine their position within the scene.
Tests conducted on the device indicate its usefulness for visually impaired individuals. The system is also capable of capturing images from a TV or desk screen and dynamically mapping the recognized objects in real time on the device.
It's important to note that the current version of the system does not take depth information into account. However, future work will focus on incorporating depth information to accurately locate objects in the device based on their positions in the scene, rather than just as they appear in the image.
00
Ibel2022
Farah Ibelaiden and Slimane Larabi.
Visual place representation and recognition from depth images.
Optik,260,2022.
Zatout2019
Zatout, Chayma and Larabi, Slimane and Mendili, Ilyes and Barnabé, Soedji Ablam Edoh. Ego-Semantic Labeling of Scene from Depth Image for Visually Impaired and Blind People. IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019,
pp. 4376-4384.
Ibel2020
Ibelaiden, Farah and Sayah, Brahim and Larabi, Slimane. Scene Description from Depth Images for Visually Positioning.
2020 1st International Conference on Communications, Control Systems and Signal Processing (CCSSP), 2020, pp.101-106.
Benhamida2022
Benhamida, Leyla and Larabi, Slimane. Human Action Recognition and Coding based on Skeleton Data for Visually Impaired and Blind People Aid System.
2022 First International Conference on Computer Communications and Intelligent Systems (I3CIS), 2022, pp. 49-54.
Delloul2022_2
Delloul, Khadidja and Larabi, Slimane. Egocentric Scene Description for the Blind and Visually Impaired.
5th International Symposium on Informatics and its Applications (ISIA), 2022.
pp. 1-6.
Delloul2022
K. Delloul and S. Larabi. Image Captioning State-of-the-Art: Is It Enough for the Guidance of Visually Impaired in an Environment?
Advances in Computing Systems and Applications, 2022,
pp. 385-394.
Ibel2020_2
Ibelaiden, Farah and Larabi, Slimane. A Benchmark for Visual Positioning from Depth Images.
4th International Symposium on Informatics and its Applications (ISIA), 2020, pp. 1-6.
34
Bhole, Swapnil and Dhok, Aniket.
Deep Learning based Object Detection and Recognition Framework for the Visually-Impaired.
Fourth International Conference on Computing Methodologies and Communication (ICCMC), 2020, pp. 725-728.
31
Hegde, Pavan, et al.
Smart Glasses for Visually Disabled Person.
Journal of Research in Engineering and Science (IJRES), 9(7), pp. 62-68, 2021.
benhamida2023theater
Leyla Benhamida and Slimane Larabi. Theater Aid System for the Visually Impaired Through Transfer Learning of Spatio-Temporal Graph Convolution Networks.
arXiv 2023, eprint. 2306.16357.
CCSSP2020
Zatout, Chayma and Larabi, Slimane.
A Novel Output Device for visually impaired and blind people’s aid systems.
2020 1st International Conference on Communications, Control Systems and Signal Processing (CCSSP), pp. 119-124.
these-chaymal-arabi
Zatout, Chayma and Larabi, Slimane.
Semantic scene synthesis: application to assistive systems.
The Visual Computer, 38, pp. 2691-2705, 2022.
Arkin2022
Ershat Arkin, Nurbiya Yadikar, Xuebin Xu, Alimjan Aysa and Kurban Ubul.
Object detection methods from CNN to transformer.
Multimedia Tools and Applications, 1, 2022, pp. 1573-7721.
Jiao_2019
Licheng Jiao and Fan Zhang and Fang Liu and Shuyuan Yang and Lingling Li and Zhixi Feng and Rong Qu. A Survey of Deep Learning Based Object Detection.
IEEE Access, 2019,(7), pp. 128837-128868.
girshick2014rich
Girshick, Ross and Donahue, Jeff and Darrell, Trevor and Malik, Jitendra.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587.
girshick2015fast
Girshick, Ross. Fast R-CNN.
2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440-1448.
LARABI2009
Slimane Larabi. Textual description of shapes.
Journal of Visual Communication and Image Representation, 20(8), pp. 563-584,
2009.
ren2016faster
Ren, Shaoqing and He, Kaiming and Girshick, Ross and Sun, Jian.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
Advances in Neural Information Processing Systems, 28, 2015.
8417976
Lin, Tsung-Yi and Goyal, Priya and Girshick, Ross and He, Kaiming and Dollár, Piotr.
Focal Loss for Dense Object Detection.
IEEE Transactions on Pattern Analysis and Machine Intelligence,42(2),pp. 318-327, 2020.
Liu_2016
Wei Liu and Dragomir Anguelov and Dumitru Erhan and Christian Szegedy and Scott Reed and Cheng-Yang Fu and Alexander C. Berg. SSD: Single Shot MultiBox Detector. Computer Vision ECCV, 2016, pp. 21-37
wang2021scaledyolov4
Chien-Yao Wang and Alexey Bochkovskiy and Hong-Yuan Mark Liao.
Scaled-YOLOv4: Scaling Cross Stage Partial Network.
arXiv, eprint. 2011.08036, 2021.
dosovitskiy2021image
Alexey Dosovitskiy et al.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.
arXiv, eprint. 2010.11929, 2021.
Lin2014
Lin, Tsung-Yi et al.
Microsoft COCO: Common Objects in Context.
Computer Vision ECCV 2014, pp. 740-755.
naftali2022
Martinus Grady Naftali and Jason Sebastian Sulistyawan and Kelvin Julian.
Comparison of Object Detection Algorithms for Street-level Objects.
arXiv, eprint. 2208.11315, 2022.
Banf2016
Michael Banf, Ruben Mikalay, Baris Watzke and Volker Blanz.,
PictureSensation – a mobile application to help the blind explore the visual
world through touch and sound. Journal of Rehabilitation and Assistive
Technologies Engineering, (3), 2016, pp. 1–10.
Diwan2023
Tausif Diwan and Grandhi Sai Anirudh and Jitendra V. Tembhurne.
Object detection using YOLO: challenges, architectural successors, datasets and applications. Multimedia Tools and Applications, 82, pp. 9243 - 9275, 2023.
CAVACO2013
Sofia Cavaco and J. Tomás Henriques and Michele Mengucci and Nuno Correia and Francisco Medeiros.
Color Sonification for the Visually Impaired.
Procedia Technology,(9), pp. 1048-1057,2013.
Banf2013
Banf, Michael and Blanz, Volker.
Sonification of Images for the Visually Impaired Using a Multi-Level Approach.
Proceedings of the 4th Augmented Human International Conference, 2013, pp. 162–169.
|
http://arxiv.org/abs/2307.00518v1
|
20230702085310
|
DSTCGCN: Learning Dynamic Spatial-Temporal Cross Dependencies for Traffic Forecasting
|
[
"Binqing Wu",
"Ling Chen"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
DSTCGCN: Learning Dynamic Spatial-Temporal Cross Dependencies for Traffic Forecasting
Binqing Wu, Ling Chen
This work was supported by the National Key Research and Development Program of China under Grant 2018YFB0505000. (Corresponding author: Ling Chen.)Binqing Wu and Ling Chen are with the College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China (emails: [email protected], [email protected]).
Traffic forecasting is essential to intelligent transportation systems, which is challenging due to the complicated spatial and temporal dependencies within a road network. Existing works usually learn spatial and temporal dependencies separately, ignoring the dependencies crossing spatial and temporal dimensions. In this paper, we propose DSTCGCN, a dynamic spatial-temporal cross graph convolution network to learn dynamic spatial and temporal dependencies jointly via graphs for traffic forecasting. Specifically, we introduce a fast Fourier transform (FFT) based attentive selector to choose relevant time steps for each time step based on time-varying traffic data. Given the selected time steps, we introduce a dynamic cross graph construction module, consisting of the spatial graph construction, temporal connection graph construction, and fusion modules, to learn dynamic spatial-temporal cross dependencies without pre-defined priors. Extensive experiments on six real-world datasets demonstrate that DSTCGCN achieves the state-of-the-art performance.
Traffic forecasting, spatial-temporal graph neural networks, fast Fourier transform
§ INTRODUCTION
Traffic forecasting is an essential part of an intelligent transportation system and a crucial technique for developing a smart city <cit.>. Accurate traffic forecasting will provide reliable guidance for scheduling transportation resources, mitigating traffic congestion, raising early warnings for public safety, and offering suggestions to citizens for their daily commuting <cit.>. Since traffic forecasting has a wide range of real-world applications, it has become a popular research focus in academic and industrial communities for decades.
Traffic forecasting aims to accurately predict future traffic data, e.g., traffic flow and speed, given historical traffic data recorded by sensors on a road network. It is highly challenging due to complicated spatial and temporal dependencies within the road network. Spatially, traffic data collected by a sensor are influenced by nearby traffic conditions, as the traffic dynamics propagate along the road. Temporally, the current features of traffic data are influenced by historical features. Moreover, spatial dependencies and temporal dependencies are entangled and time-varying in real-world traffic systems.
In the past decades, many works have been proposed for this challenging task, from using shallow machine learning <cit.> to applying recurrent neural network (RNN) and convolutional neural network (CNN) based deep learning <cit.>. Although these works make it possible to model temporal dependencies and grid-based spatial dependencies, they cannot capture graph-based spatial dependencies within an irregular road network in reality <cit.>. Towards this problem, graph neural network (GNN) based works have been proposed to leverage the graph structure of a road network effectively <cit.>. Specifically, these works use a graph to define a road network, where each node represents a sensor, and each edge represents a spatial dependency between sensors. More recently, researchers have integrated GNNs to capture spatial dependencies with RNNs <cit.>, CNNs <cit.>, or Attentions <cit.> to model temporal dependencies. This type of network, known as spatial-temporal graph neural networks (STGNNs), has shown the state-of-the-art performance for traffic forecasting <cit.>.
Despite the success, the performance of many existing STGNNs is highly constrained by utilizing static dependencies. They usually construct static graphs, e.g., distance graphs <cit.>, POI similarity graphs <cit.>, temporal similarity graphs <cit.>, and static adaptive graphs <cit.>, to model spatial dependencies, which neglect the changing nature of the spatial dependencies within road networks. Some explorations have been conducted to model such dynamics. For example, a static graph and dynamic attribute based graphs are integrated to obtain time-varying structures <cit.>, and attention mechanisms are exploited to construct structures changing with time <cit.>. However, these works only focus on the dynamics of spatial dependencies and ignore dependencies crossing spatial and temporal dimensions, which may fail to extract some effective features carried by cross dependencies.
The effectiveness of spatial-temporal cross dependencies has been empirically shown for traffic forecasting. These works <cit.> usually represent cross dependencies by a fused graph, e.g., a spatial-temporal synchronous graph <cit.> constructed by distance graphs and temporal connection graphs, and a spatial-temporal fusion graph <cit.> constructed by distance graphs, time similarity graphs, and temporal connection graphs. However, these works still rely on static graphs, which cannot capture dynamic cross dependencies.
To address the aforementioned problems, we propose a Dynamic Spatial-Temporal Cross Graph Convolution Network (DSTCGCN). To the best of our knowledge, DSTCGCN is the first work that learns dynamic spatial and temporal dependencies jointly via graphs to explore and utilize time-varying cross dependencies for traffic forecasting. The main contributions of our work are as follows:
* Introduce an FFT-based attentive selector to choose the relevant time steps for each time step based on real-world traffic data, which can model the dynamics of temporal dependencies. Moreover, it can limit the temporal neighbors of each time step to a small size and reduce the computational complexity.
* Introduce a dynamic cross graph construction module to fuse time-varying spatial graphs and temporal connection graphs in a directed and sparse way, which can model dynamic spatial-temporal cross dependencies without introducing over-fitting problems.
* Evaluate DSTCGCN on six real-world datasets for traffic flow and traffic speed forecasting. The comprehensive experimental results demonstrate the state-of-the-art performance of DSTCGCN.
§ RELATED WORK
§.§ STGNNs for traffic forecasting
GNNs have shown superior performance in many applications due to their ability to model non-Euclidean dependencies <cit.>. In particular, for traffic forecasting, STGNNs have shown the state-of-the-art performance, as they can learn spatial dependencies and temporal dependencies more effectively compared with other deep learning works <cit.>.
Many STGNNs <cit.> integrate GNNs to capture spatial dependencies with RNNs, CNNs, or Attentions to model temporal dependencies. For example, STGCN <cit.> deploys GCN and 1-D convolution to capture spatial and temporal dependencies, respectively. ASTGCN <cit.> improves STGCN by introducing spatial and temporal attention mechanisms into the model to capture the dynamics of traffic data. ASTGNN <cit.> develops a GCN to model the spatial dependencies and a temporal trend-aware multi-head self-attention to capture the temporal dependencies. Since these STGNNs mainly use pre-defined graphs, e.g., geometric distance, functional similarity, and transportation connectivity <cit.>, they might miss some implicit spatial dependencies.
To address this problem, some graph learning works have been proposed to construct graph structures from observed data end-to-end. Graph WaveNet <cit.> learns a self-adaptive adjacency matrix to capture spatial dependencies by multiplying two learnable node embeddings. AGCRN <cit.> constructs an adjacency matrix directly by multiplying one learnable node embedding and its transpose. MTGNN <cit.> learns a uni-directional adjacency matrix using two node embeddings. RGSL <cit.> further regulates the learned graphs by Gumbel-softmax. Although these learned graphs can alleviate the limitation of pre-defined graphs, they still model static spatial dependencies, which neglect the changing nature of the spatial dependencies within road networks.
There have been some explorations of modeling dynamic spatial dependencies. For example, SLCNN <cit.> learns dynamic structures by a function of the current samples. DGCRN <cit.> constructs dynamic adjacency matrices by integrating dynamic features (time stamps and speeds). DSTAGNN <cit.> and D2STGNN <cit.> obtain dynamic adjacency matrices using attention mechanisms. However, these works only focus on the dynamics of spatial dependencies, which ignores time-varying dependencies crossing spatial and temporal dimensions.
§.§ Cross dependency modeling
Existing works usually learn spatial and temporal dependencies separately, ignoring the dependencies crossing spatial and temporal dimensions. Recently, some works have started to model such cross dependencies. STSGCN <cit.> first proposes a spatial-temporal synchronous graph constructed by spatial graphs and temporal connection graphs to capture the cross dependencies at the adjacent time steps. Following STSGCN, STFGCN <cit.> fuses the global temporal similarity graphs with spatial graphs and temporal connection graphs to get a spatial-temporal fusion graph, which extends the time range of cross dependencies to all time steps. TAMP-S2GCNETS <cit.> constructs directed supra graphs via spatial graphs and temporal connection graphs. AutoSTS <cit.> designs a set of candidate cross graphs for automated spatial-temporal synchronous modeling by neural architecture search algorithm.
However, these works still rely on static graphs, which cannot capture dynamic cross dependencies. More recently, TravseNet <cit.> unifies all spatial dependencies and temporal dependencies via attention mechanisms. Although TravseNet can model dynamic cross dependencies to some extent, it suffers from too many parameters and has to rely on sparse implementation of deep graph library for computation. Since dynamic cross dependencies are critical for traffic forecasting but not well explored yet, we propose DSTCGCN to capture time-varying cross dependencies by learning dynamic spatial and temporal dependencies jointly without introducing over-fitting problems.
§ PROBLEM DEFINITION
Following previous studies <cit.>, the task of traffic forecasting is defined as forecasting the future traffic data given the historical traffic data of a road network. Formally, we define these traffic data recorded by sensors located in the road network as a set X^1:T={X^1,X^2,…,X^T}∈ℝ^N× T × C, where N is the total number of time series, T is the input length of each time series, and C is the dimension of input features. X^t ∈ℝ^N × C denotes observed values of N time series at time step t. The traffic forecasting task can be formulated as:
X^T+1: T+H=ℱ(X^1:T ; Θ) ∈ℝ^N × H × F,
where H denotes the forecasting horizon and F is the dimension of output features. ℱ is the deep learning network for forecasting, and Θ denotes all its learnable parameters.
§ METHODOLOGY
§.§ Model overview
Learning dynamic spatial and temporal dependencies jointly via graphs is challenging for 1) controlling the parameter number and computational complexity of the model to avoid the risk of over-fitting; and 2) handling the intricate characteristics of different nodes at different time steps. Therefore, we propose DSTCGCN (demonstrated in Fig. <ref>) to tackle these difficulties.
Specifically, an FFT-based attentive selector is introduced to choose relevant time steps for each time step based on time-varying traffic data. Since we renovate the vanilla attention mechanism by FFT and limit the relevant time steps to a small size, the parameter number and computational complexity of the designed selector are well-controlled. A dynamic cross graph construction module is introduced to learn dynamic spatial and temporal dependencies jointly without pre-defined prior, consisting of spatial graph construction, temporal connection graph construction, and fusion parts. Since the constructed temporal connection graphs are diagonal and the cross graphs are upper triangular considering directed information propagation, the graph convolution on the cross graphs does not make the overall model heavy-computing. Moreover, we apply the idea of decomposition to generate parameters of spatial/cross graph convolutions, which can further reduce the model parameters. We utilize GRU as the backbone model and replace the MLP layers in GRU with our graph convolution layers, which can capture spatial, temporal, and spatial-temporal cross dependencies jointly.
§.§ FFT-based attentive selector
To alleviate the over-fitting problem and heavy computational burden, we introduce an FFT-based attentive selector. Since attention mechanisms have an excellent ability to deal with dynamic spatial and temporal features <cit.>, we design our selector based on an attention mechanism and reduce its computational complexity using FFT.
Drawing insights from instance normalization (IN) <cit.> for computer vision tasks, we apply temporal normalization (TN) <cit.> to extract high-frequency components of traffic data, which can reflect the changing characteristics of inputs. We concatenate them with original traffic data and feed these enriched features into the FFT-based attentive selector. The process can be formulated as:
X_norm = TN(X)
X̃ = Concat(X,X_norm),
where X∈ℝ^N × T × C, X̃∈ℝ^N × T × 2C.
As illustrated in Fig. <ref>, the inputs are transformed into query Q∈ℝ^N × T × d_h and key K∈ℝ^N × T × d_h by linear projections. Inspired by <cit.>, applying FFT in attention mechanisms can not only reduce the computational cost but also help to obtain better temporal representations. Thus, we project Q and K into Fourier space and use the Hadamard product instead of matrix multiplication to calculate the attention weights. Formally, the attention weights for the query at time step i, Q^i∈ℝ^N × d_h, and the key at time step j, K^j∈ℝ^N × d_h, can be calculated by:
Q^i = Linear(X̃^i)
K^j = Linear(X̃^j)
M^ij =ℱ^-1(ℱ(Q^i) ⊙ℱ(K^j)^*),
where ℱ denotes the FFT, ℱ^-1 is its inverse, (·)^* is the conjugate operation, and ⊙ is the Hadamard product. ℱ(Q^i)∈ℂ^N × d_F and ℱ(K^j)∈ℂ^N × d_F are the Fourier transforms of Q^i and K^j, respectively, with d_F<d_h.
To further reduce the computational complexity and obtain more representative temporal characteristics, the values along the node and feature dimensions of M^ij are aggregated by mean pooling to obtain a single attention value M_agg^ij measuring the relevance between time step i and time step j. Given the attention values between all time steps M_agg∈ℝ^T × T, the top-τ relevant time steps are selected for each time step:
I_sel,W_sel = Top-τ (M_agg),
where the selected index and relevant weights are denoted as I_sel∈ℝ^T ×τ and W_sel∈ℝ^T ×τ, respectively. The corresponding relevant traffic data for all time steps are denoted as X_sel∈ℝ^N × T ×τ× C.
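A compact PyTorch sketch of this selector (the truncation to the first d_F frequencies is our assumption about how the projection to Fourier space is realized, and the linear layers are freshly initialized here only for illustration):

import torch

def fft_attentive_select(x_tilde, tau, d_h=32, d_F=8):
    """x_tilde: (N, T, 2C) enriched inputs. Returns I_sel, W_sel of shape (T, tau)."""
    N, T, C2 = x_tilde.shape
    Q = torch.nn.Linear(C2, d_h)(x_tilde)                   # (N, T, d_h)
    K = torch.nn.Linear(C2, d_h)(x_tilde)                   # (N, T, d_h)
    Fq = torch.fft.rfft(Q, dim=-1)[..., :d_F]               # project to Fourier space
    Fk = torch.fft.rfft(K, dim=-1)[..., :d_F]
    # Hadamard product of F(Q^i) with the conjugate of F(K^j) for every pair (i, j)
    prod = Fq.unsqueeze(2) * torch.conj(Fk).unsqueeze(1)    # (N, T, T, d_F)
    M = torch.fft.irfft(prod, n=d_h, dim=-1)                # back to real space
    M_agg = M.mean(dim=(0, -1))                             # (T, T) aggregated scores
    W_sel, I_sel = torch.topk(M_agg, k=tau, dim=-1)         # per-time-step selection
    return I_sel, W_sel

I_sel, W_sel = fft_attentive_select(torch.randn(5, 12, 2), tau=3)   # toy sizes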
Computational complexity analysis: The complexities of FFT, Hadamard production, and mean aggregation are O(Nd_Flog(d_h)), O(Nd_F), and O(1), respectively. Thus, the complexity of calculating the attention value of a pair of time steps (Calculation of M^ij in Eq. <ref>) is O(Nd_Flog(d_h)), while the complexity of the vanilla self-attention to obtain a single attention value is O((Nd_h)^2) with the input dimension Nd_h.
§.§ Dynamic cross graph construction
Learning a graph structure from observed data is promising for pushing forward the forecasting ability of STGNNs, as it breaks the bottleneck of requiring a pre-defined graph structure for GNNs. Recently, some graph learning methods have been proposed. For example, Graph WaveNet <cit.>, AGCRN <cit.>, and MTGNN <cit.> adopt learnable node embeddings to construct a static graph structure. DAAGCN <cit.> further utilizes time-varying embeddings to construct dynamic graph structures. Following these previous works, the dynamic graphs are constructed by node and time embeddings.
§.§.§ Spatial graph construction
To balance the effectiveness and simplicity of dynamic spatial graph construction, we apply the idea of decomposition. We randomly initialize a node embedding E_N∈ℝ^N × d_e to represent the node-specific characteristics shared by all time steps, and a time embedding E_T∈ℝ^T × d_e to indicate the relative dynamic time characteristics for time steps within a time range. We then combine them to learn dynamic spatial graphs. Specifically, for the spatial graph at time step t, the construction process can be formulated as:
E^t =E_N⊕E_T^t
A_S^t =softmax(E^t(E^t)^T),
where E_T^t ∈ℝ^1 × d_e denotes the time embedding of time step t, and ⊕ represents the broadcasting addition that adds E_T^t to each row of E_N. Therefore, E^t can be regarded as the embedding that integrates the characteristics of the nodes and of time step t. Then, E^t is used to construct the spatial graph at time step t, denoted as A_S^t ∈ℝ^N × N.
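This construction translates directly into code; a small sketch (a row-wise softmax is assumed, and the embedding sizes are toy values):

import torch
import torch.nn.functional as F

N, T, d_e = 207, 12, 10
E_N = torch.nn.Parameter(torch.randn(N, d_e))       # node embedding, shared over time
E_T = torch.nn.Parameter(torch.randn(T, d_e))       # relative time embedding

def spatial_graph(t):
    E_t = E_N + E_T[t]                               # broadcasting addition E_N ⊕ E_T^t
    return F.softmax(E_t @ E_t.T, dim=-1)            # A_S^t of shape (N, N)

A_S_0 = spatial_graph(0)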
§.§.§ Temporal connection graph construction
In previous works <cit.>, temporal connection graphs are diagonal matrices with shared values for all nodes and all time steps. Such designs make all nodes share the same temporal dependencies for all time steps, which neglects the specific characteristics of nodes and time steps. To address this problem, we construct temporal connection graphs based on the self-dependencies of spatial graphs and the relevant weights of the selected time steps calculated by the FFT-based attentive selector.
For each time step, we construct τ temporal connection graphs. Taking time step t as an example, the diagonal values of the spatial graph A_S^t are chosen to represent the self-impacts of nodes, denoted as D_S^t ∈ℝ^N × 1. Then, the weights of selected time steps W_sel^t ∈ℝ^1 ×τ (calculated by the FFT-based attentive selector) are used as the coefficients to adjust the self-impacts. Since D_S^t and W_sel^t are adaptive to nodes and time steps, respectively, the adjusted self-impacts can be used to construct temporal connection graphs, considering the specific characteristics of nodes and time steps. The learning process can be formulated as:
A_T,i^t =fill_diagonal(D_S^t ⊙W_sel, i^t)
A_T^t ={A_T, t_1^t, A_T, t_2^t, …, A_T, t_τ^t},
where i ∈{1,2,…,T} indicates the temporal index of the selected time step. A_T,i^t ∈ℝ^N × N is the corresponding temporal connection graph. A_T^t ∈ℝ^N × N ×τ with t_1 < t_2< ⋯ <t_τ. It is worth mentioning that D_S^t contains the relative temporal characteristics from the time embedding E_T^t (introduced in Section IV C.1) and W_sel^t contains the absolute temporal characteristics from X^t (introduced in Section IV A). Therefore, the constructed temporal connection graphs are time-varying and can model dynamic temporal dependencies, which makes them more flexible than the static temporal connection graphs used in previous works <cit.>.
§.§.§ Fusion
Given a spatial graph and multiple temporal connection graphs at each time step (both are time-specific), we fuse them into a cross graph. To reduce the computational cost, we follow the assumption that the propagation direction of information is from past to present to future <cit.>. Thus, the cross graph is directed, and the temporal impact from the time step with a larger index is prioritized during fusing.
Specifically, for time step t, the cross graph A_C^t ∈ℝ^τ N ×τ N is a directed graph. The spatial graph A_S^t is assigned on the diagonal of the cross graph to maintain the spatial dependencies between nodes. The temporal connection graphs A_T^t, regarded as self temporal impacts from selected time steps, are assigned to the upper triangular part of the cross graph. The combination of A_S^t and A_T,i^t on the diagonal is the addition for simplicity. Formally, the construction process of the cross graph at time step t can be formalized as:
A_C^t=[[ A_S^t+A_T, t_1^t A_T, t_2^t ⋯ A_T, t_τ^t; 0 A_S^t+A_T, t_2^t ⋯ A_T, t_τ^t; ⋮ ⋮ ⋱ ⋮; 0 0 0 A_S^t+A_T, t_τ^t ]],
where A_C^t ∈ℝ^τ N ×τ N is the cross graph at time step t. t_1 < t_2< ⋯ <t_τ, and τ is the number of the selected time steps from the FFT-based selector. An example shown in Fig. <ref> (d) illustrates the fused cross graph.
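A sketch of the temporal-connection construction and of the fusion into the τN × τN block upper-triangular cross graph (the selected time steps are assumed to be already sorted as t_1 < … < t_τ; the toy sizes are ours):

import torch

def cross_graph(A_S_t, W_sel_t, N, tau):
    """A_S_t: (N, N) spatial graph at step t; W_sel_t: (tau,) weights of the
    selected time steps. Returns A_C^t of shape (tau*N, tau*N)."""
    D_S = torch.diagonal(A_S_t)                                   # self-impacts, (N,)
    A_T = [torch.diag(D_S * W_sel_t[i]) for i in range(tau)]      # temporal connection graphs
    A_C = torch.zeros(tau * N, tau * N)
    for row in range(tau):
        for col in range(row, tau):
            block = A_S_t + A_T[col] if col == row else A_T[col]  # diagonal vs upper blocks
            A_C[row*N:(row+1)*N, col*N:(col+1)*N] = block
    return A_C

A_C = cross_graph(torch.rand(4, 4), torch.tensor([0.5, 0.3, 0.2]), N=4, tau=3)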
Comparison with baselines: Fig. <ref> gives an example to illustrate the cross graphs constructed by baselines and DSTCGCN. STSGCN <cit.> and STFGCN <cit.> use static fused graphs for all time steps, ignoring the changing nature of spatial and temporal dependencies within traffic systems. For TraverseNet <cit.> and DSTCGCN, the cross graphs are dynamic. Under the settings of the example, the selected time steps of TraverseNet are three continuous time steps chosen by the fixed rule. In contrast, ours are chosen by the FFT-based attentive selector based on the real-time traffic inputs, which are more flexible. Moreover, since the selecting range is the whole time series, the temporal receptive field of our cross graphs is global, which may handle local outliers such as data missing. In addition, TraverseNet unifies all spatial and temporal dependencies via attention mechanisms, suffering from the high computational cost and the risk of overfitting.
§.§ Graph convolution
We introduce the cross graph convolution to extract dynamic spatial and temporal features jointly. Instead of learning the parameters of the cross graph convolution directly, we apply the idea of decomposition to generate them following <cit.>, which can reduce the model parameters. The process can be formalized as:
W_C^t =M^t K_C, weights
B_C^t =M^t K_C, bias,
where M^t ∈ℝ^N × d_e represents the specific characteristics of each node at each time step. K_C, weights∈ℝ^d_e× d_i× d_o (d_i and d_o are the input and output dimensions) and K_C, bias∈ℝ^d_e× d_o represent the hidden shared pattern for all nodes at all time steps. In particular, as mentioned in Section IV C.1, E^t can be regarded as the matrix that integrates the characteristics of the nodes and of time step t. Thus, we utilize E^t ∈ℝ^N × d_e directly as the character matrix M^t when generating parameters for time step t.
The cross graph convolution based on the message passing theory can be formalized as:
H_C, out^t=σ((A_C^t+I) H_C, in^t W_C^t+B_C^t),
where H_C,in^t ∈ℝ^τ× N × d_i and H_C,out^t ∈ℝ^τ× N × d_o are the input and output of cross graph convolution. For the first layer, H_C,in^t is X_sel^t ∈ℝ^τ× N × C. I∈ℝ^τ N ×τ N is the identity matrix, which represents the self-loop. We also introduce the spatial convolution following the same design of the cross graph convolution to preserve the pure spatial dependencies at the time step t, and extract the current features denoted as H_S,out^t ∈ℝ^N × d_o. Then, H_C,out^t and H_S,out^t are fused as the final output of graph convolution module, which can be formulated as:
H_G, out^t=Linear(Pooling(H_C, out^t), H_S, out^t),
where H_G,out^t ∈ℝ^N × d_o. Pooling(H_C, out^t) denotes the pooling operation on H_C, out^t to obtain the representative features for the τ time steps. The shape of the corresponding output is the same as H_S, out.
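A sketch of the parameter generation and message passing described above (toy dimensions; the sigmoid stands in for σ(·), while the GRU candidate state would use tanh instead):

import torch

def cross_graph_conv(A_C_t, H_in, E_t, K_w, K_b):
    """A_C_t: (tau*N, tau*N); H_in: (tau, N, d_i); E_t: (N, d_e) character matrix;
    K_w: (d_e, d_i, d_o); K_b: (d_e, d_o). Returns (tau, N, d_o)."""
    tau, N, d_i = H_in.shape
    W = torch.einsum("nd,dio->nio", E_t, K_w)            # node-specific weights (N, d_i, d_o)
    B = E_t @ K_b                                        # node-specific biases  (N, d_o)
    msg = (A_C_t + torch.eye(tau * N)) @ H_in.reshape(tau * N, d_i)
    H_out = torch.einsum("tni,nio->tno", msg.view(tau, N, d_i), W) + B
    return torch.sigmoid(H_out)

H_out = cross_graph_conv(torch.rand(12, 12), torch.rand(3, 4, 2),
                         torch.rand(4, 5), torch.rand(5, 2, 8), torch.rand(5, 8))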
§.§ Forecasting module
We utilize GRU as the backbone and replace the MLP layers in GRU with our graph convolution layers as <cit.>. The forward propagation equations for time step t are as follows:
z^t =σ(Gonv_z(X^t, X_sel^t, h^t-1))
r^t =σ(Gonv_r(X^t, X_sel^t, h^t-1))
c^t =tanh(Gonv_c(X^t, X_sel^t,(r^t ⊙h^t-1)))
h^t =z^t ⊙h^t-1+(1-z^t) ⊙c^t,
where X^t ∈ℝ^N × C denotes the traffic data at time step t. X_sel^t is the traffic data of the selected time steps. We take the final hidden state h^T to predict traffic data of all nodes for the next H steps by a linear transformation, which decreases the time consumption and avoids cumulative error caused by sequential forecasting.
The loss function in this work is L1 loss function:
ℒ(Θ)=∑_t=T+1^T+H|X^t-X̂^t|,
where Θ denotes all the learnable parameters in DSTCGCN. X^t and X̂^t are the ground truth and the forecasting results, respectively.
§ EXPERIMENTS
§.§ Dataset
We evaluate DSTCGCN on six public traffic datasets, including PEMS03/4/7/8 provided by <cit.> and METR-LA/PEMS-BAY provided by <cit.>, and follow the corresponding suggested data preprocessing strategies. Specifically, we fill missing data by linear interpolation and aggregate the data into 5-minute intervals, resulting in 12 time steps per hour. In addition, we normalize all datasets using Z-score normalization. The detailed statistics are shown in Table <ref>. In the “Signals” column, F represents traffic flow, S represents traffic speed, and O represents traffic occupancy rate. For PEMS03/4/7/8, we only use the traffic flow in the following experiments.
§.§ Experimental settings
We split the datasets chronologically for training, validation, and testing with a ratio of 6:2:2 for PEMS03/4/7/8 and 7:1:2 for METR-LA/PEMS-BAY. For our traffic forecasting task, one-hour data are used as input to forecast the next hour's data as output; each hour has 12 continuous time steps. DSTCGCN is implemented in Python with PyTorch 1.9.0 and trained on an NVIDIA GeForce RTX 3080 Ti GPU using the Adam optimizer with an initial learning rate of 0.003 and a batch size of 64. The Neural Network Intelligence (NNI) toolkit is applied to tune important hyperparameters automatically, which reduces computational costs efficiently. The search space of important hyperparameters and their final choices for different datasets are summarized in Table <ref>. The codes are available at https://github.com/water-wbq/DSTCGCN/.
The evaluation metrics, including Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE) on test data, are reported based on the saved best model that runs for 100 epochs on the validation data.
MAE =1/H∑_t=T+1^T+H|X^t-X̂^t|
RMSE =√(1/H∑_t=T+1^T+H(X^t-X̂^t)^2)
MAPE =1/H∑_t=T+1^T+H|(X^t-X̂^t)/X^t|.
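For reference, the three metrics can be computed as follows (a plain, unmasked NumPy version of the equations above; the small eps that guards against division by zero in MAPE is an added safeguard, and MAPE is reported in percent).

```python
import numpy as np

def evaluate(y_true, y_pred, eps=1e-8):
    """MAE, RMSE, and MAPE over all horizons and nodes (plain, unmasked form)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err) / (np.abs(y_true) + eps)) * 100.0  # in percent
    return mae, rmse, mape
```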
§.§ Baselines
The baselines used for our comparative evaluation can be divided into four categories. The first category is classic time-series forecasting methods, including HA, ARIMA <cit.> with Kalman filter, VAR <cit.>, and FC-LSTM <cit.>.
The second category is STGNNs with static graphs with two sub-categories:
* With static pre-defined adjacency matrices:
* STGCN <cit.> combines graph convolutional and two temporal gated convolutions to capture spatial and temporal dependencies.
* ASTGCN <cit.> applies spatial attention to model spatial dependencies between different locations and temporal attention to capture the dynamic temporal dependencies between different time steps.
* With static adaptive adjacency matrices:
* Graph WaveNet <cit.> constructs a self-adaptive adjacency matrix using two node embeddings dictionaries and applies graph convolution with dilated casual convolution to capture spatial and temporal dependencies.
* AGCRN <cit.> generates an adaptive adjacency matrix using one node embedding and uses node adaptive graph convolution and GRU to capture node-specific spatial and temporal dependencies, respectively.
* MTGNN <cit.> learns a uni-directional adjacency matrix using two node embeddings and uses GNN and dilated convolution for multi-variate time series forecasting.
* StemGNN <cit.> embeds the inputs using a GRU and treats the attention scores of the last hidden state as the adjacency matrix. It then combines a graph Fourier transform to model inter-series correlations with a discrete Fourier transform to model temporal dependencies.
* Z-GCNETs <cit.> integrates the new time-aware zigzag topological layer into time-conditioned GCNs for modeling spatial and temporal dependencies.
* RGSL <cit.> regularizes the adaptive adjacency matrix learned by node embeddings using Gumble-softmax. It applies GNN and GRU to model spatial and temporal dependencies, respectively.
The third category is STGNNs with dynamic graphs, including:
* DGCRN <cit.> constructs the adjacency matrix using static node embeddings and integrates dynamic features (time stamps and speeds) to adjust the matrix for modeling dynamic spatial dependencies.
* DSTAGNN <cit.> obtains the adjacency matrix by calculating the cosine similarity of input traffic data and uses attention mechanisms to adjust the matrix for dynamics.
The fourth category is STGNNs considering spatial-temporal cross dependencies, including:
* STSGCN <cit.> utilizes localized spatial-temporal subgraphs to capture the localized spatial-temporal dependencies synchronously.
* STFGCN <cit.> assembles a gated dilated CNN module with a spatial-temporal fusion graph module in parallel to capture local and global dependencies simultaneously.
* TraverseNet <cit.> utilizes attention to capture spatial and temporal dependencies.
§.§ Overall Comparison
Table <ref> shows the average forecasting performance over 12 horizons of DSTCGCN and other baselines on four traffic flow datasets, i.e., PEMS03/4/7/8. Bold denotes the best performance for each metric and underline denotes the second best. The following phenomena can be observed:
* Static graph-based methods (second category) perform significantly better than the classic time-series forecasting models (first category), as they can capture static spatial dependencies between traffic data. Moreover, static adaptive graph-based methods (Graph WaveNet, AGCRN, MTGNN, StemGNN, Z-GCNETs, and RGSL) outperform pre-defined graph-based methods (STGCN and ASTGCN). One possible reason for this phenomenon is that learning graphs from historically observed traffic data can discover some underlying spatial dependencies.
* Dynamic graph-based methods (third category) and cross graph-based methods (fourth category) perform better on average than static graph-based methods, owing to their abilities to model dynamics and cross dependencies, respectively.
* DSTCGCN (ours) outperforms almost all baselines on MAE, RMSE, and MAPE over four datasets, achieving a new state-of-the-art traffic flow forecasting performance. Specifically, DSTCGCN yields an average 11.47% relative MAPE reduction on four traffic flow datasets, benefiting from modeling dynamic cross dependencies.
Since the performance of some graph learning methods, including the static adaptive graph-based methods (AGCRN, MTGNN, and RGSL) and the dynamic adaptive graph-based method (DGCRN), is also promising in Table <ref>, we further compare DSTCGCN with them on two traffic speed datasets, i.e., METR-LA and PEMS-BAY, at horizons 3, 6, 9, and 12. The results are given in Table <ref>, where S and D denote static and dynamic graph-based methods, respectively. Table <ref> shows that DSTCGCN achieves the best results in 14 out of 18 cases. One possible reason is that these four baselines all focus on learning underlying spatial graphs (static or dynamic), neglecting to model temporal connection graphs. Therefore, DSTCGCN, which considers dynamic spatial and temporal connection graphs jointly, performs better than these methods.
§.§ Ablation study
To verify the effectiveness of each component of DSTCGCN, we conduct ablation studies with the following variants on PEMS04 and METR-LA:
* Without the FFT-based attentive selector (w/o FFT-AS): (1) w/o FFT-AS-1 selects τ relevant time steps randomly; (2) w/o FFT-AS-2 selects the τ latest time steps as the relevant time steps.
* Without temporal normalization (w/o TN): select the relevant time steps based only on the raw traffic data, without temporally normalized data.
* Without dynamic spatial graphs (w/o DSG): replace the dynamic spatial graphs with the static spatial graphs calculated by A_S=softmax(E_N (E_N)^T).
* Without dynamic temporal connection graphs (w/o DTCG): replace the temporal connection graphs with identity diagonal matrices on the upper triangular parts of spatial-temporal cross graphs.
* Without dynamic spatial-temporal cross graphs (w/o DSTCG): only use dynamic spatial graphs to model the spatial dependencies.
The corresponding results are shown in Table <ref>. DSTCGCN performs better than all other variants, which confirms the effectiveness of each component of our model. We also observe that the forecasting accuracy of the variant w/o DSTCG drops the most, which verifies the significance of modeling dynamic spatial and temporal dependencies jointly via graphs for real-world traffic flow and speed forecasting. In addition, the performance of the variants w/o DSG and w/o DTCG degrades noticeably, showing the effectiveness of modeling dynamics.
Moreover, since the variant w/o FFT-AS-1 selects τ time steps randomly, it ignores the temporal dependencies between time steps, leading to a performance drop. On the other hand, the variant w/o FFT-AS-2, which considers the latest τ time steps, shows a slight decrease in performance, possibly because it suffers from local noise. These results experimentally verify the effectiveness of the designed FFT-based attentive selector, as it selects the relevant time steps based on the similarity of real-world traffic data.
§.§ Hyperparameter Evaluation
To further understand the effect of critical hyperparameters in DSTCGCN, we conduct a hyperparameter evaluation. We utilize NNI to optimize our model. For fairness, we fix the other hyperparameters when investigating a given hyperparameter.
We highlight the following two evaluations closely related to dynamic cross graphs:
* Evaluation of the dimension of the node and time embeddings d_e. The node and time embeddings represent the characteristics of each node and each time step, respectively. d_e significantly affects their representative abilities and influences the construction of dynamic spatial, temporal, and cross graphs. We increase d_e from 5 to 17 with a step size of 1.
* Evaluation of the number of the selected relevant time steps τ. τ determines the size of the spatial-temporal cross graphs, which controls the number of spatial and temporal neighbors when modeling cross dependencies. We increase τ from 1 to 4 with a step size of 1.
Fig. <ref> shows the evaluation results of d_e on PEMS04. It can be seen that increasing d_e improves the representative capacity of the model, leading to a drop in MAE. However, MAE rises quickly once d_e exceeds 10. A possible reason is that a larger dimension brings more parameters to learn, making the model prone to over-fitting.
Fig. <ref> shows the evaluation results of τ on PEMS04. It can be seen that both a small and a large τ lead to weaker forecasting performance, and the optimal setting is 3. A small τ may limit the number of spatial and temporal neighbors, which cannot fully uncover the cross dependencies, while a large τ enlarges the set of spatial-temporal neighbors but may introduce noise into the model. In practice, τ=3 is suitable for DSTCGCN on PEMS04, as it balances the number of considered spatial-temporal neighbors against the noise involved.
§.§ Visualization
Fig. <ref> shows the forecasting performance for all nodes at each horizon on PEMS04 and PEMS08, which demonstrates that DSTCGCN has a stable forecasting advantage in short-term and long-term forecasting.
To examine the forecasting performance for specific nodes in real-world cases, we randomly select one-day traffic flow data of 4 nodes in PEMS04 and plot the ground truth and the forecasting values of STSGCN and DSTCGCN. As shown in Fig. <ref>, DSTCGCN responds more quickly and accurately to dynamic changes than STSGCN at the peaks of the traffic flow curves. One possible reason is that STSGCN utilizes a static distance matrix and static temporal connection graphs (identity matrices), which fail to model dynamic spatial and temporal dependencies jointly.
To further illustrate the modeling of dynamic spatial and temporal dependencies, we conduct case studies on METR-LA. We select an area with latitude from 34.10 to 34.16 and longitude from -118.30 to -118.36. The distribution of traffic speed sensors in that area and the corresponding distance adjacency matrix are illustrated in the upper part of Fig. <ref>. We randomly select a period (12 continuous time steps) from the test dataset and display the learned dynamic spatial matrices at time steps 0, 2, and 11, as shown in the lower part of Fig. <ref>. Comparing the dependencies between one node and the others (see the red windows) and within a sub-group of nodes (see the orange windows) in the distance matrix and the learned matrices, we find that our learned dynamic spatial matrices not only capture the highly correlated geometric spatial dependencies but also capture some dynamic hidden spatial dependencies.
In addition, we randomly choose three periods from the test dataset and illustrate the FFT-attentive values, which represent the temporal relevance weights between time steps, in Fig. <ref>. For time step 3, the top-3 relevant time steps are 3, 11, and 4 for period 1; 3, 5, and 6 for period 2; and 3, 4, and 2 for period 3. These phenomena support our motivation: the relevance between time steps is not static over time, so dynamic temporal dependencies should also be emphasized when modeling dynamics for traffic forecasting.
§.§ Model Parameters and Training Time
We compare the model parameters and training time of DSTCGCN with ASTGCN, RGSL, STSGCN, STFGCN, and TraverseNet on PEMS07 in Table <ref>.
RGSL and DSTCGCN achieve impressive performance with relatively few model parameters and fast training, as both adopt decomposition methods for their graph convolution networks. In particular, DSTCGCN has an advantage in accuracy over RGSL, as DSTCGCN models temporal dependencies from a dynamic perspective, which matches the changing nature of traffic systems. The results show that DSTCGCN achieves a good trade-off between computational cost and forecasting accuracy.
In addition, DSTCGCN shows clear superiority over methods that model cross dependencies, i.e., STSGCN, STFGCN, and TraverseNet. For example, due to the decomposition design, DSTCGCN has far fewer parameters than STSGCN and TraverseNet. The training time of DSTCGCN is almost half that of TraverseNet, as DSTCGCN selects only the top relevant time steps to model dynamic temporal dependencies rather than using all time steps as TraverseNet does. These designs help DSTCGCN balance computational cost and forecasting performance and reduce the risk of over-fitting when modeling cross dependencies.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we propose DSTCGCN to learn dynamic spatial and temporal dependencies jointly via graphs for traffic forecasting. Specifically, we introduce 1) an FFT-based attentive selector to choose relevant time steps for each time step based on time-varying traffic data, and 2) a dynamic cross graph construction module to fuse time-varying spatial graphs and temporal connection graphs in a directed and sparse way. The results of our extensive experiments on six real-world traffic datasets show that DSTCGCN outperforms the state-of-the-art traffic forecasting baselines.
In the future, we will focus on learning spatial-temporal cross dependencies with more causal properties. From Fig. <ref>, we find that the learned dynamic spatial matrices are dense, calling for a proper sparsification strategy to regularize them and make the model more lightweight. Since causal dependencies are inherently sparse, it is possible to introduce causal properties, e.g., Granger causality, when modeling spatial or cross dependencies. Moreover, some methods from explainable AI, e.g., ShapFlow <cit.>, can be applied to analyze the causal effect of the learned dependencies, which can deepen our understanding of traffic systems and provide reliable guidance for scheduling transportation resources.
Binqing Wu received her B.S. degrees in computer science from Southwest Jiaotong University, China, and the University of Leeds, UK, in 2020. She is currently a Ph.D. candidate with the College of Computer Science and Technology, Zhejiang University, China. Her research interests include time series forecasting and data mining.
Ling Chen received his B.S. and Ph.D. degrees in computer science from Zhejiang University, China, in 1999 and 2004, respectively. He is currently a professor with the College of Computer Science and Technology, Zhejiang University, China. His research interests include ubiquitous computing and data mining.
|
http://arxiv.org/abs/2307.03237v1
|
20230706180119
|
Asteroseismology with the Roman Galactic Bulge Time-Domain Survey
|
[
"Daniel Huber",
"Marc Pinsonneault",
"Paul Beck",
"Timothy R. Bedding",
"Joss Bland-Hawthorn",
"Sylvain N. Breton",
"Lisa Bugnet",
"William J. Chaplin",
"Rafael A. Garcia",
"Samuel K. Grunblatt",
"Joyce A. Guzik",
"Saskia Hekker",
"Steven D. Kawaler",
"Stephane Mathis",
"Savita Mathur",
"Travis Metcalfe",
"Benoit Mosser",
"Melissa K. Ness",
"Anthony L. Piro",
"Aldo Serenelli",
"Sanjib Sharma",
"David R. Soderblom",
"Keivan G. Stassun",
"Dennis Stello",
"Jamie Tayar",
"Gerard T. van Belle",
"Joel C. Zinn"
] |
astro-ph.IM
|
[
"astro-ph.IM",
"astro-ph.EP",
"astro-ph.GA",
"astro-ph.SR"
] |
Roman CCS White Paper
Asteroseismology with the Roman Galactic Bulge Time-Domain Survey
Scientific Categories: Stellar physics and stellar types; Stellar populations and the interstellar medium
Principal Authors:
Daniel Huber, University of Hawaii ([email protected]) & University of Sydney
Marc Pinsonneault, Ohio State University ([email protected])
List of contributing authors :
Paul Beck, Universidad de La Laguna & Instituto de Astrofisica de Canarias Timothy R. Bedding, University of Sydney Joss Bland-Hawthorn, University of Sydney Sylvain N. Breton, INAF, Osservatorio Astrofisico di Catania Lisa Bugnet, Institute of Science and Technology Austria ISTA William J. Chaplin, University of Birmingham Rafael A. García, Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM
Samuel K. Grunblatt, Johns Hopkins University Joyce A. Guzik, Los Alamos National Laboratory Saskia Hekker, Heidelberg University & Heidelberg Institute for Theoretical Studies (HITS) Steven D. Kawaler, Iowa State University Stéphane Mathis, Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM Savita Mathur, Instituto de Astrofísica de Canarias & Universidad de La Laguna Travis Metcalfe, White Dwarf Research Corporation Benoit Mosser, LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université de Paris Melissa K. Ness, Columbia University & Center for Computational Astrophysics Anthony L. Piro, Carnegie ObservatoriesAldo Serenelli, Instituto de Ciencias del Espacio (ICE, CSIC) Sanjib Sharma, Space Telescope Science Institute David R. Soderblom, Space Telescope Science Institute Keivan G. Stassun, Vanderbilt University Dennis Stello, University of New South Wales Jamie Tayar, University of Florida Gerard T. van Belle, Lowell Observatory Joel C. Zinn, California State University, Long Beach
Abstract:
Asteroseismology has transformed stellar astrophysics. Red giant asteroseismology is a prime example, with oscillation periods and amplitudes that are readily detectable with time-domain space-based telescopes. These oscillations can be used to infer masses, ages and radii for large numbers of stars, providing unique constraints on stellar populations in our galaxy.
The cadence, duration, and spatial resolution of the Roman galactic bulge time-domain survey (GBTDS) are well-suited for asteroseismology and will probe an important population not studied by prior missions. We identify photometric precision as a key requirement for realizing the potential of asteroseismology with Roman. A precision of 1 mmag per 15-min cadence or better for saturated stars will enable detections of the populous red clump star population in the Galactic bulge. If the survey efficiency is better than expected, we argue for repeat observations of the same fields to improve the photometric precision or, if the photometric precision for saturated stars is better than 1 mmag, for covering additional fields to expand the stellar population reach. Asteroseismology is relatively insensitive to the timing of the observations during the mission, and the prime red clump targets can be observed in a single 70 day campaign in any given field. Complementary stellar characterization, particularly astrometry tied to the Gaia system, will also dramatically expand the diagnostic power of asteroseismology. We also highlight synergies to Roman GBTDS exoplanet science using transits and microlensing.
§ BACKGROUND & MOTIVATION
Asteroseismology – the study of stellar oscillations – has been revolutionized by space-based time domain surveys such as CoRoT, Kepler/K2, and TESS. It has led to major breakthroughs in stellar astrophysics such as the discovery of rapidly rotating cores and magnetic fields in evolved stars <cit.> and the systematic measurement of stellar masses, radii, and ages <cit.>.
While the discussion here focuses on stochastically-excited solar-like oscillators, many of these processes can also be probed in classical pulsators such as OB stars, Cepheids, RR Lyrae and δ Scuti stars <cit.>, providing a window into not only stellar physics, but galactic astronomy and – through contributions to understanding the distance ladder – to cosmology.
Asteroseismology of red giants is a particularly powerful tool for studying stellar populations. This is because red giants oscillate with ≳ 0.1 mmag amplitudes and periods of hours to days, allowing detections with moderate cadence and photometric precision out to large distances. Global frequency properties can be used to infer masses, ages, and evolutionary states for large numbers of giants <cit.>. Kepler asteroseismology was used as a fundamental calibrator in spectroscopic surveys <cit.> and Gaia <cit.>, yielded age estimates of nearby thin and thick disc stars <cit.>, and has been used to age-date mergers of dwarf satellites <cit.>. It also uncovered unexpected populations, such as massive stars – typically an indication of youth – that have high alpha-capture to iron – typically, an abundance pattern indicating old age <cit.>. While the origin of this population is contested <cit.>, it provides an example of the value of asteroseismology for population studies.
However, asteroseismology has so far sampled only a limited portion of the galaxy. The seminal Kepler field is relatively nearby, and red giants had a complex selection function <cit.>.
The K2 mission covered populations along the ecliptic plane, but with a relatively modest total sample size and depth <cit.>. TESS provides an all-sky survey that is large but shallow <cit.> due to the small aperture. No space-based asteroseismic survey has so far sampled the crowded and distant stellar populations in the Galactic bulge. The bulge harbors a unique population born with high star formation efficiency in the earliest phases of Galactic evolution <cit.>. Much of what we do not know about the formation of the Milky Way stems from our current inability to probe stellar populations in the inner Galaxy, which harbors a non axisymmetric bar whose chemodynamical evolution is complicated and poorly understood. Exploring how the structure, kinematics and chemistry of the inner Galaxy depends on age offers a way to unlock its formation history. The bulge is also the closest analog to the spheroidal populations of other disk galaxies and entire elliptical galaxies, and thus will aid in understanding star formation in high-redshift galaxies currently studied with JWST. The Roman Galactic Bulge Time Domain Survey (GBTDS) provides the first opportunity to apply the powerful tool of asteroseismology to measure masses and ages of stellar populations in the galactic bulge. It will also serve as an important precursor for planned dedicated space-based asteroseismology missions in dense stellar environments <cit.>.
Figure <ref> illustrates the potential of the Roman GBTDS for red giant asteroseismology. The Roman yield was estimated with a Galaxia simulation <cit.> of a 2.8 square degree FOV (corresponding to ten pointings) centered at (l,b)=(0.5,-1.5), using a photometric precision (σ) model for saturated stars <cit.>, scaling relations for oscillation amplitudes (A) <cit.> and requiring SNR=A/(σ/√(N))>15 with N=41472 for six 70-day long campaigns with 15 minute cadence. Roman will for the first time perform space-based asteroseismology towards and beyond the galactic center, and is expected to increase the current yield by at least one order of magnitude (≈ 6 × 10^5 detections).
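For illustration, the detection criterion underlying this yield estimate can be written in a few lines of Python; the amplitude and noise values in the example call are placeholders rather than results of the scaling relations cited above.

```python
import numpy as np

def detection_snr(amplitude_mmag, sigma_mmag, n_cadences=41472):
    """SNR of the global oscillation signal used for the yield estimate (schematic)."""
    return amplitude_mmag / (sigma_mmag / np.sqrt(n_cadences))

# purely illustrative numbers, not results of the amplitude scaling relations:
snr = detection_snr(amplitude_mmag=0.1, sigma_mmag=1.0)
print(snr, snr > 15.0)  # a detection requires SNR > 15
```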
§ SCIENCE REQUIREMENTS
§.§ Photometric Precision
Oscillation amplitudes increase linearly with luminosity, ranging from a few parts per million in the Sun to a few percent at the tip of the red giant branch. Photometric precision is the primary driver for the feasibility of asteroseismology, and in most cases is more important than cadence for a fixed photon noise.
A large fraction of red giants in a given stellar population are Helium-core burning (“red clump”) stars, a long-lived phase of stellar evolution following the tip of the red giant branch. A red clump star in the galactic bulge has H≈ 13 mag and thus will saturate the Roman detector in a single read. Techniques developed for saturated star photometry with Kepler/K2 <cit.> will not be directly applicable to Roman due to the different behaviour of H4RG detectors. Photometric precision estimates for saturated stars with Roman either assume a nominal noise floor of 1 mmag / 15-min cadence <cit.>, or that the precision improves for brighter stars with careful modeling of the wings of the PSF <cit.>. Figure <ref> shows a simulated red clump star power spectrum using the original Kepler data and different assumptions for photometric precision. Oscillations are clearly detectable (SNR≳ 15) with the improved saturated star precision, but become nearly undetectable (SNR < 10) with a nominal noise of 1 mmag/15-min, which would reduce the yield by a factor of 3. Assuming a strategy where Roman can observe fields twice as fast (and thus increase the nominal precision by a factor √(2)) nearly compensates for the SNR loss and would result in a similar yield as with improved saturated star precision, partially because of additional detections for non-saturated stars.
We conclude that asteroseismology of red giant stars using the GBTDS requires a minimum photometric precision of 1 mmag / 15-min or better for saturated stars. Investigations of saturated star photometry will be critical for the success of GBTDS asteroseismology.
§.§ Observing Cadence, Field Selection, and Filters
Oscillation periods scale inversely with surface gravity, ranging from 5 minutes in Sun-like stars to weeks and months for evolved red giants. The nominal 15 min GBTDS cadence yields a Nyquist frequency of 560 μHz, which
is sufficient for stars in and above the red clump as well as most other pulsators, except compact pulsators (e.g. hot subdwarfs and white dwarfs) and rapidly oscillating Ap stars.
The upper cadence limit for red giants is 30 min, corresponding to the Nyquist frequency at the base of the red-giant branch.
Asteroseismic inference is often divided into “global asteroseismology”, which involves the determination of stellar radii, masses and ages, and “boutique asteroseismology”, which aims to infer interior properties such as rotation.
The latter will not be feasible with Roman given time baseline limitations, and require a dedicated asteroseismology mission in crowded fields such as HAYDN <cit.>.
The former is sufficient to study stellar populations, and relies on sampling a large area to probe the formation history of our galaxy. Recent results using luminous giants from ground-based surveys have demonstrated the strong potential to study the galactic bulge, providing the first kinematic map of the far side of the galactic bar <cit.> (Figure <ref>). However, ground-based surveys only reach very luminous stars for which asteroseismic mass and age measurements are not possible. Roman will extend this success by enabling mass and age measurements over similar distance ranges (see Figure <ref>). Increasing the number of fields observed by the Roman GBTDS would significantly enhance the asteroseismic science yield. For example, additional fields covering a range of longitudes would help explore the horizontal structure of the non-axisymmetric bar, and fields at a wider range of latitudes would provide insights into the vertical structure of stellar populations in the bulge.
Red giant asteroseismology is relatively insensitive to the timing of the GBTDS campaigns throughout the mission. However, pulsation timing for classical pulsators such as δ Scuti stars <cit.> would benefit from having campaigns spaced as widely in time as possible.
Oscillation amplitude increases for blue wavelengths <cit.>.
Compared to Kepler, oscillation amplitudes decrease by ≈ 65% in the Roman F149 filter <cit.>. Detecting oscillations in multiple passbands can be used for mode identification, which is important for classical pulsators. Observing the same fields in multiple filters thus provides benefit for the time-domain study of classical pulsators (>1.3), but overall has lower priority than increasing the number of fields.
§.§ Astrometry and Saturated Star Photometry
Global asteroseismology collapses the oscillation spectrum into two properties: the frequency of maximum power ν_ max and the average frequency spacing Δν. When combined with T_ eff and abundances, these data can be used to infer mass, radius, and age. However,
we will be able to measure ν_ max for many more targets than those for which Δν can be measured.
Fortunately, there is an alternative: one can combine an independent radius with ν_ max to infer mass without Δν. Radii can be inferred from a combination of Gaia luminosity and T_ eff. The resulting mass uncertainties are competitive with those from full asteroseismic characterization <cit.>. In the bulge, extinction map uncertainties in the optical will result in large errors in Gaia luminosities.
However, differential Roman astrometry could be used to infer precise relative parallaxes <cit.>; when tied to the Gaia system, this could be translated to precise radii. Since red clump stars have similar intrinsic T_ eff, their radii can be inferred even without detailed spectroscopic data.
Red clump stars are saturated at the distance of the Galactic center, which could be used to centroid images using diffraction spikes <cit.>. Both the detection of oscillations and precise Roman parallaxes will therefore benefit from an emphasis on precise and accurate saturated star astrometry and photometry.
§ SYNERGIES WITH EXOPLANET SCIENCE
§.§ Characterization of Stellar Populations for Transiting Exoplanet Demographics
The Roman GBTDS will detect tens of thousands of transiting exoplanets <cit.>.
Because most asteroseismic detections will be in red clump stars, stars with detected oscillations and transiting planets will be rare. However, GBTDS asteroseismology will constrain the underlying stellar population in the galactic bulge, which will be important for interpreting the exoplanet yield. For example, the demographics of transiting exoplanet hosts (such as their distances and association to the thin and thick disc populations) will be sensitive to the importance of host star abundances on planet formation <cit.>. Asteroseismology will complement this by mapping the mass and age distributions of the bulge, thin disc, and thick disc populations in the same galactic regions where transiting exoplanets will be found.
§.§ Characterization of Microlensing Lens and Source Stars
Characterizing exoplanets using microlensing requires knowledge of distances to the lens and the source stars.
Distances to source stars are typically assumed since in most cases the lens and source star cannot be resolved. In some cases, however, the source star will be an evolved red giant, which will result in an oscillation signal in the microlensing light curve. Stellar oscillations can be used to derive precise distances by constraining the luminosity, and thus help constrain the properties of planets detected using microlensing. First detections have already been made using OGLE (Figure <ref>) <cit.>.
Time-domain stellar variability from oscillation and granulation imprinted on microlensing light curves may allow the systematic characterization of lens and source stars in microlensing detections with the Roman GBTDS.
§ CONCLUSIONS
There are strong links between the goals for exoplanet demographics and asteroseismology with the Roman GBTDS. The most natural linkage is the study of red clump (core He-burning) stars in the Galactic bulge.
Relative to microlensing studies, asteroseismology is more sensitive to photometric precision, and prime asteroseismic targets will likely be saturated at the distance of the Galactic center. As a result, controlling photometric noise at the level of 1 mmag/15-min cadence or better is a key scientific requirement for Roman GBTDS asteroseismology.
If saturated star photometry is limited to 1 mmag/15-min cadence, observing the nominal fields at higher cadence will be important to reduce noise. If saturated star photometry performs better than 1 mmag/15-min cadence, observing additional fields will increase the scientific yield by probing additional stellar populations. The minimum cadence requirement is 30 minutes.
A second synergy is precise and accurate astrometric data for saturated stars.
In particular, tying relative Roman astrometry to the Gaia system is a promising approach that could yield reliable mass estimates for low signal to noise asteroseismic detections.
The requirements for red giant asteroseismology with the Roman GBTDS will map to most other types of pulsators in the H-R diagram. Asteroseismology will also provide natural synergies with exoplanet science, including both a deeper understanding of the underlying stellar populations and the characterization of lens and source stars in microlensing events.
|
http://arxiv.org/abs/2307.01944v1
|
20230704222620
|
Text + Sketch: Image Compression at Ultra Low Rates
|
[
"Eric Lei",
"Yiğit Berkay Uslu",
"Hamed Hassani",
"Shirin Saeedi Bidokhti"
] |
cs.LG
|
[
"cs.LG",
"cs.CV",
"cs.IT",
"math.IT"
] |
Text + Sketch: Image Compression at Ultra Low Rates
Eric Lei, Yiğit Berkay Uslu, Hamed Hassani, Shirin Saeedi Bidokhti
Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
Correspondence: Eric Lei ([email protected])
Recent advances in text-to-image generative models provide the ability to generate high-quality images from short text descriptions. These foundation models, when pre-trained on billion-scale datasets, are effective for various downstream tasks with little or no further training. A natural question to ask is how such models may be adapted for image compression. We investigate several techniques in which the pre-trained models can be directly used to implement compression schemes targeting novel low rate regimes. We show how text descriptions can be used in conjunction with side information to generate high-fidelity reconstructions that preserve both semantics and spatial structure of the original. We demonstrate that at very low bit-rates, our method can significantly improve upon learned compressors in terms of perceptual and semantic fidelity, despite no end-to-end training.
§ INTRODUCTION
Recent works from the lossy compression literature have demonstrated that when human satisfaction or semantic visual information is prioritized, compression schemes that manually encode images using human-written text descriptions as the compressed representation <cit.> yield significant improvements compared to traditional compressors. These works show that when operating at such low bit-rates, high levels of human satisfaction can still be achieved despite low pixel-wise fidelity. <cit.> argues that transmitting the compressed information directly in the form of human language, known as textual transform coding, encodes information that scales with the semantic content in the image as interpreted by a human, rather than pixel-wise content.
Concurrent work in text-to-image generative models have provided the ability to generate high-quality images that represent the semantic information of the text across many domains <cit.>. These models, when scaled to orders of magnitude larger parameter counts and billion-scale datasets, have achieved remarkable capabilities in terms of converting language concepts to high quality images when assessed by humans. At such scale, these foundation models provide impressive zero-shot capabilities, allowing them to be used as a backbone when designing models for tasks not explicitly trained for.
Prior neural compression paradigms, such as generative compression, attempt to align its reconstructions with human assessment at low bit-rates by enforcing a distribution matching constraint. In contrast, our work investigates neural compression schemes that target human satisfaction by directly transmitting text containing human-aligned semantic information. By leveraging the recent advances in pre-trained foundation models that operate with vision and language, we demonstrate how neural compression can benefit from the scale of such models, whereas similarly scaled neural compressors would require extensive resources to train end-to-end.
Directly using an off-the-shelf text-to-image model (with no further training) to implement a textual transform code can yield good results in terms of preserving coarse semantic information at very low bit-rates. However, current language-vision models, typically built on top of CLIP <cit.>, are limited in the amount of semantic concepts they can synthesize, especially pertaining to the spatial placement of objects. As shown in Fig. <ref>, when sending a text that is CLIP-optimized as the compressed representation, coarse semantic information is kept, but lower-level details of the image such as the placement of objects is poor. We show how transmitting limited side information in the form of a sketch can preserve lower-level structures. Our full contributions are as follows.
* We design a neural compressor that uses text-to-image models in a zero-shot manner to implement compression schemes preserving human semantics at rates below 0.003 bits-per-pixel (bpp), which is an order of magnitude lower than previously studied regimes.
* We show how side information in the form of a compressed spatial conditioning map can be used to provide the high-level structural information in the image along with a transmitted text caption, producing reconstructions that improve structural preservation.
* We show that our schemes outperform state-of-the-art generative compressors in terms of semantic and perceptual quality, despite no end-to-end training.
§ RELATED WORK
Neural Compression.
The use of neural networks to design lossy compressors was initiated by merging quantization with autoencoder architectures <cit.>. These models are traditionally trained with reference distortion metrics such as MSE, MS-SSIM <cit.>, and LPIPS <cit.>. However, reconstructions suffer from blurriness at low bit-rates, motivating the field of generative compression <cit.>. In this field, distortion can be sacrificed for perceptual quality <cit.>, measured as alignment between the source and reconstruction distributions. This improves human satisfaction in the rate regime of <0.1 bpp, where compressors tuned for pixel-wise distortions fail to generate realistic reconstructions. At such low bitrates, pixel-wise fidelity metrics fail to align with human perception, since they largely focus on low-level details rather than higher-level structures. Generative compression thus allows for realistic but not necessarily faithful (with respect to a distortion measure) reconstructions. However, it poses realism in terms of a distribution matching formulation, which can offer some alignment with human satisfaction; textual transform coding instead attempts to directly encode human-aligned semantic information in the form of language.
Text-to-Image Models.
While many architectures have been studied for text-to-image generation, such as VAEs <cit.> and GANs <cit.>, diffusion models have become the method of choice due to easier scaling to massive datasets <cit.>. These methods typically leverage CLIP <cit.>, a pre-trained model that provides a shared text-image embedding space, to retrieve an embedding corresponding to the input text. The diffusion model uses this embedding as a conditional input to denoise randomly sampled noise into an image corresponding to the text. Our work does not necessarily require diffusion models per se; it can use any foundation model that can generate images from text, pre-trained at scale.
Diffusion-based neural compressors have also been investigated <cit.>. Rather than transmit text, these models transmit a quantized embedding as the conditional input to the diffusion-based decoder. DiffC <cit.> directly transmits pixels corrupted by noise in a diffusion process. Contrary to these models, our proposed compressor uses fully pre-trained text-to-image models, transmits text directly as a compressed representation for the conditional input, and utilizes a spatial conditioning input as side information.
Human Compression.
<cit.> demonstrates a hand-crafted compression scheme in which humans write down text descriptions of the image to compress; the decoder consists of another human who has access to a database of images and image editing software. Human-rated scores for this scheme were higher than WebP at similar rates, despite the fact that the reconstructions may not necessarily be faithful at the pixel-level. Building off these results, <cit.> conjectures that human satisfaction is a function of pixel-level fidelity with a semantic fidelity, which can be interpreted via human language. At large rates, pixel-wise fidelity dominates human satisfaction; at low rates, pixel-wise fidelity becomes less meaningful when compared to the “textual” information of the image.
§ TRANSMITTING TEXT WITH SIDE INFORMATION
§.§ Textual Transform Coding via Prompt Inversion
Textual transform coding <cit.> represents the image using a text description, which gets encoded with a lossless compressor. The decoder first recovers the text, which is used to synthesize the reconstructed image. Our decoder is assumed to be some text-to-image model G that is pre-trained on a large-scale dataset. In this section, we use Stable Diffusion (SD) <cit.> for G.
One option to encode an image into text is via image captioning methods <cit.>. However, most image captioning methods such as <cit.> produce text captions that align with human language, but may not necessarily be optimal for the text-to-image model. Since SD uses pre-trained CLIP for text embeddings, it is more meaningful to directly search in the embedding space of CLIP in order to find text that represents an image for SD.
Following <cit.>, we use prompt inversion (PI), which performs projected gradient search in CLIP's embedding space, using cosine similarity between the image embedding and the text embedding as the objective. To project to a hard text, the nearest CLIP embedding is found for each token being searched over. The tokens are converted to text and losslessly compressed. At the decoder, the decoded text is simply provided to G which synthesizes a reconstructed image. We call this method Prompt Inversion Compression (PIC). PIC can achieve very low rates (around 0.002-0.003 bpp), yet preserve semantic information, since CLIP itself has semantic image comparison capabilities due to its vision-text merged feature space.
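A schematic PyTorch version of prompt inversion is sketched below. It assumes the CLIP image embedding, the token-embedding table of the text encoder, and a text-encoder forward pass that accepts token embeddings are available as inputs; these interfaces are placeholders rather than the exact PEZ or CLIP API.

```python
import torch
import torch.nn.functional as F

def prompt_inversion(image_emb, vocab_emb, encode_text_from_emb,
                     num_tokens=16, steps=1000, lr=0.1):
    """PEZ-style projected-gradient prompt inversion (schematic).

    image_emb            -- CLIP embedding of the image to compress (assumed precomputed)
    vocab_emb            -- [V, d_tok] token-embedding table of the text encoder
    encode_text_from_emb -- placeholder: text-encoder forward pass on token embeddings
    """
    idx = torch.randint(0, vocab_emb.shape[0], (num_tokens,))
    soft = vocab_emb[idx].detach().clone().requires_grad_(True)  # continuous prompt
    opt = torch.optim.Adam([soft], lr=lr)

    for _ in range(steps):
        # project each soft token onto its nearest vocabulary embedding (hard prompt)
        nearest = torch.cdist(soft, vocab_emb).argmin(dim=1)
        hard = vocab_emb[nearest]
        # straight-through estimator: forward with the hard prompt, backprop to the soft one
        prompt_emb = soft + (hard - soft).detach()
        text_emb = encode_text_from_emb(prompt_emb)
        loss = 1.0 - F.cosine_similarity(text_emb, image_emb, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # final hard prompt: token ids to be converted to text and losslessly compressed
    return torch.cdist(soft, vocab_emb).argmin(dim=1)
```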
An interesting fact of language-vision models such as SD is that quantization is naturally built into the model, where the language to vision conversion takes place. Text, after converted to tokens, is directly mapped to a codebook of embedding vectors. Thus, one can interpret prompt inversion as the encoder searching for the best CLIP codeword.
§.§ Adding Spatial Conditioning Maps
One challenge with using PIC is that it is difficult to increase reconstruction quality as the bitrate of text increases. As shown in <cit.>, increasing the number of tokens after a certain point fails to improve the CLIP score of the reconstructed image. Rather than attempting to increase the textual information in a way that G can process, we instead propose to send side information in the form of a “sketch” of the original image, which contains finer structural information.
In this setting, we choose G to be ControlNet <cit.>, a text-to-image model built on top of SD that can process spatial conditioning maps in the form of edge detection maps, segmentation maps, depth maps, etc. It ensures that the reconstructed images follow the spatial structure of the input map and the style suggested by the text prompt. We use ControlNet as our decoder by sending a compressed version of the edge detection map (i.e., the sketch) as side information in addition to the prompt inversion text. In particular, we use the variant of ControlNet trained with Holistically-nested Edge Detection (HED) maps <cit.>, since those were found to have lower rate-distortion compared to Canny edge and segmentation maps. To compress the sketch, we use standard learned nonlinear transform codes (NTC) <cit.> trained on a small dataset of HED maps. We call this scheme Prompt Inversion Compressor with Sketch (PICS), shown in Fig. <ref>.
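At a high level, the PICS encoder and decoder can be summarized as follows; all callables are placeholders standing in for the HED detector, the learned sketch codec, the prompt-inversion routine, and ControlNet, not the released code.

```python
def pics_encode(image, hed_detector, ntc_encoder, prompt_inverter, text_codec):
    """Encoder side of PICS (all callables are placeholders, not a released API)."""
    sketch = hed_detector(image)              # HED edge map of the source image
    sketch_bits = ntc_encoder(sketch)         # learned transform code for the sketch
    tokens = prompt_inverter(image)           # CLIP-optimized prompt tokens (PI)
    text_bits = text_codec.compress(tokens)   # lossless coding of the tokens
    return text_bits, sketch_bits

def pics_decode(text_bits, sketch_bits, text_codec, ntc_decoder, controlnet):
    """Decoder side: ControlNet synthesizes an image from prompt + sketch."""
    tokens = text_codec.decompress(text_bits)
    sketch_hat = ntc_decoder(sketch_bits)
    return controlnet(prompt=tokens, conditioning=sketch_hat)
```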
§ EXPERIMENTAL RESULTS
§.§ Setup
Datasets and Evaluation: We use three evaluation datasets: Kodak <cit.>, CLIC 2021 <cit.> test, and DIV2K <cit.> validation. Since textual transform coding operates in an order of magnitude lower regime than even “extreme” compression (<0.1bpp), pixel-wise reference distortion metrics (PSNR, MS-SSIM, LPIPS) are not as meaningful. As human-aligned semantic reference metrics are still an open problem <cit.>, we use cosine similarity of CLIP embeddings as a proxy,
d_CLIP(x, x̂) = 1 - (e(x) · e(x̂))/(‖e(x)‖ ‖e(x̂)‖),
where e(·) is the image encoder of CLIP. Ideally, a human study would be performed, which we leave for future work. In addition, we use standard no-reference metrics to measure realism according to distributional alignment, FID <cit.> and KID <cit.>.
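Given precomputed CLIP image embeddings for the source and the reconstruction, d_CLIP reduces to a one-line cosine distance, e.g.:

```python
import torch.nn.functional as F

def d_clip(x_emb, xhat_emb):
    """CLIP-space distance between source and reconstruction embeddings."""
    return 1.0 - F.cosine_similarity(x_emb, xhat_emb, dim=-1)
```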
Baseline Methods: These include a generative compression baseline, HiFiC <cit.>, and a NTC baseline <cit.> optimized for MS-SSIM.
PIC/PICS: See appendix.
§.§ Results
Quantitative Results: As shown in Figs. <ref> and <ref>, we compare PIC/PICS in terms of rate-perception and rate-distortion (with distortion measured by d_CLIP). In this low-rate regime, HiFiC achieves better rate and semantic and perceptual quality than MS-SSIM-trained NTC models. However, PICS improves upon it further, with strict improvements in all tradeoffs. Interestingly, while PIC also strictly improves the rate-perception tradeoff, it performs worse in terms of semantic quality than PICS and HiFiC (albeit at a lower rate). This shows that adding the sketch actually helps the generative model achieve higher semantic quality.
Qualitative Results:
We visualize several reconstruction examples for all models and compare them with the ground truths in Figs. <ref>, <ref>, <ref>, <ref>, and <ref>. In general, PIC is able to reconstruct very coarse concepts contained in the ground-truth image. The NTC model optimized for rate-distortion yields blurry reconstructions in the low-rate regime. HiFiC improves realism, producing a sharper image with perhaps different textures than the original. In some cases, there are still compression artifacts, since HiFiC is not operating in the (near)-perfect realism regime. PICS is able to recover the high-level spatial structure of the ground truth with superior sharpness, but synthesizes different textures or colors in the image. For example, Fig. <ref> shows how PICS generates a house in front of a mountain of similar shape, but completely changes the color and style of the house as well as the composition of the mountainside. Additionally, the PI-encoded prompts mostly recover semantic concepts, as shown in Figs. 6-8.
§ CONCLUSION
In this paper, we use pretrained text-to-image models to construct a compressor that transmits a short text prompt and compressed image sketch. The only training required is to learn a lightweight learned compressor on HED sketches. Experimental results demonstrate superior performance in terms of semantic and perceptual quality. Current and future work includes a human study to evaluate human satisfaction of reconstructed images.
§ VISUAL RECONSTRUCTIONS
We place visual reconstructions referenced in the main text here.
§ IMPLEMENTATION DETAILS
§.§ Baselines and Evaluation
For HiFiC, we use an open-source implementation[<https://github.com/Justin-Tan/high-fidelity-generative-compression>] pre-trained on OpenImages <cit.> to a target bitrate of 0.14 bpp. We then fine-tune on a subset of OpenImages with a target bitrate of 0.01 bpp, by setting λ^(a) = 32, 64. For the NTC baseline, we use a model pre-trained[<https://interdigitalinc.github.io/CompressAI/>] on Vimeo90K <cit.>, fine-tuned on the same dataset for a target bitrate of 0.01 bpp.
To compute FID and KID, we use the torch-fidelity[<https://github.com/toshas/torch-fidelity>] <cit.> package.
§.§ PIC/PICS
For PI, we set the prompt length to 16 tokens, following the ablation study in <cit.>. To compress the HED sketch, we train a lightweight NTC model <cit.> on HED maps from Vimeo90K under MS-SSIM distortion, targeting a bitrate of 0.01 bpp. We found that using MS-SSIM yielded better reconstructions from ControlNet compared to PSNR.
We use HuggingFace's diffusers library <cit.> to run inference on SD and ControlNet. Although SD and ControlNet use many more parameters than NTC or HiFiC, one does not need to train these foundation models. Furthermore, with recent advances in efficient inference of diffusion models <cit.>, inference can be run efficiently on a single commodity GPU without using excessive memory. The code will be made available at <https://github.com/leieric/Text-Sketch>.
|
http://arxiv.org/abs/2307.03079v1
|
20230706154740
|
A Robust Characterization of Nash Equilibrium
|
[
"Florian Brandl",
"Felix Brandt"
] |
econ.TH
|
[
"econ.TH",
"cs.GT"
] |
We give a robust characterization of Nash equilibrium by postulating coherent behavior across varying games: Nash equilibrium is the only solution concept that satisfies consequentialism, consistency, and rationality. As a consequence, every equilibrium refinement violates at least one of these properties. We moreover show that every solution concept that approximately satisfies consequentialism, consistency, and rationality returns approximate Nash equilibria. The latter approximation can be made arbitrarily good by increasing the approximation of the axioms. This result extends to various natural subclasses of games such as two-player zero-sum games, potential games, and graphical games.
§ INTRODUCTION
More than 70 years after the publication of Nash's original work, the concept of Nash equilibrium has been engraved in economic reasoning so deeply that it is rarely questioned.
But what makes Nash equilibrium stand out from the plethora of solution concepts that have been proposed?
The game theory literature has come up with various answers to this question based on different approaches.
In this paper, we take a normative approach: we consider solution concepts for games in normal form with a fixed number of players and formulate conditions for solution concepts that capture coherent behavior across different games.
The solution concepts we consider map every (finite) multi-player game to a non-empty set of (mixed) strategy profiles.
Our first of three axioms requires that the labels of actions are irrelevant—only the payoffs matter.
Call two actions of a player clones if, irrespective of the other players' actions, they give the same payoff to all players.
In other words, clones are outcome-equivalent and only discernible by their labels.
Consequentialism.
A player can shift probability arbitrarily between clones, and deleting a clone neither changes the probabilities assigned to the player's other actions nor the strategies of the other players.
If a solution concept satisfying consequentialism returns a strategy profile and we modify the game by cloning an action, then the solution concept has to return all strategy profiles in which the probabilities on the player's uncloned actions and the other players' strategies are unchanged.
The second axiom is motivated by situations where the players are uncertain which game will be played.
Consistency.
Every strategy profile that is played in two given games with the same sets of actions for each player is also played when a coin toss decides which of the two games is played and the players choose their strategies before the coin toss.
Instead of modeling the randomization explicitly, we assume that a coin toss between two games is equivalent to the convex combination of these games.
Third, we stipulate a weak notion of rationality.
Rationality.
Players never put positive probability on actions that are dominated in pure strategies.
That is, if one action of a player has a higher payoff than another action irrespective of the other players' strategies, the latter is played with probability 0.
In fact, it suffices for our characterization to assume that dominated actions receive probability at most, say, one half.
Our main result characterizes Nash equilibrium as the unique solution concept that satisfies consequentialism, consistency, and rationality.
In particular, players' behavior has to be consistent with expected utility maximization, which is not apparent from the axioms.
Moreover, every refinement of Nash equilibrium violates at least one of the axioms.
Our second result shows that this characterization is robust: every solution concept that approximately satisfies the three axioms is approximately Nash equilibrium.
This type of stability result is common in many areas of mathematics.[A classic example is isoperimetric stability.
In Euclidean space, a ball is characterized as the unique volume-maximizing shape among all well-behaved shapes with the same surface area.
Isoperimetric stability strengthens this assertion by showing that any shape that is close to volume maximizing has to resemble a sphere.]
To make this precise, we formulate quantitatively relaxed versions of the axioms.
δ-consequentialism demands that a player can shift probability arbitrarily between clones and deleting a clone
does not change the probability on any player's action (except the cloned action) by more than δ.
Similarly, δ-consistency requires that if a strategy profile is played in two games, some strategy profile that differs by no more than δ is played in any convex combination of the two games.
Finally, δ-rationality implies that actions that are dominated in pure strategies by at least a margin of δ are played with probability at most one half.
We show that for every positive , there exists a positive δ such that every solution concept that satisfies the δ-versions of consequentialism, consistency, and rationality is a refinement of -Nash equilibrium.
This result implies one direction of our main theorem.
Moreover, it holds for various natural subclasses of games such as two-player zero-sum games, potential games, or graphical games.
§ RELATED WORK
Which assumptions can be used to justify Nash equilibrium has been primarily studied in epistemic game theory.
In this stream of research, the knowledge of individual players is modeled using Bayesian belief hierarchies, which consist of a game and a set of types for each player with each type including the action played by this type and a probability distribution over types of the other players, called the belief of this type <cit.>. Rather than assuming that players actively randomize, the beliefs about the types of the other players are randomized. Players are rational if they maximize expected payoff given their types and beliefs.
<cit.> have shown that for two-player games the beliefs of every pair of types whose beliefs are mutually known and whose rationality is mutually known constitute a Nash equilibrium.
This result extends to games with more than two players if the beliefs are commonly known and admit a common prior. Common knowledge assumptions in game theory have been criticized for not adequately modeling reality <cit.>.
<cit.>, <cit.>, and <cit.> showed that the results of <cit.> still hold under somewhat weaker common knowledge assumptions.
Building on earlier work by <cit.>, <cit.> have characterized Nash equilibrium via one-player rationality (only utility-maximizing strategies are returned in one-player games) and a consistency condition that is orthogonal to ours because it varies the set of players. Their condition requires that every strategy profile s returned for an n-player game is also returned for the (n-k)-player game that results when k players invariably play their strategies in s. The two axioms immediately imply that only subsets of Nash equilibria can be returned. Their results have no implications for games with a fixed number of players.
The work most closely related to ours is due to <cit.> who have characterized maximin strategies in two-player zero-sum games by consequentialism, consistency, and rationality.
The differences between their results and ours are as follows.
Solution concepts as considered in that work return a set of strategies for one player, rather than a set of strategy profiles.
Noting that Nash equilibria in zero-sum games consist of pairs of maximin strategies, their result translates to the terminology of the present paper as follows:
in zero-sum games, every solution concept that satisfies consequentialism, consistency, and rationality returns an (exchangeable) subset of Nash equilibria.[A set of strategy profiles is exchangeable if it is a Cartesian product of a set of strategies for each player.]
Our main theorem, <Ref>, is stronger since it (i) holds for any number of players, (ii) shows that all Nash equilibria have to be returned (and thus rules out equilibrium refinements), and (iii) is not restricted to games with rational-valued payoffs and rational-valued strategies (assumptions needed for their proof).
Moreover, we show that the containment in the set of Nash equilibria (iv) also holds for restricted classes of games (cf. <Ref>), and (v) is robust with respect to small violations of the axioms (<Ref>).
§ THE MODEL
Let U be an infinite universal set of actions and denote by ℱ(U) the set of finite and nonempty subsets of U.
For A∈ℱ(U), let Σ_A be the set of all permutations of U that fix U ∖ A pointwise.
If p∈ℝ_+^U, supp(p) = {a∈ U : p(a) > 0} denotes the support of p.
Moreover, let
Δ A = {p∈ℝ_+^U : supp(p)⊆ A and ∑_a∈ A p(a) = 1}
be the set of probability distributions on U that are supported on A.
We call Δ A the set of strategies for action set A.
For two strategies p,q∈Δ A, let ‖p - q‖ = ∑_a∈ A |p(a) - q(a)| be their ℓ_1-distance.
The ball of radius δ > 0 around a set S⊆Δ A is B_δ(S) = {p∈Δ A : inf{‖p - q‖ : q ∈ S} < δ}, the set of strategies supported on A and less than δ away from some strategy in S.[Note that B_δ(S) depends on A. In our usage, A will be clear from the context.]
Let N = {1,…,n} be the set of players.
For action sets A_1,…,A_n∈ℱ(U), we write A = A_1×…× A_n for the corresponding set of action profiles.
A game on A is a function G A→ℝ^n.
For i∈ N and a∈ A, G_i(a) is the payoff of player i for the action profile a.
We say that G is normalized if for every player i, either G_i has minimum 0 and maximum 1 or is constant at 1.
Strategies and norms extend to A as follows: □ A = Δ A_1×…×Δ A_n and for p,q∈□ A, ‖p - q‖ = max_i∈ N ‖p_i - q_i‖.
We call □ A the set of (strategy) profiles.
The players' payoffs for a strategy profile are the corresponding expected payoffs.
Thus, a strategy profile p∈□ A is a Nash equilibrium of G if
G_i(p_i,p_-i) ≥ G_i(q_i,p_-i) for all q_i∈Δ A_i and i∈ N.
For two games G and G' on A = A_1 ×…× A_n and A' = A_1' ×…× A_n', we say that G is a blow-up of G' (G' is a blow-down of G) if every action of every player in G is payoff-equivalent to one of her actions in G'.
That is, there are surjective functions ϕ_i A_i→ A_i', i∈ N, such that with ϕ = (ϕ_1,…,ϕ_n), G = G'∘ϕ.
Actions in ϕ_i^-1(a_i') for a_i'∈ A_i' are called clones of a_i'.
Put differently, G is obtained from G' by “blowing up” each action a_i' to |ϕ_i^-1(a_i')| clones of it.[<cit.> have considered a more permissive notion of “blowing down” in the context of Nash equilibrium refinements for extensive-form games. Their notion of a reduced form of a normal-form game allows deleting any action that is a convex combination of other actions.]
A strategy p_i∈Δ A_i induces a strategy on A_i' via the pushforward of ϕ_i: (ϕ_i)_*(p_i) = p_i∘ϕ_i^-1.
Then, a strategy profile p∈□ A induces ϕ_*(p) = ((ϕ_1)_*(p_1),…,(ϕ_n)_*(p_n)).
A solution concept f maps every game to a set of strategy profiles.
That is, f(G) ⊆□ A for a game G on A∈ℱ(U)^n.
If f(G)≠∅ for all G, f is a total solution concept.
An example of a total solution concept is NE, which returns all strategy profiles that constitute Nash equilibria. <cit.> has shown that every game admits at least one Nash equilibrium.
NE(G) = {p∈□ A : p is a Nash equilibrium of G}.
§ CHARACTERIZATION OF NASH EQUILIBRIUM
This section defines our axioms and states the characterization of Nash equilibrium along with the more illuminating part of its proof.
The remainder of the proof and all other proofs are given in the Appendix.
Consequentialism requires that if G is a blow-up of G', a strategy profile is returned in G if and only if its pushforward is returned in G'.
Equivalently, it asserts that (i) cloning an action does not change the probabilities of other actions and the strategies of the other players, and (ii) the probability on the cloned action can be distributed arbitrarily on its clones.
A solution concept f satisfies consequentialism if for all games G and G' such that G is a blow-up of G' with surjection ϕ = (ϕ_1,…,ϕ_n),
f(G) = ϕ_*^-1(f(G')).
Consequentialism is a common desideratum in decision theory.
It corresponds to the conjunction of 's () Postulate 6 (cloning of a player's actions) and Postulate 9 (cloning of Nature's states, i.e., of opponent's actions).
The latter also appears as column duplication <cit.> and deletion of repetitious states <cit.>.
In the context of social choice theory, a related condition called independence of clones was introduced by <cit.> <cit.>.
If G and G' are games on the same action sets, then ϕ_i permutes the actions of player i.
This special case of consequentialism, called equivariance, implies that relabeling the actions of a player results in the same relabeling of her strategies.
A solution concept f satisfies equivariance if for all games G on A and all π = (π_1,…,π_n) where π_i is a permutation of A_i,[For a strategy p_i∈Δ(A_i), p_i∘π_i is the strategy with (p_i∘π_i)(a_i) = p_i(π_i(a_i)). For a strategy profile p = (p_1,…,p_n), p∘π = (p_1∘π_1,…,p_n∘π_n), and this operation extends to sets of strategy profiles pointwise.]
f(G∘π) = f(G) ∘π.
We will frequently apply equivariance to strategy profiles where each player's strategy is the uniform distribution on some subset of her actions and the permutations map each action to an action with the same probability, thus giving a new game for which the same strategy profile is returned.
Consistency requires that if a strategy profile is returned in several games with the same action sets, it should also be returned in any convex combination of these games. An inductive argument shows that this is equivalent to the restriction of the axiom to convex combinations of only two games.
A solution concept f satisfies consistency if for all games G^1,…,G^k on A and any λ∈ℝ_+^k with ∑_jλ_j = 1,
f(G^1)∩…∩ f(G^k)⊆ f(λ_1 G^1 + … + λ_kG^k).
We are not aware of game-theoretic work using this consistency axiom other than that by <cit.>.
considers combinations of decision-theoretic situations obtained by taking unions of action sets.
His Postulate 9 states that any action that is chosen in two situations should also be chosen in such a combination.
In our context, this translates to a consistency condition on the support of strategies and varying sets of actions.
Closer analogs of consistency, involving convex combinations of distributions over states (i.e., strategies of Nature), have been considered as decision-theoretic axioms <cit.>.
Shapley's (1953) characterization of the Shapley value involves an additivity axiom (which he calls law of aggregation) that is similar in spirit to consistency.
Lastly, analogs of consistency feature prominently in several axiomatic characterizations in social choice theory, where it relates the choices for different sets of voters to each other <cit.>.
For a game G on A and two actions a_i,a_i'∈ A_i, we say that a_i dominates a_i' if G_i(a_i,a_-i) > G_i(a_i',a_-i) for all a_-i∈ A_-i; a_i is undominated if no action dominates it.
Rationality requires that only strategy profiles in which dominated actions receive probability at most 1/2 are returned.[For <Ref>, it suffices to require that the probability on dominated actions is below 1. When studying the robustness of this characterization in <Ref>, it becomes important to bound the probability on dominated actions away from 1. Other than that, any fixed bound smaller than 1 in place of 1/2 would work as well.]
A solution concept f satisfies rationality if for all games G,
f(G)⊆ B_1/2(Δ(Â_1)) ×…× B_1/2(Δ(Â_n)),
where Â_i denotes the set of undominated actions of player i in G.
Note that rationality is not concerned with mixed strategies and thus does not rely on expected payoffs.
Moreover, it does not need any assumptions about other players.
The strengthening of rationality requiring that dominated action receive probability 0 is equivalent to 's () strong domination, 's () Property (5), and weaker than 's () Postulate 2.
As a shorthand, we say that a solution concept is nice if it satisfies these three axioms.
It turns out that Nash equilibrium is the only nice total solution concept.
theoremnashtheorem
Let f be a total solution concept that satisfies consequentialism, consistency, and rationality.
Then, f = NE.
Totality is only required for the inclusion NE ⊆ f. The converse inclusion f ⊆ NE also holds for solution concepts that fail to be total.
One consequence of <Ref> is that every refinement of Nash equilibrium violates at least one of the axioms (including totality).
We discuss some examples.
Since rationality is preserved under taking subsets, every refinement satisfies rationality.
Quasi-strict equilibrium <cit.> satisfies consistency, and, for two players, is total <cit.>.
However, it violates consequentialism (even for two players) since it does not allow for the possibility that clones of equilibrium actions are played with probability 0.
A trivial example is a game where all players' utility functions are constant for all action profiles.
Then, every full support strategy profile is a quasi-strict equilibrium, whereas consequentialism requires that every strategy profile is returned.
For three or more players, quasi-strict equilibria may not exist.
Trembling-hand perfect equilibrium <cit.> is total and satisfies consequentialism.
Consequently, it is not consistent.
Lastly, strong equilibrium <cit.> and coalition-proof equilibrium <cit.> also satisfy consequentialism and violate consistency (and fail to be total even for two players).
All properties in <Ref> are required to derive the conclusion.
For each of the four axioms (including totality), there is a solution concept different from that satisfies the three remaining axioms.
* Consequentialism: return all strategy profiles in which every player randomizes only over actions that are best responses against uniformly randomizing opponents;
satisfies totality, consistency, and rationality but violates consequentialism.
* Consistency: return all strategy profiles in which every player randomizes only over actions that maximize this player's highest possible payoff;
satisfies totality, consequentialism, and rationality but violates consistency.
* Rationality: return all strategy profiles that maximize the sum of all players' payoffs;
satisfies totality, consequentialism, and consistency but violates rationality.
* Totality: return all strategy profiles whose pushforwards are pure Nash equilibria in a blowdown of the original game; satisfies consequentialism, rationality, and consistency but violates totality.
The first three examples are neither contained in nor contain NE. The last one is necessarily a refinement of NE due to <Ref>.
Further examples that are refinements or coarsenings of NE are not hard to find.
Quasi-strict equilibrium (for two players) violates consequentialism but satisfies consistency and rationality as discussed in <Ref>.
Trembling-hand perfect equilibrium is not consistent but satisfies the other two axioms.
The trivial solution concept returning all strategy profiles in all games violates rationality but satisfies the remaining two axioms.
Note that rationality is so weak that even for one-player games, all three axioms are required for the characterization.
Examining the proof of the inclusion f ⊆ NE, one can see that it remains valid for any class of games that is closed under blowing-up, blowing-down, and taking convex combinations.
More precisely, it holds for any class of games 𝒢 with the following properties.
* If G is a blow-up of G', then G∈𝒢 if and only if G'∈𝒢.
* If G_1,…,G_k∈𝒢 are games on the same action profiles and λ∈ℝ_+^k with ∑_j λ_j = 1, then λ_1 G_1 + … + λ_k G_k∈𝒢.
Various well-known classes of games satisfy these properties, for example, (strategically) zero-sum games, graphical games, and potential games.
By contrast, symmetric games are not closed with respect to blow-ups.[A game is symmetric if all players have the same set of actions and permuting the actions in any action profile results in the same permutation of the players' payoffs.]
For example, cloning or permuting actions of only one player makes a symmetric game asymmetric.
More generally, it is unclear how to extend the current proof approach to symmetric games.[For example, consider the following symmetric two-player game.
        0     1/2   1/2
 0     3,3   2,2   2,2
 1/2   2,2   3,3   0,0
 1/2   2,2   0,0   3,3
(Row and column labels indicate the probabilities of the strategy profile in question.)
The indicated strategy profile is not a Nash equilibrium.
However, all symmetric games that are blow-ups of this game and convex combinations thereof have the same payoffs on the diagonal.
Hence, clones of the second and third action of either player are not dominated in any of these games, so that no contradictions to rationality occur.
This issue does not arise for symmetric zero-sum games <cit.>.
]
For our proof of the converse inclusion, NE ⊆ f, a class of games needs to have enough games with a unique equilibrium.
Suppressing technicalities, it is required that for every game G∈𝒢 and every equilibrium p of G, G can be written as a convex combination of games in 𝒢 that have p as the unique equilibrium.
We have not examined which classes of games, other than the class of all games, have this property.
One can “repair” any given solution concept by iteratively adding strategy profiles whenever there is a failure of consequentialism or consistency.
Equivalently, one can define the closure of a solution concept f as the smallest solution concept containing f and satisfying consequentialism and consistency.[This closure is well-defined since consequentialism and consistency are preserved under taking intersections of solution concepts.]
By <Ref>, the closure of a total refinement of Nash equilibrium is Nash equilibrium, and the closure of total non-refinements violates rationality.
The proof of <Ref> uses a lemma that is technical to state but illustrates how one can manipulate games using the above axioms.
It shows that nice solution concepts behave as one would hope under the analog of row and column operations familiar from linear algebra.
More precisely, it shows that if one adds a linear combination of some actions (with positive rational-valued coefficients) to another action, then a nice solution concept shifts probability from the former actions to the latter in proportion to the coefficients.
A linear combination of actions here means a linear combination of the corresponding payoffs for all players.
<Ref> below illustrates this.
A similar conclusion applies to adding new actions that are linear combinations of existing ones.
Let f be a nice solution concept, G be a game on A, and p∈ f(G).
Let â∈ U^N, k∈(ℤ_+^U ∖ {0})^N, and κ∈ℝ_+^N so that for all i∈ N, k_i(â_i) > 0 if â_i∈ A_i, supp(k_i)⊆ A_i, x_i:=κ_i k_i ≤ p_i, and x_i(â_i) = p_i(â_i).
Then, there is a game Ĝ on  with Â_i = A_i∪{â_i} so that the following holds.
* p̂∈ f(Ĝ), where p̂_i = p_i - x_i + |x_i|· e_â_i.[By e_â_i, we denote the standard unit vector in ℝ^U with a 1 in position â_i.]
* For all I⊆ N and a∈ A, Ĝ(â_I,a_-I) = G((k_i/|k_i|)_i∈ I, a_-I).
Condition <ref> states that p̂_i is obtained from p_i by shifting probability x_i(a_i) from a_i to â_i.
Condition <ref> ensures that playing â_i in Ĝ is payoff-equivalent to playing k_i/|k_i| in G.
<Ref> illustrates the proof of <Ref>.
For all i∈ N and a_i∈ A_i, let Â_i^a_i⊆ U so that |Â_i^a_i| = k_i(a_i) if a_i≠â_i, |Â_i^â_i| = max{k_i(â_i)-1,0}, and all Â_i^a_i are disjoint and disjoint from A_i^- := A_i ∖ {â_i}.
Let Ã_i = A_i ∪⋃_a_i∈ A_iÂ_i^a_i and ϕ_i : Ã_i→ A_i so that ϕ_i^-1(a_i) = {a_i}∪Â_i^a_i.
Let G̃ be a game on à = Ã_1 ×…×Ã_n so that G̃ is a blow-up of G with surjection ϕ = (ϕ_1,…,ϕ_n).
Consequentialism implies that p̃∈ϕ_*^-1(p)⊆ f(G̃), where p̃_i = p_i - x_i + |x_i| times the uniform distribution on Ã_i ∖ A_i^-.
For all i∈ N, let Σ_i ⊆Σ_Ã_i be the set of all permutations π_i of Ã_i so that π_i is the identity on A_i^-, and let Σ = Σ_1 ×…×Σ_n.
Let
G̅ = 1/|Σ|∑_π∈ΣG̃∘π.
Then all actions in Ã_i ∖ A_i^- are clones of each other in G̅.
Since p̃_i assigns the same probability to all actions in Ã_i ∖ A_i^- and f is equivariant as a consequence of satisfying consequentialism, p̃∈ f(G̃∘π) for all π∈Σ.
Consistency then implies that p̃∈ f(G̅).
Moreover, for all I⊆ N, a∈ A, and ã∈ (Ã_1 ∖ A_1^-) ×…× (Ã_n ∖ A_n^-),
G̅(ã_I,a_-I) = G((k_i/|k_i|)_i∈ I, a_-I),
since any action in Ã_i ∖ A_i^- is the convex combination of actions in A_i with coefficients x_i = κ_i k_i.
Lastly, one can delete all but one of the clones of any action in Ã_i ∖ A_i^-.
For all i∈ N, let ϕ̂_i : Ã_i→Â_i so that ϕ̂_i is the identity on A_i^- and ϕ̂_i^-1(â_i) = Ã_i ∖ A_i^-.
Let G̃ be a blow-up of Ĝ with surjection ϕ̂= (ϕ̂_1,…,ϕ̂_n).
Note that p̂ = ϕ̂_*(p̃).
Consequentialism thus gives p̂∈ f(Ĝ).
Moreover, for all I⊆ N and a∈ A,
Ĝ(â_I,a_-I) = G̅(â_I,a_-I) = G((k_i/|k_i|)_i∈ I, a_-I).
Starting with the assumption that f returns a non-equilibrium strategy profile in some game G, the proof of <Ref> has two steps, each of which uses <Ref>.
First, we construct from G a game G̅ where a non-equilibrium profile is returned and every player plays some distinguished action with probability close to 1 with the distinguished action of one player, say j, not being a best response.
The second step is to replace every other action of every player other than j by a convex combination of itself and the distinguished action (with sufficiently large weight on the latter).
This results in a game Ĝ in which j's distinguished action is dominated, violating rationality.
With a third application of <Ref> (replacing every action of j by a convex combination of itself and j's distinguished action), one could construct a game where j plays dominated actions with probability equal to 1.
We omit this step since it is not required for our notion of rationality.
We prove that f ⊆ NE.
The proof of NE ⊆ f is given in <Ref>.
Let G be a game on A and p ∈ f(G).
Assume that G is normalized and A_1 = … = A_n = B.
The former is for convenience with obvious adjustments for non-normalized G and the latter is without loss of generality since f satisfies consequentialism.
Assume for contradiction that p∉NE(G).
Then, there is a player j∈ N for whom p_j is not a best response.
That is, G_j(a_j^*,p_-j) - G_j(p) > ε > 0 for some a_j^*∈ A_j.
Let a̅∈ (U ∖ A_1) ×…× (U ∖ A_n), k∈(ℤ_+^U ∖ {0})^N, and κ∈ℝ_+^N so that for all i∈ N, x_i:=κ_i k_i ≤ p_i and |x_i| ≥ 1 - ε^2/4n.
Note that then ‖p_i - k_i/|k_i|‖ ≤ ε^2/4n.
By <Ref>, there is a game G̅ on A̅ with A̅_i = A_i∪{a̅_i} so that
* p̅∈ f(G̅), where p̅_i = p_i - x_i + |x_i|· e_a̅_i, and
* for all I⊆ N and a∈ A, G̅(a̅_I,a_-I) = G((k_i/|k_i|)_i∈ I, a_-I).
In particular, p̅_j(a̅_j) ≥ 1 - ε^2/4n and, since G is normalized,
G̅_j(a_j^*,a̅_-j) - G̅_j(a̅_j,a̅_-j) > ε/2.
The second step is to modify G̅ so that a_j^* dominates a̅_j by replacing every action of every player i≠ j by a convex combination of itself and a̅_i with a sufficiently high weight on a̅_i.
For every b∈ B, let k^b∈(ℤ_+^U ∖ {0})^N and κ^b∈ℝ_+^N so that for all i∈ N ∖ {j}, k_i^b = e_b + (2n/ε)· e_a̅_i and κ_i^b k_i^b(b) = p̅_i(b), and k_j^b = e_b and κ_j^b = p̅_j(b) (this choice for j is to make the application of <Ref> trivial for player j).
Note that for i≠ j and x_i^b := κ_i^b k_i^b,
∑_b∈ A_i x_i^b(a̅_i) = (2n/ε)∑_b∈ A_i x_i^b(b) = (2n/ε)∑_b∈ A_i p̅_i(b) ≤ (2n/ε)·ε^2/4n = ε/2.
Sequential application of <Ref> to G̅, one for each b∈ B with â = (b,…,b), gives a game Ĝ on A̅ so that
* p̂∈ f(Ĝ), where p̂_i = p̅_i - (2n/ε)(1-|x_i|)· e_a̅_i + (2n/ε)∑_b∈ A_i (p_i(b) - x_i(b))· e_b for i≠ j and p̂_j = p̅_j, and
* for all I⊆ N ∖ {j} and a∈ A, Ĝ(a_j,a_I,a̅_-(I∪{j})) = G̅(a_j,(k_i^a_i/|k_i^a_i|)_i∈ I,a̅_-(I∪{j})).
The last part of the first statement implies that p̂_j assigns probability close to 1 to a̅_j, more precisely, p̂_j(a̅_j) = p̅_j(a̅_j) ≥ 1 - ε.
The second statement means that player i≠ j playing a_i in Ĝ is payoff equivalent to playing a_i with probability 1/(1 + 2n/ε) and a̅_i with probability (2n/ε)/(1 + 2n/ε) in G̅.
Thus, using again that G is normalized, we have for all a∈A̅,
|Ĝ_j(a_j,a_-j) - G̅_j(a_j,a̅_-j)| ≤ n · 1/(1 + 2n/ε) < ε/2.
It follows that for all a∈A̅,
Ĝ_j(a_j^*,a_-j) - Ĝ_j(a̅_j,a_-j) > 0.
That is, j plays the dominated action a̅_j with probability at least 1 - ε in Ĝ.
This contradicts rationality.
§ ROBUSTNESS OF THE CHARACTERIZATION
The goal of this section is to examine whether the characterization of the preceding section is robust with respect to small violations of the axioms.
Observe that robustness is not a foregone conclusion.
The fact that the three sets of solution concepts defined by our three axioms intersect only in one point, , does not imply that the intersection of slight thickenings of these sets only contains points close to .
We start by defining approximate equilibria and approximate versions of the axioms.
The standard notion of an approximate equilibrium is ε-equilibrium.
A strategy profile is an ε-equilibrium if no player can deviate to a strategy that increases her payoff by more than ε.
By NE_ε we denote the solution concept returning all ε-equilibria in all games.
Let G be a game on A = A_1 ×…× A_n.
A profile p∈□ A is an ε-equilibrium of G if
G_i(p_i,p_-i) ≥ G_i(q_i,p_-i) - ε for all q_i ∈Δ A_i and i∈ N.
Likewise, there are natural approximate notions of consequentialism, consistency, and rationality.
They are obtained from the exact versions defined in the preceding section by allowing for small perturbations of strategy profiles.
Recall that consequentialism requires that if G is a blow-up of G', then a profile is returned in G if and only if its pushforward is returned in G'.
Approximate consequentialism weakens this condition by requiring only that the set of returned profiles for G is close (in Hausdorff distance) to the set of profiles whose pushforward is returned in G'.
A solution concept f satisfies δ-consequentialism if for all games G and G' such that G is a blow-up of G' with surjection ϕ = (ϕ_1,…,ϕ_n), f(G) = ϕ_*^-1(ϕ_*(f(G))), and
f(G)⊆ B_δ(ϕ_*^-1(f(G'))) and ϕ_*^-1(f(G')) ⊆ B_δ(f(G)).
The first assertion requires that probability can be distributed arbitrarily over clones.
The first set inclusion asserts that in the game obtained by cloning actions, the solution concept can only return profiles that differ by no more than δ from some profile obtained as a blow-up of a profile that is returned in the original game.
Conversely, the second inclusion requires that every blow-up of a profile returned in the original game differs by at most δ from some profile returned in the blown-up game.
Approximate consistency weakens its exact counterpart by requiring only that if a profile is returned in several games, then some profile close to it has to be returned in any convex combination of these games.
A solution concept f satisfies δ-consistency if for all games G^1,…,G^k on the same action profiles and every λ∈ℝ_+^k with ∑_jλ_j = 1,
f(G^1)∩…∩ f(G^k)⊆ B_δ(f(λ_1 G^1 + … + λ_kG^k)).
Unlike for the exact notion, δ-consistency as defined is not equivalent to its restriction to k = 2.[A proof attempt by induction fails for two reasons.
First, every application of δ-consistency introduces an additive error of δ, so that k-1 applications to two games only gives an error bound of (k-1)δ on the right hand side.
Second, even if f(G_1), f(G_2), and f(G_3) have a non-empty common intersection, f(λ_1/λ_1 + λ_2 G_1 + λ_2/λ_1 + λ_2 G_2) need not intersect with f(G_3), making a further application of δ-consistency useless.]
Lastly, approximate rationality asserts that actions that are dominated by a non-negligible amount are not played too frequently.
If G is a game on A = A_1 ×…× A_n and a_i,a_i'∈ A_i are actions of player i, we say that a_i δ-dominates a_i' if G_i(a_i,a_-i) ≥ G_i(a_i',a_-i) + δ for all a_-i∈ A_-i.
A solution concept f satisfies δ-rationality if for all games G,
f(G) ⊆ B_1/2(Â_1^δ) ×…× B_1/2(Â_n^δ),
where Â_i^δ denotes the set of actions of player i∈ N that are not δ-dominated in G.
Each of the δ-axioms coincides with the exact version defined in the previous section when δ = 0.
We call a solution concept δ-nice if it satisfies δ-consequentialism, δ-consistency, and δ-rationality.
It is routine to check that NE_ε is δ-nice for small enough δ.
Conversely, we show that for small enough δ, every δ-nice solution concept is a refinement of NE_ε.
The statement is restricted to normalized games and requires equivariance for reasons that we discuss in Remarks <ref> and <ref>.
Consider solution concepts on the set of normalized games.
Then, for every ε > 0, there is δ>0 so that if f is equivariant and satisfies δ-consequentialism, δ-consistency, and δ-rationality, then f is a refinement of NE_ε.
Note that δ in <Ref> does not depend on f.
Otherwise the statement would follow from the fact that exact consequentialism, consistency, and rationality characterize Nash equilibrium.
The proof is similar to that of <Ref>.
However, apart from the need to keep track of error terms arising from applications of the axioms, some steps require additional care.
The proof appears in <Ref>.
The comments on subclasses of games in <Ref> remain valid for <Ref>.
<Ref> is restricted to normalized games since ε-equilibrium becomes too stringent without a bound on the payoffs.
More precisely, for any game G, the set of ε-equilibria of cG shrinks to the set of exact equilibria of G as c goes to positive infinity.
To see that the restriction to normalized games is indeed necessary, consider the following solution concept f for two-player games.
Let Ĝ be the game in which the first player has only one action and the second player has two actions with payoff vectors (0,0) and (0,-c), where c is a large positive number (c≫1/δ), and let p̂ = (1,(1-δ,δ)).
Observe that p̂ is not an ε-equilibrium for Ĝ.
Define f(G) = NE(G) ∪{p̂} if G = Ĝ and f(G) = NE(G) otherwise.
It is not hard to see that f is δ-nice.[δ-consequentialism and δ-rationality follow from the fact that NE is nice and the definition of the axioms.
To verify δ-consistency, it suffices to consider convex combinations of games with equilibrium p̂ involving Ĝ.
The only games on the same action sets as Ĝ with equilibrium p̂ are those where both actions of the second player give her the same payoff.
Any convex combination of such games with Ĝ has (1,(1,0)) as the unique equilibrium, which conforms with δ-consistency.]
Applying δ-consequentialism to two games that are the same up to permuting actions, one can see that it implies δ-equivariance defined analogously to δ-consequentialism.
However, our proof requires exact equivariance.
Whether <Ref> holds without this assumption is open.
In contrast to <Ref>, a characterization of NE_ε as the only δ-nice solution for some δ is not possible.
If NE_ε is δ-nice for some δ, then so is NE_ε' for all ε' ≤ ε.
A weaker converse to <Ref> would require that for every δ > 0, there is ε > 0 so that if f is δ-nice, then f = NE_ε' for some ε' ≤ ε.
While this statement is obviously false in general since δ-niceness is vacuous for large δ, we do not know if it holds for small enough δ.
<Ref> fails if ε-equilibrium is replaced by some alternative notions of approximate equilibrium.
For example, <Ref> does not hold when replacing NE_ε by B_ε(NE), the set of profiles that are ε-close to some Nash equilibrium.
This is because NE_ε' is δ-nice for small enough ε'; however, NE_ε' ⊈ B_ε(NE) for any ε' > 0.
In words, no matter how small ε' is, there exist ε'-equilibria that are more than ε away from every Nash equilibrium.
§ ACKNOWLEDGMENTS
Florian Brandl acknowledges support by the DFG under the Excellence Strategy EXC-2047. Felix Brandt acknowledges support by the DFG under grants BR 2312/11-2 and BR 2312/12-1.
The authors thank Francesc Dilmé, Benny Moldovanu, and Lucas Pahl for helpful feedback.
A preliminary version of this paper was presented at the Interdisciplinary CIREQ-Workshop at Université de Montréal (Montréal, March 2023) and the Microeconomic Theory Workshop at the University of Bonn (Bonn, May 2023).
31
urlstyle
[Arrow and Hurwicz(1972)]ArHu72a
K. J. Arrow and L. Hurwicz.
An optimality criterion of decision-making under ignorance.
In C. F. Carter and J. L. Ford, editors, Uncertainty and
expectations in economics: essays in honour of G.L.S. Shackle, pages 1–11.
Basil Blackwell, 1972.
[Aumann(1959)]Auma59a
R. J. Aumann.
Acceptable points in general cooperative n-person games.
In A. W. Tucker and R. D. Luce, editors, Contributions to the
Theory of Games IV, volume 40 of Annals of Mathematics Studies, pages
287–324. Princeton University Press, 1959.
[Aumann and Brandenburger(1995)]AuBr95a
R. J. Aumann and A. Brandenburger.
Epistemic conditions for Nash equilibrium.
Econometrica, 63(5):1161–1180, 1995.
[Bach and Tsakas(2014)]BaTs14a
C. W. Bach and E. Tsakas.
Pairwise epistemic conditions for Nash equilibrium.
Games and Economic Behavior, 85:48–59, 2014.
[Barelli(2009)]Bare09a
P. Barelli.
Consistency of beliefs and epistemic conditions for Nash and correlated equilibria.
Games and Economic Behavior, 67(2):363–375, 2009.
[Bernheim et al.(1987)Bernheim, Peleg, and Whinston]BPW87a
B. D. Bernheim, B. Peleg, and M. D. Whinston.
Coalition-proof Nash equilibria I. Concepts.
Journal of Economic Theory, 42(1):1–12, 1987.
[Brandl and Brandt(2019)]BrBr17c
F. Brandl and F. Brandt.
Justifying optimal play via consistency.
Theoretical Economics, 14(4):1185–1201, 2019.
[Brandl et al.(2016)Brandl, Brandt, and Seedig]Bran13a
F. Brandl, F. Brandt, and H. G. Seedig.
Consistent probabilistic social choice.
Econometrica, 84(5):1839–1880, 2016.
[Chernoff(1954)]Cher54a
H. Chernoff.
Rational selection of decision functions.
Econometrica, 22(4):422–443, 1954.
[Cui et al.(2014)Cui, Li, and Ng]CLN14a
L.-B. Cui, W. Li, and M. K. Ng.
Birkhoff–von Neumann theorem for multistochastic tensors.
SIAM Journal of Matrix Analysis and Applications, 35(3), 2014.
[Gilboa and Schmeidler(2003)]GiSc03a
I. Gilboa and D. Schmeidler.
A derivation of expected utility maximization in the context of a
game.
Games and Economic Behavior, 44(1):172–182, 2003.
[Gintis(2009)]Gint09a
H. Gintis.
The Bounds of Reason: Game Theory and the Unification of the
Behavioral Sciences.
Princeton University Press, 2009.
[Harsanyi(1967)]Hars67a
J. C. Harsanyi.
Games with incomplete information played by “Bayesian” players,
part I.
Management Science, 50(12):1804–1817, 1967.
[Harsanyi(1973)]Hars73a
J. C. Harsanyi.
Oddness of the number of equilibrium points: A new proof.
International Journal of Game Theory, 2(1):235–250, 1973.
[Hellman(2013)]Hell13a
Z. Hellman.
Weakly rational expectations.
Journal of Mathematical Economics, 49(6):496–500, 2013.
[Kohlberg and Mertens(1986)]KoMe86a
E. Kohlberg and J.-F. Mertens.
On the strategic stability of equilibria.
Econometrica, 54:1003–1037, 1986.
[Lackner and Skowron(2021)]LaSk21a
M. Lackner and P. Skowron.
Consistent approval-based multi-winner rules.
Journal of Economic Theory, 192:105173, 2021.
[Maskin(1979)]Mask79a
E. Maskin.
Decision-making under ignorance with implications for social choice.
Theory and Decision, 11(3):319–337, 1979.
[Milnor(1954)]Miln54a
J. Milnor.
Games against nature.
In Decision Processes, chapter 4, pages 49–59. Wiley, 1954.
[Myerson(1995)]Myer95b
R. B. Myerson.
Axiomatic derivation of scoring rules without the ordering
assumption.
Social Choice and Welfare, 12(1):59–74, 1995.
[Nash(1951)]Nash51a
J. F. Nash.
Non-cooperative games.
Annals of Mathematics, 54(2):286–295, 1951.
[Norde(1999)]Nord99a
H. Norde.
Bimatrix games have quasi-strict equilibria.
Mathematical Programming, 85:35–49, 1999.
[Norde et al.(1996)Norde, Potters, Reijnierse, and Vermeulen]NPRV96a
H. Norde, J. Potters, H. Reijnierse, and D. Vermeulen.
Equilibrium selection and consistency.
Games and Economic Behavior, 12(2):219–225, 1996.
[Peleg and Tijs(1996)]PeTi96a
B. Peleg and S. Tijs.
The consistency principle for games in strategic form.
International Journal of Game Theory, 25(1):13–34, 1996.
[Schrijver(1998)]Schr98a
A. Schrijver.
Theory of Linear and Integer Programming.
Wiley, 1998.
[Selten(1975)]Selt75a
R. Selten.
Reexamination of the perfectness concept for equilibrium points in
extensive games.
International Journal of Game Theory, 4(1):25–55, 1975.
[Shapley(1953)]Shap53c
L. S. Shapley.
A value for n-person games.
Annals of Math Studies, 28:307–317, 1953.
[Smith(1973)]Smit73a
J. H. Smith.
Aggregation of preferences with variable electorate.
Econometrica, 41(6):1027–1041, 1973.
[Tideman(1987)]Tide87a
T. N. Tideman.
Independence of clones as a criterion for voting rules.
Social Choice and Welfare, 4(3):185–206, 1987.
[Young and Levenglick(1978)]YoLe78a
H. P. Young and A. Levenglick.
A consistent extension of Condorcet's election principle.
SIAM Journal on Applied Mathematics, 35(2):285–300, 1978.
[Zavist and Tideman(1989)]ZaTi89a
T. M. Zavist and T. N. Tideman.
Complete independence of clones in the ranked pairs rule.
Social Choice and Welfare, 6(2):167–173, 1989.
§ APPENDIX
§ OMITTED PROOF FROM <REF>
We prove the missing direction of <Ref>, that is, NE ⊆ f.
The main idea of the proof is simple: for every game G and every equilibrium p of G, show that G can be written as a convex combination of games for which p is the unique equilibrium.
Since f ⊆ NE and f is total, we know that f has to return unique equilibria.
Consistency thus gives p∈ f(G).
The work lies in finding a suitable representation as a convex combination.
A first observation is that it suffices to prove that ⊆ f holds for games where the payoff functions of all players but one are 0.
More formally, we say that G is a player i payoff game if for all j≠ i, G_j≡ 0.
Then, the following holds.
Let G be a game and p∈NE(G).
For i∈ N, let G^i be the game with G^i_i = G_i and G^i_j ≡ 0 for all j≠ i.
Then, p∈NE(G^i).
First, p_i is a best response to p_-i in G^i since it is a best response in G and G^i_i = G_i.
Second, for all j≠ i, p_j (and any other strategy for that matter) is a best response to p_-j in G^i since G^i_j ≡ 0.
Hence, p∈NE(G^i).
So if we can show that for every player i payoff game G, NE(G)⊆ f(G), we can use <Ref> and consistency to conclude that the same conclusion holds for all games.
While this reduction is convenient, it is not as powerful as it may seem since player i payoff games have a unique equilibrium only if all players other than i have only a single action.
Thus, even when decomposing player i payoff games into games with a unique equilibrium, one needs to consider games with non-zero payoffs for all players.
§.§ Reduction to Deterministic Slice-Stochastic Tensors
The next step is a further reduction showing that it is sufficient to consider the case when G_i is a slice-stochastic tensor.
To motivate this notion, recall that the well-known Birkhoff-von Neumann theorem states that every bistochastic matrix can be written as a convex combination of permutation matrices.[A matrix M∈ℝ_+^m× m is bistochastic if the rows sums and column sums are 1.]
There are different ways one might try to generalize this statement to higher-order tensors.
For example, one might say that a tensor T : A_1 ×…× A_n→ℝ_+ is n-stochastic if for all i∈ N and a_-i∈ A_-i, ∑_a_i∈ A_i T(a_i,a_-i) = 1 (which is to say that every “tube” of T sums to 1).
However, with this definition, for n ≥ 3, it is not true that every n-stochastic tensor can be written as a convex combination of n-stochastic tensors taking values in {0,1} <cit.>.
We thus opt for a different generalization of bistochastic matrices.
Let A∈ℱ(U)^n with |A_1| = … = |A_n|.
A tensor T A→ℝ is slice-stochastic for i∈ N if
* for all a_-i∈ A_-i, ∑_a_i∈ A_i T(a_i,a_-i) = 1,
* for all a_i∈ A_i, ∑_a_-i∈ A_-i T(a_i,a_-i) = m^n-2, and
* for all a∈ A, 0 ≤ T(a) ≤ 1.
We say that T is a deterministic slice-stochastic tensor if it is slice-stochastic and takes values in {0,1}.
For n = 2, T is a bistochastic matrix if and only if it is slice-stochastic for some i = 1,2.
Note that if T is slice-stochastic for i, then
∑_a∈ A T(a) = ∑_a_-i∈ A_-i 1 = ∑_a_i∈ A_i m^n-2 = m^n-1.
We omit writing “for i” when i is clear from the context.
It turns out that the Birkhoff-von Neumann theorem does extend to slice-stochastic tensors of any order.
That is, every slice-stochastic tensor is a convex combination of deterministic slice-stochastic tensors.
This will allow us to reduce the problem ⊆ f to payoff functions of the latter type.
Let A∈ℱ(U)^n with |A_1| = … = |A_n|.
Let T A→ℝ be a slice-stochastic tensor for i∈ N.
Then, there are tensors T^1,…,T^K A→{0,1} that are slice-stochastic for i and (λ^1,…,λ^K)∈Δ([K]) so that
T = ∑_k∈[K]λ^k T^k.
Viewing T as an element of ℝ^A, T is slice-stochastic for i if it is a solution to the linear feasibility program
Mx≤ v,
where the matrix M and the vector v are given by the constraints of type <ref>, <ref>, and <ref>.
Thus, M has 2|A_-i| + 2|A_i| + 2|A| = 2(m^n-1 + m + m^n) rows (2 for each constraint of each of the three types) and |A| = m^n columns; the number of rows of v is the same as for M.
Note that M is of the form M = (M̃^⊺,-M̃^⊺)^⊺ for some matrix M̃, since each constraint gives two rows in M where one is the negative of the other.
(For n = 3 and m = 2, the matrix M̃ is depicted in <Ref>.)
We want to show that the polytope defined by Mx≤ v has integral vertices.
Since v is integral (in fact, {-1,0,1}), it suffices to show that M is totally unimodular.[M is totally unimodular if every square submatrix of M has determinant -1, 0, or 1.]
But M is totally unimodular if and only if M̃ is totally unimodular.
Now M̃ is totally unimodular if for every subset R of rows of M̃, there is an assignment σ R→{-1,1} of signs to the rows in R so that for all a∈ A,
∑_r∈ Rσ(r) M̃_r,a∈{-1,0,1}.
A proof of this result appears for example in the book by <cit.>.
This condition is easy to check in the present case.
Let R be a subset of rows of M̃.
It is easy to see that we may assume that R does not contain rows corresponding to constraints of type <ref> since those rows only contain a single 1 or -1 and can thus always be signed so as to not introduce violations of (<ref>).
We then define σ as follows.
* For each r∈ R corresponding to a constraint of type <ref> for a_-i∈ A_-i, let σ(r) = 1.
* For each r∈ R corresponding to a constraint of type <ref> for a_i∈ A_i, let σ(r) = -1.
Then, for each column index a∈ A, there is a most one 1 and at most one -1 in the sum in (<ref>), which concludes the proof.
To show that it is sufficient to consider games where every player's payoff function is slice-stochastic, we examine how certain changes to the payoff functions influence the set of equilibria.
For α∈ℝ_++^N and β∈ℝ^N, we write α G + β for the game with payoff function (α G + β)_i = α_i G_i + β_i for all i∈ N.
Moreover, we say that T A→ℝ is constant for i∈ N if for all a_-i∈ A_-i, T(·,a_-i) is constant.
Then, a game G is constant if for all i∈ N, G_i is constant for i.
Let G be a game on A.
Then,
* for all α∈ℝ_++^N and β∈ℝ^N, NE(G) = NE(α G + β), and
* for all constant games G̃ on A, NE(G) = NE(G + G̃).
The statement follows from a straightforward calculation.
For n = 2, G is constant if every column for G_1 and every row of G_2 is constant.
Hence, <Ref><ref> asserts that adding a constant to some column of G_1 or row of G_2 does not change the set of equilibria.
The second type of modification of payoff functions concerns multiplication by tensors.
The following notation will be convenient.
Let A∈ℱ(U)^n and T,T' A→ℝ.
Then, the Hadamard product T T' A→ℝ of T and T' is defined by
(T T')(a) = T(a)T'(a)
for all a∈ A.
Let G be a game on A.
Let i∈ N, q∈ℝ_++^A_i, and T_q A→ℝ so that for all a∈ A, T_q(a) = q(a_i).
Then,
p∈NE(G) if and only if (p̃_i,p_-i)∈NE(G^i,T_q),
where G^i,T_q_i = G_i, for all j≠ i, G^i,T_q_j = G_j T_q, and for all a_i∈ A_i, p̃_i(a_i) = (p_i(a_i)/q(a_i))/(∑_a_i'∈ A_ip_i(a_i')/q(a_i')).
The statement follows from a straightforward calculation.
Note that T_q in <Ref> is constant for all j≠ i.
For n = 2, the game G^1,T_q is obtained from G by multiplying the a_1 row of the matrix G_2 by q(a_1) for all a_1∈ A_1.
The next lemma shows that in some sense, it is enough to consider games where the payoff function of every player is slice-stochastic.
This is explained in more detail after the lemma.
Let A∈ℱ(U)^n with |A_1| = … = |A_n|.
Let p∈□ A so that all p_i have full support.
Then, there are T_1,…,T_n : A→ℝ_++ so that T_i is constant for all j≠ i and for all games G on A and p∈NE(G), there is a game G̅ on A so that the following hold.
* For all i∈ N, G̅_i = G_i T^-i + S_i, where T^-i is the Hadamard product of all T_j with j≠ i, and S_i A→ℝ is constant for i.
* For all i∈ N, G̅_i is slice-stochastic for i.
* p̅∈NE(G̅), where p̅∈□ A and p̅_i is the uniform distribution on A_i.
For all i∈ N, let T_i : A→ℝ_++ so that for all a∈ A, T_i(a) = ε p_i(a_i), where ε > 0 is small compared to (the reciprocal of) the largest payoff in G and |A| = m^n.
Define a game Ĝ on A so that for all i∈ N, Ĝ_i = G_i T^-i.
It follows from <Ref> that p̅∈(Ĝ).
For all i∈ N, let S_i A→ℝ so that for all a∈ A,
S_i(a) = 1/m(1 - ∑_a_i'∈ A_iĜ_i(a_i',a_-i)).
Note that S_i is constant for i.
Let G̅ be the game so that for all i∈ N, G̅_i = Ĝ_i + S_i.
By <Ref><ref>, p̅∈NE(G̅) and so for all i∈ N, ∑_a_-i∈ A_-iG̅_i(·,a_-i)
is constant on A_i.
Moreover, for all a_-i∈ A_-i,
∑_a_i∈ A_iG̅_i(a_i,a_-i) = 1
by the choice of S_i.
Lastly, since ε is small, Ĝ_i(a)≈ 0 and S_i(a)≈ 1/m for all a∈ A.
Hence, G̅_i is slice-stochastic.
We have thus shown that G̅ satisfies all three conditions in the statement of the lemma.
The condition <ref> in <Ref> is redundant since the strategy profile where every player randomizes uniformly is automatically an equilibrium if the payoff function of every player is slice-stochastic (by <Ref> in the definition of slice-stochasticity).
We chose to state it to make it explicit.
The way in which we will use <Ref> is as follows.
Given a game G with the same number of actions for every player and a full support equilibrium p∈NE(G), we want to show that p∈ f(G) if f is a total solution concept satisfying consequentialism, consistency, and rationality.
To do this, we show that the game G̅ obtained from the lemma (by virtue of having slice-stochastic tensors as payoff functions) can essentially be written as a convex combination of games for which p̅ is the unique equilibrium.
Applying the inverse transformation of that in <ref> to each of the summands and using <Ref> shows that G can essentially be written as a convex combination of games for which p is the unique equilibrium.
Using consistency and the fact that f⊆, this shows that p∈ f(G).
The caveat “essentially” refers to the fact that one may need to multiply the G_i by positive scalars, add constant games to G, and clone actions to get the desired representation as a convex combination.
Since neither of the first two operations changes the set of equilibria and the effect of introducing clones is controlled by consequentialism, this does not cause problems.
Similarly, the restriction that every player has the same number of actions in G is without loss of generality by consequentialism.
Together, <Ref>, <Ref>, and <Ref> show the following.
If we want to show that p∈ f(G) whenever p∈NE(G) and p has full support, then it is enough to do so in the case when G is a player i payoff game with G_i deterministic slice-stochastic.
The full support assumption will be successively eliminated via <Ref>, <Ref>, and <Ref>.
§.§ Decomposition of Deterministic Slice-Stochastic Tensors
The first step is to construct a sufficiently rich class of games that have the strategy profile where every player randomizes uniformly as their unique equilibrium.
This will be the class of cyclic games and almost cyclic games.
In cyclic games, the payoff of every player only depends on the action of one other player and the dependencies form a cycle.
Roughly, a player gets payoff 1 if she matches the action of the preceding player and 0 otherwise.
The fact that every such game has uniform randomization as an equilibrium is not hard to see.
Uniqueness is achieved by imposing a restriction on the notion of “matching” (see <Ref><ref>).
Almost cyclic games differ only insofar as there is one exceptional player whose payoff not only depends on the action of the preceding player in the cycle but (for a few action profiles) also on the actions of all other players.
Making the concepts above precise requires two definitions.
Let A be a set and π∈Σ_A.
Then, B⊆ A is a fixed subset of π if π(B) = B.
We say that π has no non-trivial fixed subset if its only fixed subsets are ∅ and A.
For any two permutations π,π', π'∘π has a non-trivial fixed subset if and only if π∘π' does.
At least when A is finite, permutations without a non-trivial fixed subset always exist.
Any cyclic permutation is an example.
Also, for any permutation π, there is a permutation π' so that π'∘π has no non-trivial fixed subset.
Let A_1,…,A_n∈ℱ(U) with |A_1| = … = |A_n| and let A = A_1 ×…× A_n.
A set A^*⊆ A is a permutation set if for all i∈ N and a_i'∈ A_i, there is exactly one a∈ A^* with a_i = a_i'.
Another way of saying that A^* is a permutation set is that the projection from A^* onto each A_i is bijective.
If A_1 = … = A_n = [m], yet another way is requiring that there are permutations π_1,…,π_n-1 of [m] so that A^* = {(k,π_1(k),…,π_n-1∘…∘π_1(k)) k∈[m]}.
This characterization motivates the terminology.
Let A = [m]^n and G be a game on A.
We say that G is cyclic if there are π_1,…,π_n∈Σ_[m] and α∈ℝ_++^N so that the following holds.
* π_n∘…∘π_1 has no non-trivial fixed subset,
* for all j∈ N and a∈ A,
G_j(a) =
α_j if a_j = π_j-1(a_j-1),
0 otherwise.
For n ≥ 3, we say that G is almost cyclic if there are i∈ N, π_1,…,π_n∈Σ_[m], A^*⊆ A, and α∈ℝ_++^N so that the following holds.
* π_n∘…∘π_1 has no non-trivial fixed subset,
* for all j≠ i and a∈ A,
G_j(a) =
α_j if a_j = π_j-1(a_j-1),
0 otherwise, and
G_i(a) =
α_i if a_i = π_i-1(a_i-1) and a∉A^*
0 otherwise,
* A^* is a permutation set and for all a∈ A^*, a_i = π_i-1(a_i-1) and there is j≠ i,i+1 with a_j ≠π_j-1(a_j-1).
Unless otherwise noted, we assume that α = (1,…,1).
[(Almost) cyclic games for n = 2,3]
Let n = 2 and consider the following game G where both players have three actions.
The first (second) entry in each cell denotes the payoff of player 1 (player 2).
( 1,0   0,1   0,0 )
( 0,0   1,0   0,1 )
( 0,1   0,0   1,0 )
Then, G is a cyclic game with π_1 = (123) and π_2 = (1)(2)(3).
Almost cyclic games for two players are degenerate in the sense that the payoff function of the exceptional player is 0.
Now let m = n = 3, i = 1, π_1 = (123), and π_2 = π_3 = (1)(2)(3).
Clearly, π_3∘π_2∘π_1 has no non-trivial fixed subset.
One can check that the permutation set A^* = {(1,2,1),(2,3,2),(3,1,3)} satisfies <ref>.
The payoff function of player 1 is shown below (player 1 chooses the matrix, player 2 the row, and player 3 the column; the entries corresponding to A^* are marked with an asterisk).
( 1   0   0 )      ( 0   1   0 )      ( 0   0   0* )
( 0*  0   0 )      ( 0   1   0 )      ( 0   0   1  )
( 1   0   0 )      ( 0   0*  0 )      ( 0   0   1  )
Let G be a cyclic game or an almost cyclic game.
Then, G has a unique Nash equilibrium where every player randomizes uniformly over all of her actions.
We prove the statement for almost cyclic games.
The proof for cyclic games is easier.
More specifically, for cyclic games, <Ref> and <Ref> below can be combined into one that is proved in the same way as <Ref>.
Let n ≥ 3.
Let i∈ N, π_1,…,π_n∈Σ_[m], A^*, and α be as in the definition of almost cyclic games.
By <Ref>, it is without loss of generality to assume that α = (1,…,1).
Let p be the strategy profile where p_j is the uniform distribution on A_j for all j∈ N.
It is easy to see that p is an equilibrium since for all a_i∈ A_i,
∑_a_-i∈ A_-i G_i(a_i,a_-i) = m^n-2 - 1,
where the -1 comes from the fact that A^* is a permutation set.
Moreover, for all j≠ i and a_j∈ A_j,
∑_a_-j∈ A_-j G_j(a_j,a_-j) = m^n-2.
Now we show that p is the unique equilibrium.
Assume that p' is an equilibrium.
For all j∈ N, let B_j = argmax_a_j∈ A_j p_j'(a_j).
Note that for all j≠ i, the payoff of j only depends on her own strategy and the strategy of j-1.
Hence, a_j∈ A_j is a best response to p'_-j if and only if a_j ∈π_j-1(B_j-1).
So B_j⊆supp(p_j')⊆π_j-1(B_j-1).
In particular,
|supp(p_i')| ≥ |supp(p_i+1')| ≥…≥ |supp(p_i-1')|.
We distinguish two cases.
Suppose that |supp(p_j')| ≥ 2 for some j≠ i.
By (<ref>), |supp(p'_i+1)| ≥ 2.
Since n≥ 3 and A^* is a permutation set, it follows that G_i(a_i,p'_-i) > 0 for all a_i∈π_i-1(supp(p_i-1')).
Hence, supp(p_i')⊆π_i-1(supp(p_i-1')), and so |supp(p_i')| ≤ |supp(p_i-1')|.
It follows that all inequalities in (<ref>) hold with equality.
But then for all j∈ N, supp(p'_j) = π_j-1(supp(p'_j-1)), and so supp(p'_1) is a fixed subset of π_n∘…∘π_1.
By assumption, this is only possible if A_1 = supp(p'_1), which in turn implies that supp(p'_j) = A_j for all j∈ N.
Hence, p' = p.
Suppose that |supp(p_j')| = 1 for all j≠ i and let a'_j∈ A_j so that p'_j(a'_j) = 1.
Note that a'_j = π_j-1(a'_j-1) for all j≠ i,i+1.
Moreover, we have that G_i(π_i-1(a'_i-1),a'_-i) = 1 unless (π_i-1(a'_i-1),a'_-i) ∈ A^*.
But by the second part of <ref> in the definition of almost cyclic games, (π_i-1(a'_i-1),a'_-i) ∉A^*.
Hence, G_i(π_i-1(a'_i-1),a'_-i) = 1.
By <ref> and the fact that |supp(p'_i-1)| = 1, G_i(a_i,p'_-i) = 0 unless a_i = a_i' = π_i-1(a'_i-1).
Thus, a'_i is the unique best response of player i to p'_-i.
Since p' is an equilibrium, it follows that p'_i(a'_i) = 1 and |supp(p'_i)| = 1.
Finally, since p'_i+1 is a best response to p'_i, we have that a'_i+1 = π_i(a'_i).
So a'_j = π_j-1(a'_j-1) for all j∈ N, which means that {a'_1} is a fixed subset of π_n∘…∘π_1 and contradicts <ref>.
Let A_1,…,A_n∈ℱ(U) with |A_1| = … = |A_n| and let A = A_1 ×…× A_n.
A function T : A→ℝ is a permutation tensor if there is a permutation set A^*⊆ A so that T(a) = 1 for all a∈ A^* and T(a) = 0 for all a∈ A ∖ A^*.
A game G is a permutation game if there is i∈ N so that G is a player i payoff game and G_i is a permutation tensor.
For n = 2, a permutation tensor is a permutation matrix.
We show that every permutation game can be written as a convex combination of cyclic games and almost cyclic games up to an additive constant.
Note that even though permutation games are player i payoff games, the games in the decomposition are not.
Let A_1,…,A_n∈ℱ(U) so that |A_1| = … = |A_n| = m, and let A = A_1 ×…× A_n.
Let i∈ N and A^*⊆ A be a permutation set.
Let G be the permutation game for i and A^* on A.
Then, G can be written as a convex combination of cyclic games, almost cyclic games, and a constant β∈ℝ^N.
Let G be a game as in the statement of the lemma.
For simplicity, assume that i = n and for all j∈ N, A_j = [m].
Let π_1,…,π_n∈Σ_[m] so that A^* = {(k,π_1(k),…,π_n-1∘…∘π_1(k))∈[m]^N k∈[m]} and π_n∘…∘π_1 has no non-trivial fixed subset.
This is possible since we may first choose π_1,…π_n-1 so that the first condition holds (using that A^* is a permutation set) and then choose π_n so that π_n∘…∘π_1 has no non-trivial fixed subset.
Let  = {a∈ A a_n = π_n-1(a_n-1)}.
Note that |Â| = m^n-1 and A^*⊆Â.
Let B^1,…,B^M⊆ A be a partition of Â ∖ A^* into permutation sets, where M = m^n-2 - 1.
(Note that M = 0 if n = 2.)
For example, one may take
B^s_-{1,n} = {(a_1,a_2 + s_2,…,a_n-1 + s_n-1, π_n-1(a_n-1 + s_n-1))∈[m]^N : a∈ A^*},
where s_-{1,n}∈ [m]^N∖{1,n}∖{0}.
Since s_-{1,n}≠ 0, the last condition in <ref> holds for B^l.
For all l∈[M], let G^l be the almost cyclic game for i = n, π_1,…,π_n, B^l, and α = (1,…,1).
In particular, G^l_n(a) = 0 for all a∈ B^l and G_n^l(a) = 1 for all a∈Â ∖ B^l.
Now let
Ĝ = ∑_l∈[M] G^l.
Then, the following hold.
* For all a∈ A^*, Ĝ_n(a) = M; for all a∈Â ∖ A^*, Ĝ_n(a) = M-1.
* For all j≠ n and a∈ A with a_j = π_j-1(a_j-1), Ĝ_j(a) = M.
* For all j∈ N and a∈ A with a_j ≠π_j-1(a_j-1), Ĝ_j(a) = 0.
Recall that for all j∈ N and (a_j-1,a_j)∈ [m]^2, there is a cyclic game G' for some π_1',…,π_n'∈Σ_[m] with a_j = π'_j-1(a_j-1) and arbitrary α'∈ℝ_++^N (see the remarks after <Ref>).
For all j∈ N and (a_j-1,a_j)∈ [m]^2 with a_j ≠π_j-1(a_j-1), let G^j,a_j-1,a_j be a cyclic game for some π_1',…,π_n'∈Σ_[m] with a_j = π'_j-1(a_j-1), α'_j' = 1 for all j'≠ j, and α'_j = M+1 if j≠ n and α'_j = M if j = n.
Then, let
G̃ = Ĝ + ∑ G^j,a_j-1,a_j + ∑ G',
where the first sum ranges over all j∈ N and (a_j-1,a_j)∈[m]^2 with a_j ≠π_j-1(a_j-1), and the second sum ranges over all cyclic games with α = (1,…,1) whose tuple of permutations does not already appear in one of the games in the first sum.
Since for all j∈ N and (a_j-1,a_j)∈[m]^2, the number of cyclic games with a_j = π'_j-1(a_j-1) is the same (assuming α is fixed), the following hold.
* For all a^*∈ A^* and a∈ A ∖ A^*, G̃_n(a^*) = G̃_n(a) + 1.
* For all j≠ n and a,a'∈ A, G̃_j(a) = G̃_j(a').
Hence, G = G̃ + β for some β∈ℝ^N.
More explicitly, if L is the total number of games in the first and second sum in (<ref>), then
* For all a∈ A^*, G̃_n(a) = M + L/m; for all a∈ A ∖ A^*, G̃_n(a) = M - 1 + L/m.
* For all j≠ i and a∈ A, G̃_j(a) = M + L/m.
(The denominator m comes from the fact that for a cyclic game G', the fraction of action profiles for which G'_j(a) = α_j is 1/m.)
This proves G can be written as a sum of games of the claimed types.
Multiplying each game in that sum by the same appropriately chosen positive scalar gives the representation of G as a convex combination.
The last lemma in this section roughly shows that every game with deterministic slice-stochastic tensors as payoff functions can be written as a convex combination of permutation games.
This conclusion is not literally true.
More precisely, we show that every game with deterministic slice-stochastic tensors as payoff functions is the blow-down of a convex combination of permutation games (with the same number of clones of every action for every player).
By <Ref> and <Ref>, each of the games in this sum can in turn be written as a convex combination of games for which uniform randomization is the unique equilibrium.
Consistency and consequentialism thus imply that uniform randomization has to be returned in the original game, possibly alongside other equilibria.
Let A_1,…,A_n∈ℱ(U) with |A_1| = … = |A_n| = m, and let A = A_1×…× A_n.
Let G be a game on A so that G_i is a deterministic slice-stochastic tensor for all i∈ N.
Let p∈□ A, where p_i is the uniform distribution on A_i.
Then, up to cloning actions, multiplying by positive scalars, and adding constants, G can be written as a convex combination of games for which p is the unique equilibrium.
More precisely, there are α∈ℝ_++^N, β∈ℝ^N, and a game G̅ so that
* G is a blow-down of αG̅ + β with surjection ϕ,
* ϕ_*(p̂) = p, where p̂_j is the uniform distribution on the actions of player j in G̅, and
* G̅ is a convex combination of games for which p̂ is the unique equilibrium.
First we observe that it suffices to consider the case that G is a player i payoff game (meaning that G_j = 0 for all j≠ i).
The idea now is to “blow up” G by introducing m^n-2 actions for each action of every player in G.
Then, one can define a permutation game Ĝ on the larger action sets so that for every action profile a in G for which player i has payoff 1, there is exactly one blow up â of a for which i has payoff 1 in Ĝ.
Since Ĝ is a permutation game, we know from <Ref> that, up to adding constants, it can be written as a convex combination of games for which the strategy profile where every player plays the uniform distribution is the unique equilibrium.
Now permuting all actions in Ĝ that come from blowing up the same action in G, summing up over all the resulting games, and multiplying by a positive scalar gives a game G̅ that is a blow-up of G.
This achieves the desired decomposition.
(<Ref> explains why the blowing up is necessary.)
As noted above, we may assume that there is i∈ N so that G is a player i payoff game.
Let A^* = {a∈ A : G_i(a) = 1} be the actions for which player i has payoff 1 in G.
For all j∈ N and a_j∈ [m], let B_j^a_j = {a_-j∈ A_-j : (a_j,a_-j) ∈ A^*} be the set of opponents' action profiles for which i gets payoff 1 when j plays a_j.
Since G_i is slice-stochastic, |B_j^a_j| = m^n-2.
(For j = i, this is <ref> in the definition of slice-stochastic games; for j≠ i, for all a_j and a_-{i,j}, there is exactly one a_i so that G_i(a_i,a_j,a_-{i,j}) = 1 by <ref>.)
Let Â_j = ⋃_a_j∈ A_j{a_j}× B_j^a_j and  = Â_1×…×Â_n.
Note that Â_j has size m^n-1.
Now let
Â^* = {((a_1,a_-1),…,(a_n,a_-n))∈Â : a∈ A^*}.
It is easy to see that Â^* is a permutation set in Â.
Let Ĝ be the permutation game for i on  for the permutation set Â^*.
By <Ref>, we have that Ĝ can be written as a sum of cyclic games, almost cyclic games, and a constant.
Moreover, by <Ref>, we know that p̂ is the unique equilibrium of cyclic and almost cyclic games, where p̂_j is the uniform distribution on Â_j.
For all j∈ N, let ϕ_j: Â_j→ A_j be projection onto the first coordinate, and let ϕ = (ϕ_1,…,ϕ_n).
That is, ϕ_j((a_j,a_-j)) = a_j.
Let Σ_j⊆Σ_Â_j be the set of all permutations π_j so that ϕ_j(â_j) = ϕ_j(π_j(â_j)) for all â_j∈Â_j (that is, all permutations that keep the first coordinate fixed).
Let Σ = Σ_1×…×Σ_n.
So we have that for all â∈Â and π∈Σ,
G_i(ϕ(â)) = G_i((ϕ∘π)(â)).
Consider the game
G̅ = M/|Σ|∑_π∈ΣĜ∘π,
where M = m^(n-2)n.
(The factor M is necessary since for all a∈ A, there are M profiles â∈Â with ϕ(â) = a.)
Since each â∈Â^* is completely determined by its first coordinates, for all a∈ A^*, there is exactly one â∈Â^* with ϕ(â) = a.
Thus, the following hold.
* For all â∈Â with ϕ(â)∈ A^*, G̅_i(â) = 1.
* For all â∈Â with ϕ(â)∈ A A^*, G̅_i(â) = 0.
* For all j≠ i and â∈Â, G̅_j(â) = 0.
So for all j∈ N and a_j∈ A_j, all actions in {a_j}× B_j^a_j are clones of each other in G̅.
This gives that G̅ is a blow-up of G.
Moreover, ϕ_*(p̂) = p since for all j∈ N and a_j∈ A_j, there is the same number of clones (namely m^n-2) in Â_j.
This gives the desired decomposition of G.
The “blowing up” of G to G̅ in the proof of <Ref> by introducing m^n-2 clones of every action is necessary since not every deterministic slice-stochastic tensor can be written as a sum of permutation tensors.
For example, let n = 3, m = 2, i = 1, and G_1(a) = 1 for a∈{(1,1,1),(1,2,2),(2,1,2),(2,2,1)}⊆{1,2}^3 and G_1(a) = 0 otherwise.
Then, there is no permutation set B⊆{1,2}^3 so that G_1(a) = 1 for all a∈ B.
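This claim can be verified by exhaustive enumeration; the following standalone sketch (written with the same 1,2 indexing as the example) checks all four permutation sets of {1,2}^3 against the support of G_1 and reports that none is contained in it.

from itertools import permutations

support = {(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)}

def permutation_sets(m=2):
    # All permutation sets {(k, p1(k), p2(p1(k))) : k in [m]} for n = 3 players.
    for p1 in permutations(range(1, m + 1)):
        for p2 in permutations(range(1, m + 1)):
            yield {(k, p1[k - 1], p2[p1[k - 1] - 1]) for k in range(1, m + 1)}

print(any(B <= support for B in permutation_sets()))  # False: no permutation set fits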
So far, we have shown the following.
To prove that p∈ f(G) whenever p∈ NE(G) and p has full support, it suffices to show this for the case when each G_i is slice-stochastic by <Ref>.
By the generalization of the Birkhoff-von Neumann theorem, <Ref>, the fact that in every such game uniform randomization is an equilibrium, and consistency, we can further restrict to G_i's that are deterministic slice-stochastic.
Now <Ref> shows that all those games can in essence be written as convex combinations of games for which uniform randomization is the unique equilibrium and, thus, has to be returned by f.
Consistency then gives the desired conclusion.
The last step is to extend the statement beyond full support equilibria.
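For the two-player case, the classical Birkhoff–von Neumann decomposition invoked in the summary above can be computed greedily. The sketch below is a generic illustration (not code accompanying the paper): it repeatedly extracts a permutation matrix supported on the positive entries of a doubly stochastic matrix using an assignment solver.

import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(D, tol=1e-9):
    # Write a doubly stochastic matrix D as sum_k w_k * P_k with permutation matrices P_k.
    D = D.astype(float).copy()
    terms = []
    while D.max() > tol:
        # Find a permutation supported on the positive entries of D
        # (such a permutation exists by the Birkhoff-von Neumann theorem).
        rows, cols = linear_sum_assignment(-(D > tol).astype(float))
        weight = D[rows, cols].min()
        P = np.zeros_like(D)
        P[rows, cols] = 1.0
        terms.append((weight, P))
        D -= weight * P
    return terms

D = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.25, 0.5],
              [0.25, 0.25, 0.5]])
for w, P in birkhoff_decomposition(D):
    print(w)
    print(P)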
§.§ Reduction to Full Support Equilibria
We show that it suffices to prove that all full support equilibria have to be returned, which we have done in the previous section.
There are three steps to the argument.
First, we reduce to equilibria with support equal to the set of all rationalizable actions, then to quasi-strict equilibria, and lastly to arbitrary equilibria.[We say that an action (profile) is rationalizable if it survives iterated elimination of strictly dominated actions.]
The strategy is always to write a game with some type of equilibrium as a convex combination of games where the same equilibrium is of the type in the preceding step.
For example, <Ref> shows that any nice total solution concept has to return all equilibria whose support consists of all rationalizable actions.
Let f be a nice total solution concept.
Let G be a game on A.
Let A̅_i⊆ A_i be the sets of rationalizable actions of i∈ N in G and A̅ = A̅_1×…×A̅_n.
Then, if p∈ NE(G) so that for all i∈ N, supp(p_i) = A̅_i, then p ∈ f(G).
Consider a decreasing sequence of action profiles obtained by successively removing dominated actions until no more deletions are possible.
That is, let (A_1^0,…,A_n^0),…,(A_1^K,…,A_n^K)∈ 2^A_1×…× 2^A_n so that for all k∈[K], there is i∈ N for which the following holds.
* A_i^k ⊊ A_i^k-1 and for all j≠ i, A_j^k = A_j^k-1.
* For all a_i∈ A_i^k-1∖ A_i^k, there is an action ψ(a_i) ∈ A_i^k that dominates a_i when restricting G to A_1^k-1×…× A_n^k-1.
* For all j∈ N, A_j^0 = A_j and A_j^K = A̅_j.
For all i∈ N and a_i∈ A_i∖A̅_i, let ψ̅(a_i) = ψ^s(a_i)∈A̅_i, where s∈ℕ is the unique power so that ψ^s(a_i)∈A̅_i (here we mean ψ applied s times).
Denote by G̅ the game G restricted to action profiles in A̅.
We make several reductions.
By consequentialism, we may assume that |A̅_1| = … = |A̅_n|.
By <Ref> and <Ref>, we may assume that for all i∈ N, G̅_i is slice-stochastic and p_i is the uniform distribution on A̅_i.
By consequentialism, <Ref>, and <Ref>, we may further assume that G̅ can be written as a convex combination of games for which p is the unique equilibrium.
That is, there are games G̅^1,…,G̅^M on A̅ so that for all m∈[M], p is the unique equilibrium of G̅^m, and
G̅ = 1/M∑_m∈[M]G̅^m.
For all m∈[M], we define a game G^m on A so that for all i∈ N and a∈ A,
G_i^m(a) =
G̅_i^m(a) if a∈A̅,
G_i(a) + G̅_i^m(ψ̅(a_i),a_-i) - G_i(ψ̅(a_i),a_-i) if a ∈ (A_i∖A̅_i)×A̅_-i, and
G_i(a) if a_-i∈ A_-i∖A̅_-i.
Since ψ̅(a_i) = ψ̅(a_i') for a_i,a_i'∈ A_i∖A̅_i with ψ(a_i) = a_i', we have that A̅ is the set of rationalizable action profiles of G^m.
It follows that p is the unique equilibrium in G^m and so p∈ f(G^m).
Observe that for a ∈ (A_i∖A̅_i)×A̅_-i, we have by (<ref>) that
∑_m∈[M]G̅_i^m(ψ̅(a_i),a_-i) - G_i(ψ̅(a_i),a_-i) = 0.
Hence, G = 1/M∑_m∈[M] G^m.
Consistency then implies that p∈ f(G).
A profile p∈□ A is a quasi-strict equilibrium of G if p∈ NE(G) and
G_i(a_i,p_-i) > G_i(a_i',p_-i) for all a_i∈ supp(p_i), a_i'∈ A_i∖ supp(p_i), and i∈ N.
We show that nice total solution concepts have to return quasi-strict equilibria.
Let f be a nice total solution concept.
Let G be a game on A.
Then, if p∈ NE(G) is quasi-strict, then p ∈ f(G).
For all i∈ N, let Â_i = supp(p_i) and  = Â_1×…×Â_n.
By consequentialism, we may assume that |Â_i| = 2|A_i∖Â_i| and the number of clones of each action in Â_i is even.
Moreover, by <Ref> and the remarks thereafter, we may assume that p_i is the uniform distribution on Â_i.
Write Â_i = {a_i^1,…,a_i^K,b_i^1,…,b_i^K} so that a_i^k and b_i^k are clones for all k∈[K] and A_i∖Â_i = {c_i^1,…,c_i^K}.
The idea is to write G as a convex combination of two games G^1,G^2 for which all actions in A_i∖Â_i are dominated and p is an equilibrium of G^1,G^2.
<Ref> and consistency of f will then give that p∈ f(G).
Let i∈ N.
Since p is quasi-strict and p_-i is uniform on Â_-i, we have for all a_i∈Â_i and a_i'∈ A_i∖Â_i,
∑_a_-i∈Â_-i G_i(a_i,a_-i) > ∑_a_-i∈Â_-i G_i(a_i',a_-i).
For all k∈[K], let v_i^k∈ℝ^A_-i so that
∑_a_-i∈Â_-i v_i^k(a_-i) = 0
and for all a_-i∈ A_-i,
G_i(a_i^k,a_-i) + v_i^k(a_-i) = G_i(b_i^k,a_-i) + v_i^k(a_-i) > G_i(c_i^k,a_-i).
By (<ref>), such v_i^k exist.
(Note that the sum in (<ref>) is taken over Â_-i and (<ref>) is required to hold for all action profiles in A_-i.)
Now define games G^1,G^2 on A as follows.
For all i∈ N and a∈ A,
G^1_i(a) =
G_i(a_i^k,a_-i) + v_i^k(a_-i) if a_i = a_i^k for some k∈[K],
G_i(b_i^k,a_-i) - v_i^k(a_-i) if a_i = b_i^k for some k∈[K], and
G_i(c_i^k,a_-i) if a_i = c_i^k for some k∈[K].
Define G^2 similarly with the roles of a_i^k and b_i^k exchanged.
By (<ref>), p is an equilibrium of G^1,G^2, and by (<ref>), all actions in A_i∖Â_i are dominated.
More specifically, in G^1, each c_i^k is dominated by a_i^k and in G^2, each c_i^k is dominated by b_i^k.
Hence, the set of rationalizable action profiles in G^1,G^2 is Â.
By <Ref>, p∈ f(G^1)∩ f(G^2).
Since G = 1/2 G^1 + 1/2 G^2, consistency implies that p∈ f(G).
<Ref> together with consequentialism allows us to push slightly beyond quasi-strict equilibria.
If p is an equilibrium of a game G so that for every player i, every action of i that is a best response against p_-i is either in the support of p_i or a clone of such an action, then we get from consequentialism that p∈ f(G).
In that case, we say that p is essentially quasi-strict.
Let G be a game on A.
An equilibrium p of G is essentially quasi-strict if there is a blow-down G' of G with surjection ϕ so that ϕ_*(p) is a quasi-strict equilibrium of G'.
Similarly, one could define essentially unique and essentially full support equilibria, but we will not need these notions.
Note that if a solution concept satisfies consequentialism and returns all quasi-strict equilibria, then it also has to return all essentially quasi-strict equilibria.
We use this fact in the proof of the last step: if a solution concept satisfies consequentialism and consistency and returns all quasi-strict equilibria, then it in fact has to return all equilibria.
Let f be a solution concept that satisfies consequentialism and consistency so that p∈ f(G) whenever p is a quasi-strict equilibrium of G.
Then, NE ⊆ f.
Let G be a game on A and p∈ NE(G).
For i∈ N, let Â_i = supp(p_i) and for any game G' on A with p∈ NE(G'), let
A̅_i(G') = {a_i∈ A_i∖Â_i : G'_i(a_i,p_-i) = G'_i(p_i,p_-i) and a_i is not a clone of an action in Â_i}.
That is, A̅_i(G') is the set of actions that are best responses against p_-i and not in the support of p_i or clones of actions in the support of p_i.
We write A̅_i = A̅_i(G).
Note that p is an essentially quasi-strict equilibrium of G' if A̅_i(G') = ∅ for all i∈ N.
We prove that p∈ f(G) by induction on the number of players for which A̅_i≠∅.
If A̅_i = ∅ for all i∈ N, the statement follows from the assumption that f satisfies consequentialism and returns quasi-strict equilibria.
Otherwise, let i∈ N with A̅_i≠∅.
We write G as a convex combination of two games G^1,G^2 so that p∈ NE(G^1)∩ NE(G^2), and
{j∈ N : A̅_j(G^l)≠∅}⊆{j∈ N : A̅_j≠∅}∖{i}
for l = 1,2.
By <Ref> and the remarks thereafter, we may assume that p_i is the uniform distribution on Â_i.
Moreover, by consequentialism, we may assume that Â_i = {a_i^1,…,a_i^K,b_i^1,…,b_i^K}, A̅_i = {c_i^1,…,c_i^K}, and a_i^k,b_i^k are clones in G for all k∈[K].
Let G^1,G^2 be games on A so that for all j∈ N and a∈ A,
G^1_j(a) =
G_j(c_i^k,a_-i) if a_i = a_i^k or a_i = c_i^k for some k∈[K],
G_j(a_i^k,a_-i) + G_j(b_i^k,a_-i) - G_j(c_i^k,a_-i) if a_i = b_i^k for some k∈[K], and
G_j(a_i,a_-i) if a_i∈ A_i∖ (Â_i∪A̅_i),
and
G^2_j(a) =
G_j(c_i^k,a_-i) if a_i = b_i^k or a_i = c_i^k for some k∈[K],
G_j(a_i^k,a_-i) + G_j(b_i^k,a_-i) - G_j(c_i^k,a_-i) if a_i = a_i^k for some k∈[K], and
G_j(a_i,a_-i) if a_i∈ A_i∖ (Â_i∪A̅_i).
Then, G = 1/2 G^1 + 1/2 G^2 and for all k∈[K], a_i^k,c_i^k are clones in G^1 and b_i^k,c_i^k are clones in G^2.
Moreover, p∈ NE(G^1)∩ NE(G^2) by the definition of A̅_i.
To see that p_i is a best response to p_-i, recall that for all k∈[K],
G_i(a_i^k,p_-i) = G_i(b_i^k,p_-i) = G_i(p_i,p_-i) = G_i(c_i^k,p_-i),
since c_i^k is assumed to be in A̅_i.
So for all a_i∈ A_i, G^1_i(a_i,p_-i) = G^2_i(a_i,p_-i) = G_i(a_i,p_-i).
Also, for j≠ i and a_j∈ A_j, we have
G_j(a_j,p_-j) = 1/2K∑_k∈[K] G_j(a_j,a_i^k,p_-{i,j}) + G_j(a_j,b_i^k,p_-{i,j})
= 1/2K∑_k∈[K] G^1_j(a_j,a_i^k,p_-{i,j}) + G^1_j(a_j,b_i^k,p_-{i,j})
= G^1_j(a_j,p_-j),
where we use that p_i is uniform on Â_i for the first equality, and the definition of G^1 for the second equality.
In particular, p_j is a best response to p_-j in G^1.
The same holds for G^2 with a similar argument.
Thus, (<ref>) holds.
Now, by induction, p∈ f(G^1)∩ f(G^2), and so by consistency, we get p∈ f(G).
The fact that any nice total solution concept is a coarsening of NE now follows from <Ref> and <Ref>.
This finishes the proof of <Ref>.
§ OMITTED PROOFS FROM <REF>
Throughout this section, we consider equivariant solution concepts that are δ-nice for some small enough δ.
Let G be a game with action profiles A and fix a strategy profile p∈ f(G).
<Ref> below then states that a new game Ĝ can be obtained by adding an extra action â_i to the action set of every player i∈ N such that the following holds.
* There is a profile q∈□ A close to p so that for all i∈ N, if i plays â_i in Ĝ, all players get the same payoff as if i played q_i in G.
* There is a profile p̂∈ f(Ĝ) so that for all i∈ N, p̂_i has all but a small fraction of probability on â_i.
Let δ > 0 and f an equivariant solution concept that satisfies δ-consequentialism and δ-consistency.
Let G be a game on A and p∈ f(G).
Then, there is a game Ĝ with action set Â_i = A_i∪{â_i} for all i∈ N so that the following holds.
* There is q∈ B_3δ(p) such that Ĝ(â_I, a_-I) = G(q_I,a_-I) for all I ⊆ N
and a_-I∈ A_-I.
* There is p̂∈ f(Ĝ) such that p̂_i(â_i) ≥ 1 - 3δ for all i∈ N.
For all i∈ N, let k_i = |A_i|⌈1/δ⌉.[For x∈ℝ, ⌈ x⌉ is the smallest integer that is at least as large as x. Similarly, ⌊ x⌋ is the largest integer that is at most as large as x.]
For each a_i ∈ A_i, let A_i^a_i∈ℱ(U), all disjoint and disjoint from A_i, with |A_i^a_i| = k_i.
Let G' be a game with action set A_i' = A_i∪⋃_a_i∈ A_i A_i^a_i for all i∈ N, and G a blow-down of G' with surjections ϕ_i: A_i'→ A_i so that ϕ_i^-1(a_i) = {a_i}∪ A_i^a_i for all a_i∈ A_i and i∈ N.
That is, G' results from G by adding k_i clones of a_i for each action a_i.
Since f satisfies δ-consequentialism, ϕ_*^-1(p)⊆ B_δ(f(G')).
We construct p'∈ f(G') so that p'_i assigns probability at least 1-δ uniformly to some subset Ã_i of ⋃_a_i∈ A_i A_i^a_i.
Let p̃∈ B_δ(p) ∩ϕ_*(f(G')), which exists since f satisfies δ-consequentialism.
Let l_i = ⌊ k_i p̃_i⌋∈{0,…,k_i}^A_i and r_i = k_i p̃_i - l_i∈ [0,1)^A_i.
For each a_i∈ A_i, choose a subset Ã_i^a_i of A_i^a_i with |Ã_i^a_i| = l_i(a_i) and let Ã_i = ⋃_a_i∈ A_iÃ_i^a_i.
Let p'∈□ A' such that for all i∈ N and a_i∈ A_i,
p'_i(a_i') =
1/k_i for a_i' ∈Ã_i^a_i, and
r_i(a_i)/k_i· 1/|{a_i}∪ A_i^a_i - Ã_i^a_i| for a_i' ∈{a_i}∪ A_i^a_i - Ã_i^a_i.
Observe that for all a_i∈ A_i,
p'({a_i}∪ A_i^a_i) = |Ã_i^a_i|/k_i + r_i(a_i)/k_i = l_i(a_i)/k_i + r_i(a_i)/k_i = p̃_i(a_i).
Hence, p' is well-defined and ϕ_* p' = p̃.
By the choice of p̃, p'∈ f(G').
Recall that r_i∈ [0,1)^A_i, and so
|r_i|/k_i < |A_i|/k_i≤δ.
Let q∈□ A with q_i = (p̃_i - r_i/k_i)/|p̃_i - r_i/k_i|.
Since |r_i|/k_i < δ, it follows that q∈ B_2δ(p̃)⊆ B_3δ(p).
For all i∈ N, let Σ̃_A_i'⊆Σ_A_i' be the permutations that map the set {a_i}∪ A_i^a_i - Ã_i to itself for all a_i∈ A_i, and let Σ̃_A' = Σ̃_A_1'×…×Σ̃_A_n'.
Then, p' = p'∘π for all π∈Σ̃_A' since p'_i is a uniform distribution on Ã_i and the uniform distribution on {a_i}∪ A_i^a_i - Ã_i^a_i for every a_i∈ A_i and i∈ N.
Thus, equivariance of f implies that p' = p'∘π∈ f(G'∘π).
Let
G̅ = 1/|Σ̃_A'|∑_π∈Σ̃_A' G'∘π.
It follows from δ-consistency that p'∈ B_δ(f(G̅)).
Let p̅∈ f(G̅)∩ B_δ(p').
By construction, for all i∈ N, all actions in Ã_i are clones of each other in G̅, and for each a_i∈ A_i, all actions in {a_i}∪ A_i^a_i - Ã_i^a_i are clones of each other.
Let Ĝ be the blow-down of G̅ with action set Â_i = A_i∪{â_i} for all i∈ N and surjections ϕ̂_i: A_i'→Â_i so that ϕ̂_i^-1(â_i) = Ã_i and ϕ̂_i^-1(a_i) = {a_i}∪ A_i^a_i-Ã_i^a_i for all a_i∈ A_i.
Since f satisfies δ-consequentialism, there is p̂∈ f(Ĝ) so that ϕ̂_*(p̅) ∈ B_δ(p̂).
Hence, for all i∈ N,
∑_a_i∈ A_ip̂_i(a_i) ≤∑_a_i∈ A_ip̅_i({a_i}∪ A_i^a_i - Ã_i^a_i) + δ≤∑_a_i∈ A_i p'_i({a_i}∪ A_i^a_i - Ã_i^a_i) + 2δ≤ 3δ,
where the last inequality uses the definition of p_i' and |r_i|/k_i≤δ.
Equivalently, p̂_i(â_i) ≥ 1 - 3δ for all i∈ N.
Moreover, by construction of Ĝ, Ĝ(â_I, a_-I) = G(q_I,a_-I) for all I⊆ N, and a_-I∈ A_-I.
Now consider a game Ĝ with action profiles  and let â∈Â, a_i∈Â_i, and i∈ N such that action a_i yields strictly more payoff against â_-i than action â_i.
<Ref> below shows that if there is p̂∈ f(Ĝ) so that p̂_j assigns probability close to 1 to â_j for each j∈ N, then there is a game G and p∈ f(G) such that p_i assigns probability close to 1 to a dominated action.
Let ε,δ> 0 with 4(⌈1/ε⌉ + 1)δ≤ (1-2δ) and 2δ≤ε.
Let f be an equivariant solution concept that satisfies δ-consequentialism and δ-consistency.
Let Ĝ be a normalized game with action sets Â_j = A_j∪{â_j} for all j∈ N and let i∈ N and a_i∈ A_i so that Ĝ_i(a_i,â_-i) > Ĝ_i(â_i,â_-i) + ε.
Then, if p̂∈ f(Ĝ) with p̂_j(â_j) ≥ 1-δ for all j∈ N, there is a game G̅ and p̅∈ f(G̅) so that p̅_i assigns probability at least 1-3δ to an action that is δ-dominated in G̅.
Let M = 2(⌈1/ε⌉ + 1).
For each j∈ N, let A_j = {a_j^1,…,a_j^|A_j|}.
Let G' be a game with action set A_i' = Â_i for i and A_j' = Â_j∪{a_j^k,l : k∈[|A_j|], l ∈[M]} so that each a_j^k,l is a clone of â_j for all j∈ N∖{i}.
That is, let ϕ_i: A_i'→Â_i be the identity, and for all j∈ N∖{i}, let ϕ_j: A_j'→Â_j be the identity on A_j and ϕ_j^-1(â_j) = {â_j}∪{a_j^k,l : k∈[|A_j|],l∈[M]}.
Then, Ĝ is a blow-down of G' with surjection ϕ = (ϕ_1,…,ϕ_n).
Since f satisfies δ-consequentialism, there is p̂'∈ B_δ(p̂) so that p'∈ f(G') whenever ϕ_*(p') = p̂'.
In particular, there is p'∈ f(G') with
∑_k = 1^|A_j| p'_j(a_j^k) ≤ 2δ for all j∈ N, and
p'_j(a_j^k,l) = p̂'_j(a_j^k) for all k∈[|A_j|], l∈[M], and j∈ N∖{i}.
The latter condition can be satisfied since
M∑_k = 1^|A_j|p̂'(a_j^k) ≤ 2Mδ≤ (1 - 2δ) ≤p̂'(â_j) = ∑_a_j∈ϕ_j^-1(â_j) p'(a_j),
for all j∈ N∖{i}.
Let Σ'_i⊆Σ_A_i' be the set consisting of the identity permutation on U, and for all j∈ N∖{i}, let Σ'_j⊆Σ_A_j' be the set of permutations that map {a_j^k}∪{a_j^k,l : l∈ [M]} to itself for all k∈[|A_j|].
Hence, a large set of clones of â_j is permuted with each action a_j^k in all possible ways.
Let Σ' = Σ'_1×…×Σ'_n.
Note that, by construction, p'∘π = p' for π∈Σ'.
Let
G̅ = 1/|Σ'| ∑_π∈Σ' G'∘π.
For all j∈ N∖{i} and k∈[|A_j|], {a_j^k}∪{a_j^k,l : l∈[M]} is a set of clones in G̅.
Since Ĝ_i(a_i,â_-i) > Ĝ_i(â_i,â_-i) + ε, M = 2(⌈1/ε⌉ + 1), and Ĝ is normalized, G̅_i(a_i,a_-i) > G̅_i(â_i,a_-i) + ε/2 for all a_-i∈ A_-i'.
Hence, a_i δ-dominates â_i.
Lastly, since f satisfies δ-consistency, there is p̅∈ f(G̅) so that for all j∈ N,
p̅_j(â_j) ≥ p_j'(â_j) - δ≥ 1 - 3δ.
Given ε > 0, let δ > 0 so that
3(n-1)δ ≤ε/4,
4(⌈3/ε⌉ + 1)3δ ≤ (1-6δ), and
1-9δ > 1/2.
Assume that f satisfies δ-consequentialism, δ-consistency, and δ-rationality, but is not a refinement of NE_ε on the set of normalized games.
Then, there is a normalized game G on A, i∈ N, a_i∈ A_i, and p∈ f(G) so that
G_i(a_i,p_-i) > G_i(p_i,p_-i) + ε.
Observe that for all q∈ B_3δ(p) and p_i'∈Δ A_i,
|G_i(p_i',p_-i) - G_i(p_i',q_-i)| ≤ 3(n-1)δ.
By <Ref>, there is a game Ĝ with action set Â_j = A_j∪{â_j} for all j∈ N so that the following holds.
* There is q∈ B_3δ(p) such that Ĝ(â_I, a_-I) = G(q_I,a_-I) for all I ⊆ N and a_-I∈ A_-I.
* There is p̂∈ f(Ĝ) such that p̂_j(â_j) ≥ 1 - 3δ for all j∈ N.
Applying <ref> with I = N∖{i}, it follows from (<ref>), (<ref>), and (<ref>) that
Ĝ_i(a_i,â_-i) = G_i(a_i,q_-i) ≥ G_i(a_i,p_-i) - 3(n-1)δ
> G_i(p_i,p_-i) + 3ε/4≥ G_i(p_i,q_-i) + 2ε/4
≥ G_i(q_i,q_-i) + ε/4 = Ĝ_i(â_i,â_-i) + ε/4.
By (<ref>) and (<ref>), <Ref> applied to ε/4 and 3δ gives a game G' and p̅∈ f(G') so that p̅_i assigns probability at least 1 - 9δ to a 3δ-dominated action.
Since 1 - 9δ > 1/2 by (<ref>), this contradicts δ-rationality.
|
http://arxiv.org/abs/2307.02835v1
|
20230706080350
|
Intracellular Dynamics of Hepatitis B Virus Infection: A Mathematical Model and Global Sensitivity Analysis of Its Parameters
|
[
"Rupchand Sutradhar",
"D C Dalal"
] |
math.DS
|
[
"math.DS"
] |
Intracellular Dynamics of Hepatitis B Virus Infection: A Mathematical Model and Global Sensitivity Analysis of Its Parameters
Rupchand Sutradhar and D C Dalal
================================================================================================================================
§ ABSTRACT
Analysis of a cell population can reveal only average information about the viral infection in the host, whereas single-cell analysis can capture the behaviour of individual cells. Single-cell analysis can also provide new ways to
explore viral diversity and identify intrinsic extreme phenotypes of cells. In this study, a single-cell hepatitis B virus (HBV) infection model is proposed by considering all possible intracellular steps that are observed in the viral life cycle. To the best of our knowledge,
it is the most generalized model to date.
The effects of newly introduced factors or components, such as cccDNA, HBx proteins, surface proteins, and double-stranded linear DNA-containing capsids, are explained very well by this model. The intracellular delay is also incorporated into the proposed model, and it is seen to have no significant impact on the persistence of infection.
The global sensitivity analysis of the parameters is also performed using partial rank correlation coefficients based on the Latin hypercube sampling method. According to the PRCC values, the most positively and most negatively sensitive parameters are identified. Moreover, it is observed that the availability of viral surface proteins switches the replication pattern from acute to chronic, whereas there is no considerable contribution of HBx proteins to the progression of HBV infection. Another striking result is that the recycling of capsids appears to act as a positive feedback loop throughout the infection.
Keywords: Single-cell model, Hepatitis B, cccDNA, Global sensitivity analysis, Partial rank correlation coefficient
§ INTRODUCTION
Hepatitis B virus (HBV) belongs to the family Hepadnaviridae. It is a non-cytopathic virus that causes various kinds of serious liver disease such as liver damage, cirrhosis, and liver cancer or hepatocellular carcinoma (HCC). HCC is the second leading cause of cancer-related death in humans. Currently, HBV is one of the most common liver infections and a global public health problem. It is considered up to 100 times more infectious than HIV/AIDS <cit.>. HBV has affected two billion individuals throughout the world. Every year, around 1.5 million people become newly infected despite the existence of an effective vaccine. Almost 300 million people are chronically infected. Around 10% of infected persons are diagnosed, and an estimated 820,000 people die each year as a result of HBV infection and its accompanying consequences, such as liver cancer <cit.>.
There are many limitations to the current treatment procedures, which often fail to provide long-term virologic control. The majority of infected people require lifelong therapy once it is initiated, because the current treatments can only reduce the possible risks of progressing cirrhosis and hepatocellular carcinoma <cit.>. In most cases, the available treatments can control but rarely cure the disease. It is true that there are some approved antiviral drugs, including interferon (IFN)-alpha-2a, pegylated (PEG)-IFN-alpha-2a (immune system modulators), and some nucleoside analogues, such as lamivudine, adefovir, entecavir, telbivudine, and tenofovir. However, no single therapy is sufficient to cure a chronic HBV patient <cit.>. Unfortunately, the virus often rebounds after antiviral therapy is discontinued <cit.>. A large body of evidence shows that the persistence of covalently closed circular DNA (cccDNA) is one of the major obstacles to eliminating this viral infection <cit.>. There are some other reasons for this, including the fact that HBV is a DNA virus whose replication process is very complicated compared to that of other viruses, deficiency of immune responses, drug-drug interactions, and drug resistance <cit.>.
Although a lot of studies have been done over the last two decades, the dynamics of this viral infection is not yet well understood.
Most previous studies <cit.> of this viral infection have mainly concentrated on the cell population. By analyzing the cell population, one can get a general overview of the clinical signs and progression of viral infection. Cell population analysis is an important and useful tool for investigating cell activities throughout the infection period and the response of the immune system against the virus. In a cell population, there are several types of cells that can differ in identity, state, function, etc. Cells are naturally heterogeneous, and this heterogeneity arises in part from variation in DNA sequence <cit.>. In the literature, when cell populations are considered to study any kind of viral infection, the total population of cells is divided into two classes: uninfected and infected cells. Due to this classification, some salient features of the cells, the roles of some specific phenotypes, the impacts of some intracellular components of the virus, and the effects of several parameters involved in the infection have been overlooked <cit.>. Thus, it is important to explore intrinsic processes at the level of single cells.
In 1940, Delbruck considered phage-infected E. coli cells to study the heterogeneity of virus-infected cells. In this experiment, it was observed that the amount of progeny virus released from each cell differs significantly, which revealed a surprisingly broad distribution of virus growth throughout the infection <cit.>. With the development of single-cell technologies, infection kinetics and quantification of the burst size of vesicular stomatitis virus (VSV) have been studied. The virus titers differ from cell to cell during viral infection, which suggests a high degree of cell-to-cell variation <cit.>. According to some recent studies on influenza A virus (IAV), foot-and-mouth disease virus (FMDV), and poliovirus, virion levels vary from cell to cell, unlike the phenomena observed in experiments involving multicellular populations <cit.>.
Zhu et al. <cit.> found that the cell size and cell cycle of the host are also two major factors that contribute to the variability of virus yields among single cells.
Nowadays, single-cell analyses have become a significant milestone in many fields, including immunology, oncology, stem cells, virology, etc <cit.>.
The following are some key advantages of single-cell analysis in virology:
* The dynamics of infection in each infected cell can be studied at the micro level.
* The intracellular components of the virus that have significant influence on infection can be identified.
* One can determine the most sensitive parameter for an infection.
* A large cell population analysis may overlook individual cell responses to the virus during the infection period.
* Because of the inherent heterogeneity in healthy and diseased cells, drug discovery & development, diagnostics, and prognostics encounter significant obstacles. These challenges can be mitigated by analyzing infected single cells.
Viral infections involve multiple steps that include attachment & entry, genome trafficking, fusion, nuclear import, expression of viral genes, replication of the genome, and release of progeny. In the case of HBV, there are some additional steps in the replication process, such as rcDNA repair, translation, transcription, reverse transcription, and recycling of capsids, which complicate the viral life cycle <cit.>. This infection can spread in two ways: cell-to-cell and virus-to-cell. The primary mode of transmission during hepatitis B virus infection is cell-to-cell transmission <cit.>. Although many authors have attempted to study this virus infection from different perspectives, curing this disease remains a challenging task. For example, in 1996, Nowak et al. <cit.> presented a basic model of virus dynamics considering three compartments: uninfected cells, infected cells, and viruses. Min et al. <cit.> modified this basic model <cit.> by replacing the mass action term with a standard incidence function (SIF) and concluded that the SIF is more appropriate than the mass action term to describe this kind of viral infection kinetics. Liu et al. <cit.> studied a modified age-structured model and proved that the age of infected hepatocytes plays a crucial role in HBV infection.
The host immune response plays a significant role in regulating this viral infection <cit.>. Two main factors in immune-mediated clearance are CTL and non-CTL effects, although the CTL effects cannot eliminate the infection on their own <cit.>.
Fatehi et al. <cit.> found that natural killer cells, which are part of the innate immune system, kill infected hepatocytes by producing perforin and granzymes. On the other hand, rather than rapid release of HBV virions from infected hepatocytes, the accumulation of HBV DNA-containing capsids in the infected cell increases the risk of exacerbation of hepatitis <cit.>.
Besides the studies mentioned above, a variety of mathematical models <cit.> have been developed to investigate the dynamics of HBV transmission. Most of them considered the large cell population. In this study, an intracellular dynamics model is proposed based on the biological and clinical findings with some basic assumptions. All possible intracellular components of virus life cycle (rcDNA, cccDNA, HBx-proteins, polymerase, surface proteins, single stranded and double-stranded DNA, double-stranded linear DNA, etc) and parameters associated with the infection are considered in this model in order to make it more realistic and reliable. To the best of our knowledge, it is the first intracellular model that depicts all possible targets by various antiviral techniques. The following topics are discussed in this study as main contributions:
* The effects of initial concentration of cccDNAs in HBV infection dynamics.
* Effects of HBx proteins on infection.
* The roles of intracellular delay in disease dynamics.
* Impacts of surface proteins on infection.
* The contributions of double-stranded linear DNA-containing capsids on cccDNA as well as on virus.
* Recycling mechanism of double-stranded DNA-containing capsids.
* The global sensitivity analysis of model parameters.
§ MODEL FORMULATION
§.§ Intracellular dynamics model
HBV is a member of the Hepadnaviridae family and, by virtue of its exceptional characteristics, it replicates through an RNA intermediate, similarly to retroviruses. In this way, the replication cycle of HBV is able to perpetuate the infection in hepatocytes through its unique features <cit.>.
HBV replication begins when the virus enters the liver cells through the sodium taurocholate cotransporting polypeptide (NTCP) receptor via receptor-mediated endocytosis, although HBV first binds to heparan sulfate proteoglycans (HSPGs) with a low affinity. Inside the hepatocytes, the virus releases its core particle, i.e., the relaxed circular DNA (rcDNA)-containing capsid (Step 1: Figure <ref>).
This model does not explicitly include the role of the NTCP and HSPG receptors in HBV entry. Our study focuses on the intracellular infection dynamics of HBV. The numbers of viruses and rcDNA-containing capsids are designated by V and R, respectively. It is considered that the viruses uncoat their core particles with rate α_1, and δ_r is the decay rate of rcDNA. The parameter α_2 represents the rate at which cccDNA is formed from rcDNA, and it is discussed in detail below. The corresponding differential equation is:
dR/dt = α_1 V-α_2 R-δ_r R.
In order to release the viral genome (rcDNA), HBV nucleocapsids travel to the nucleus of hepatocytes. In order to overcome the high viscosity of the cytoplasm, HBV utilizes the microtubular network for efficient nuclear delivery. In addition, microtubule-dependent movement may also provide direct path to the nucleus periphery. In the cytoplasm (or at the nuclear pore), capsids are disassembled or partially disassembled. This causes the signal for nuclear localization (NLS) to be exposed. After the interaction of NLS with the capsids, the capsids attach to nuclear transport factors (importin α and β) and are transported into the nuclear baskets. The nuclear pores complex (NPC), which serves as a gatekeeper to the nucleus, plays an important role in the entry of HBV genomes into the nucleus. Mature capsids bind to nucleoporin 153 (Nup 153) in the nuclear basket, enter the nucleus, and disintegrate to release the viral genome. In the first step after genome entry, the rcDNA is repaired by the host DNA repair mechanism and is converted to covalently closed circular DNA (cccDNA) (Step 2: Figure <ref>). We denote the cccDNAs by C. There are different ways in which cccDNAs are lost, such as cell proliferation, cell death due to cytolytic immune response, cell cure due to non-cytolytic immune response, and natural death of infected cell. Only the natural decay rate of cccDNAs (δ_c) is considered in this model.
This reaction is described as
dC/dt = α_2 R+k_1e^-λ S_p D-δ_c C.
Here, the term k_1e^-λ S_p D represents the recycling of rcDNA-containing capsids. The details about the recycling of capsids are discussed later. In HBV replication, cccDNA plays a key role in transcription. In the nucleus, cccDNA forms minichromosomes that are the source of pregenomic RNA (pgRNA) and other viral RNAs. There are five different sets of mRNAs generated from the viral cccDNA, encoded by the four main genes through a series of long overlapping reading frames <cit.>. The mRNAs are three subgenomic mRNAs (0.7 kb mRNA, 2.1 kb mRNA and 2.4 kb mRNA) and two genomic mRNAs of 3.5 kb. These are all heterogeneous and positively oriented <cit.>. 3.5 kb mRNA is designated by R_g. As both 2.4 kb mRNA and 2.1 kb mRNA produce surface proteins, these are treated as a single compartment, R_s, in our model. 0.7 kb mRNA is represented by R_h. The parameters λ_rg, λ_rs, and λ_rh are the transcription rates of 3.5 kb mRNA, (2.4 kb and 2.1 kb) mRNA, and 0.7 kb mRNA, respectively. The HBV X protein (HBx), which is produced from 0.7 kb mRNA and denoted by H, prevents cccDNA from becoming silent. HBx is the sole regulatory protein encoded by HBV, and it possesses multi-functional roles. HBx promotes the degradation of the structural maintenance of chromosomes (SMC) complex and can enhance the transcription rate of cccDNA <cit.>. In addition, HBx inhibits the development of the immune response to HBV infection, preventing apoptosis of infected hepatocytes <cit.>. Therefore, HBx plays some important roles in HBV replication. In this study, the functions of HBx are included. Mathematically, these are described as follows (Step 3a, 3b, 3c: Figure <ref>):
dR_g/dt= λ_rgΦ C-μ_1R_g P-δ_r_gR_g,
dR_s/dt= λ_rsΦ C+λ_sdl D_L-λ_s_p R_s-δ_r_sR_s,
dR_h/dt= λ_rhΦ C-δ_r_hR_h,
where δ_r_g, δ_r_s, and δ_r_h are the corresponding decay rates of R_g, R_s, and R_h.
Φ denotes the volume fraction of active cccDNA. The de-silencing of cccDNA depends on the concentration of HBx proteins. It is considered as
Φ = 1 - 1/(1/(1-Φ_0) + H), where Φ_0 is the initial volume fraction of active cccDNA; thus Φ = Φ_0 in the absence of HBx (H = 0) and Φ approaches 1 as H grows. We consider that 2.4 kb and 2.1 kb mRNAs are also produced from double-stranded linear DNA (dslDNA)-containing capsids with production rate λ_sdl. It will be discussed later how dslDNA, which is denoted by D_L, contributes to HBV infection. The meanings of the two terms μ_1R_g P in equation (<ref>)
and λ_s_p R_s in equation (<ref>) are explained later.
As a result of translation of these mRNAs by ribosomes, viral proteins are synthesized. A portion of 3.5 kb RNA is translated into viral polymerase, while another portion is translated into core protein. P stands for viral polymerase, and C_p represents the viral core protein. λ_p and λ_c indicate the subsequent translation rate of the polymerase and core protein. 0.7 kb mRNA is translated into HBx proteins with translation rate λ_h and decay rate δ_h.
These phenomena can be described mathematically using the given relations (Step 4a, 4b, 7: Figure <ref>):
(Step 4a: Figure <ref>) dP/dt = λ_p R_g-μ_1R_gP-δ_p P,
(Step 4b: Figure <ref>) dC_p/dt=λ_c R_g-μ_2 ZC_p-δ_c_pC_p,
(Step 7: Figure <ref>) dH/dt= λ_h R_h-δ_h H,
where, δ_p and δ_c_p are the decay rates of polymerase and core protein. 3.5 kb RNA is reverse-transcribed to viral genome DNA by viral polymerase and an 1:1 ribonucleoprotein complex generally called RNP complex is formed by polymerase and pgRNA (Step 5: Figure <ref>). It is assembly competent. The RNP complex is denoted by Z and μ_1 is the constant rate of interaction of 3.5 kb RNA and the polymerase. The reaction equation is as follows (Step 5):
dZ/dt =μ_1R_gP-μ_2 ZC_p-δ_z Z.
Here, δ_z reflects the decay rate of the RNP complex. In the next step, RNP complexes are encapsidated by core proteins (HBcAg) to form nucleocapsids containing pgRNA-P (pgNC) with interaction rate μ_2. These pgNCs are also known as immature nucleocapsids and are denoted by the symbol P_g in our model. The nucleocapsid assembly depends on the RNP complex.
Let δ_p_g be the decay rate of pgRNA-containing capsids. Based on some biological studies, it is observed that a portion of pgRNA-containing capsids are enveloped by the surface proteins and secreted from the hepatocytes as non-infectious viral particles. The corresponding reaction equation is (Step 6: Figure <ref>):
dP_g/dt=μ_2 ZC_p-β_1 P_g-δ_p_gP_g.
Reverse transcription is one of the key steps in the virus life cycle. Through this process, the viral RNAs are converted into viral DNAs. The pgRNA acts as a template for DNA synthesis. This step involves a series of events involving both host and viral factors. After encapsidation by the core protein, the viral polymerase reverse transcribes the pgRNA into single-stranded DNA (ssDNA) with reverse transcription rate β_1. The single-stranded DNA-containing capsids are designated by S with degradation rate δ_s. It is assumed that double-stranded DNA (dsDNA) and dslDNA are produced with rate β_2 from ssDNA. In order to determine the relative contributions of different types of rcDNA to cccDNA, this model distinguishes between infecting rcDNA and rcDNA produced by a liver cell, which is called dsDNA here. D represents the newly produced double-stranded HBV DNA-containing capsids.
90% of nucleocapsids possess rcDNA after reverse transcription, while the remaining 10% have double-stranded linear DNA (dslDNA) <cit.>. At this point, nucleocapsids can either gain an envelope of HBsAg by passing through the endoplasmic reticulum, pre-Golgi compartment, and be released as virions into the blood, or these can recycle back to the nucleus. cccDNA can be further amplified by recycling of rcDNA and dslDNA in the nucleus <cit.>.
The dslDNA can produce surface protein (L, M and S), but may not be able to produce functional pgRNA due to some mutations that are introduced when it is converted into cccDNA <cit.>. The reaction equation for ssDNA, dsDNA and dslDNA are given by the equations (<ref>)-(<ref>) (Step 10, 11, 15: Figure <ref>).
dS/dt =β_1 P_g-β_2 S-δ_s S,
dD/dt =0.9β_2 S-k_1e^-λ S_pD-k_2(1-e^-λ S_p)D S_p-δ_d D,
dD_L/dt =0.1 β_2 S-δ_d_L D_L.
During replication, the newly produced capsids are mainly split into three parts. One part is used again as a core particle. In the case of a low level of surface proteins, the HBV DNA-containing capsid delivers its content to the nucleus to increase the pool of cccDNA. This process is known as the `recycling’ of HBV DNA-containing capsids. The level of surface proteins is incorporated in this model. λ^-1 denotes the average level of surface proteins (S_p). Here, k_1 stands for the recycling rate of capsids.
In equation (<ref>) as well as in equation (<ref>), the term k_1e^-λ S_p D represents the recycling of capsids. The parameters δ_d and δ_d_L are the natural decay rates of dsDNA and dslDNA, respectively.
Another portion of newly produced capsids is well-packaged by the viral surface proteins (L, M, and S). Surface proteins are produced from the translation of subgenomic RNA (2.4 kb and 2.1 kb mRNA) by ribosomes. 2.4 kb mRNA is translated into the large surface protein, whereas translation of 2.1 kb mRNA leads to the middle and small surface proteins. For simplicity, these three surface proteins are referred to as one compartment and designated by S_p in this model. The natural decay rate of the surface proteins is denoted by δ_s_p. The parameter λ_s_p is considered to be the average translation rate of mRNAs. Some portion of S (small) surface proteins forms octahedral spheres (sphere-shaped SVPs), while L (large) and M (medium) surface proteins form empty filaments and filamentous subviral particles (SVPs) via tubular budding and exit from the hepatocytes. All subviral particles are non-infectious. We denote the combined exit rate of surface proteins by η_s_p. The related dynamical equation is given below.
dS_p/dt=λ_s_p R_s-η_s_pS_p-δ_s_pS_p.
The well-packaged capsids are released into the extracellular space with release rate k_2 from the hepatocytes as infectious Dane particles or complete virions. The virions exit via the cell's secretory pathway by exocytosis.
dV/dt = k_2(1-e^-λ S_p)D S_p-δ_v V,
where δ_v is the death rate of viruses. The third portion of newly produced capsids is released into the extracellular space without being enveloped by surface proteins. It is also important to note that this kind of viral particle is non-infectious. In Figure <ref>, it is shown in step 14, but not considered in this model.
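To see how the surface-protein level splits newly produced capsids between re-import into the nucleus and secretion as virions, the short Python sketch below evaluates the per-capsid recycling rate k_1 e^-λ S_p and release rate k_2(1-e^-λ S_p)S_p over a range of S_p values; the numerical values of λ, k_1, and k_2 are arbitrary placeholders chosen only for illustration, not the fitted parameters of the model.

import numpy as np

lam, k1, k2 = 0.1, 0.5, 1e-3   # placeholder values for illustration only
Sp = np.linspace(0.0, 100.0, 6)

recycle = k1 * np.exp(-lam * Sp)               # rate of re-import into the nucleus
release = k2 * (1.0 - np.exp(-lam * Sp)) * Sp  # rate of envelopment and release as virions

for s, r, v in zip(Sp, recycle, release):
    print(f"S_p = {s:6.1f}   recycling = {r:.4f}   release = {v:.4f}")

At low S_p the recycling term dominates, while at high S_p the release term takes over, which is the switching behaviour discussed in the text.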
§.§ Full dynamics model
Based on the law of mass action, the temporal change of each component of the model is formulated. The following system of equations describes the full dynamics of the HBV infection with the non-negative initial conditions R(0)≥ 0, C(0)≥ 0, R_g(0)≥ 0, R_s(0)≥ 0, R_h(0)≥ 0, H(0)≥ 0, P(0)≥ 0, Z(0)≥ 0, C_p(0)≥ 0, P_g(0)≥ 0, S_p(0)≥ 0, S(0)≥ 0, D(0)≥ 0, D_L(0)≥ 0, and V(0)≥ 0.
.
rcDNA: dR/dt = α_1 V-α_2 R-δ_r R,
cccDNA: dC/dt = α_2 R+k_1e^-λ S_p D-δ_c C,
3.5 kb mRNA: dR_g/dt= λ_rgΦ C-μ_1R_g P-δ_r_gR_g,
(2.4+2.1) kb mRNA: dR_s/dt= λ_rsΦ C+λ_sdl D_L-λ_s_pR_s-δ_r_sR_s,
0.7 kb mRNA: dR_h/dt= λ_rhΦ C-δ_r_hR_h,
HBx: dH/dt= λ_h R_h-δ_h H,
Ploymerase: dP/dt = λ_p R_g-μ_1R_gP-δ_p P,
RNP complex: dZ/dt =μ_1R_gP-μ_2 ZC_p-δ_z Z,
Core protein: dC_p/dt=λ_c R_g-μ_2 ZC_p-δ_c_pC_p,
pgRNA-containing capsid: dP_g/dt=μ_2 ZC_p-β_1 P_g-δ_p_gP_g,
Surface protein: dS_p/dt=λ_s_p R_s-η_s_pS_p-δ_s_pS_p,
ssDNA-containing capsid: dS/dt =β_1 P_g- β_2 S-δ_sS,
dsDNA-containing capsid: dD/dt =0.9β_2 S -k_1e^-λ S_p D-k_2(1-e^-λ S_p)D S_p-δ_d D,
dslDNA-containing capsid: dD_L/dt =0.1 β_2 S-δ_d_L D_L,
Virus: dV/dt = k_2(1-e^-λ S_p)D S_p-δ_v V.
}
To the best of our knowledge, it is the most generalized intracellular HBV infection dynamic model so far.
The life cycle of HBV is schematically shown in Figure <ref>. The model (<ref>) consists of all possible essential steps of the viral life cycle. The description of all model variables and model parameters are summarized in the Table <ref> and Table <ref>, respectively.
§ THE SOURCES OF CCCDNA AND ITS ROLE IN HBV PERSISTENCE
The persistence of cccDNA in the infected hepatocytes is one of the major challenges for antiviral therapies. Although much remains unknown about the mechanism by which the incoming rcDNA is converted to supercoiled cccDNA by the host DNA repair machinery, the conversion appears to be accomplished through numerous steps <cit.>.
§.§ The sources of cccDNA
There are mainly two sources of tenacious and nearly ineradicable cccDNA in HBV replication.
* The rcDNA-containing capsids from the incoming viruses are represented by R in model (<ref>). In the beginning of the infection, the cccDNAs are primarily formed from these.
* The second one is the newly produced double-stranded DNA-containing capsids (D), via the intracellular recycling pathway of capsids.
The double-stranded linear DNA is another source of cccDNA. These cccDNAs are not compatible with rcDNA synthesis but can contribute significantly to the cccDNA pool according to the work of Yang and Summers <cit.>. In this model, this source is not explicitly included since it has no direct role in infection. In Figure <ref>, the sources of cccDNA are schematically represented.
§.§ The effects of initial concentration of cccDNA
Upon entering into the nucleus by a partially known mechanism, the partially double-stranded rcDNA is converted into cccDNA.
It is thought to be a major factor in the persistence of HBV infection, as it is resistant to degradation and remains in the nucleus of infected cells even after treatment is completed. Due to its strong stability, cccDNA is not lost in the course of cell division <cit.>. It persists
in individuals despite serological evidence of viral clearance. It can also remain in cells for months or even years <cit.>.
In Figure <ref>, the effects of the initial concentration of cccDNA on all components are represented. Five small initial concentrations of cccDNA are considered, namely 20, 40, 60, 80, and 100 units. Simulations are conducted for a long period of time, greater than four years. It is observed that the initial concentration of cccDNA significantly influences all compartments in the course and outcome of the infection. The presence of a few copies of cccDNA in the liver can re-initiate and blow up the infection. The small amount of cccDNA that remains in the liver can act as a reservoir for the virus. Consequently, if antiviral therapy is discontinued or stopped during nearly curable stages of infection, the viral infection can reactivate. Therefore, while a small amount of cccDNA may not be as clinically significant as a high level of cccDNA, it still represents an important factor in the persistence and transmission of HBV infection.
§ EFFECTS OF HBX PROTEIN (H) ON HBV INFECTION
HBx plays a critical role in initiating and maintaining HBV replication during the natural infection process <cit.>. X proteins are able to stimulate de-silencing of cccDNA and prevent the silencing of cccDNA <cit.>. By inhibiting the development of the immune response in HBV infection, HBx protects infected hepatocytes from immune-mediated apoptosis and alters the expression of host genes to facilitate the development of HCC <cit.>. HBV encodes only the regulatory protein HBx, which is involved in multiple aspects of HBV infection. In order to keep cccDNA silent, a novel prospective treatment technique targeting HBx may be proposed. For this purpose, it is included in this model. From the simulation, it is seen that the concentration levels of cccDNA and virus do not change significantly even after incorporating the effects of HBx in the model. Fatehi et al. <cit.> also obtained similar results in their study. In Figure <ref>, the effects of HBx proteins on cccDNAs and on viruses are demonstrated. It is seen that the difference between the solutions is not significant. Therefore, targeting the HBx protein as a future treatment method may not be a promising strategy to control HBV infection.
§ IMPACTS OF INTRACELLULAR DELAY (Τ)
Time delays play a crucial role in the intracellular replication process of viruses. A delay differential equation (DDE) model has far more realistic dynamics than an ordinary differential equation model. Time delay may be responsible for the loss of stability of a steady state and for oscillations in population dynamics. Two types of delays exist: pharmacological and intracellular. The delay between the ingestion of a drug and its appearance within cells is known as the pharmacological delay. The time elapsed between the infection of a host cell and the discharge of viral particles is known as the intracellular delay. In this study, the intracellular delay, designated by τ, is incorporated into every step of the HBV life cycle to make the process non-instantaneous. The intracellular delay model is given by equation (<ref>) in Appendix A. The system of delay differential equations (<ref>) is solved numerically for different values of τ. As a result, it is observed that the intracellular delay has very little impact on the viral dynamics (results not shown).
§ IMPACTS OF SURFACE PROTEINS ON INFECTION
Although the HBx protein and the intracellular delay (τ) play some roles in cccDNA production inside the nucleus, these factors are not major contributors to the persistence of HBV infection in the host. Surface proteins (L, M, S), which are glycoproteins, also play significant roles in viral synthesis, viral infection, and the induction of immune responses.
The primary role of the surface proteins is to allow the virus to bind to the receptors of hepatocytes to enter into the cell. Depending on the concentration level of surface proteins inside the hepatocytes, rcDNA-containing capsids can be recycled to the nucleus to increase the number of cccDNA.
The concentration level of HBsAg in the cytoplasm mainly depends on the production rate of surface protein (λ_rs) from cccDNA. According to Nakabayashi <cit.>, there are two types of replication pattern: arrested and explosive. When the production of 2.4 kb and 2.1 kb mRNAs dominates the production of 3.5 kb pgRNAs, it is called the “arrested replication” pattern. In this case λ_rg<λ_rs, and the ratio λ_rg/λ_rs is a small quantity. On the contrary, in the explosive replication process, λ_rg>λ_rs, and the ratio λ_rs/λ_rg becomes small in magnitude. Both cases are considered here to study the contribution of HBsAg to HBV infection in a single hepatocyte.
In the arrested replication pattern, λ_rg=0.1, λ_rs=2, and in the explosive replication pattern λ_rg=2, λ_rs=0.1 are considered for simulation purpose only.
In Figure <ref>, the outcomes of these two cases are demonstrated. A significant change in the concentrations of all intracellular components except the polymerase (P) is observed. The viral load in the explosive replication pattern is extremely high compared to that in the arrested replication pattern, and the solution becomes stable in an endemic equilibrium state. It is also observed that the explosive replication pattern indicates chronic infection, whereas the arrested replication pattern reflects acute infection. Therefore, the availability of HBsAg in the infected cell may drastically change the number of newly produced virions from an infected cell as well as the condition of the patient.
§ IMPACTS OF DOUBLE-STRANDED LINEAR DNA-CONTAINING CAPSIDS
The impacts of dslDNA-containing capsids on all other intracellular components are discussed here. dslDNA is a defective form of the viral DNA. It can produce surface proteins (L, M, and S), but may not be able to produce functional pgRNA due to some mutations that are introduced when it is converted into cccDNA <cit.>.
In section <ref>, the roles of dslDNA are incorporated into the model (<ref>) by the parameter λ_sdl. Keeping the other parameters fixed, the system (<ref>) is solved for different values of λ_sdl. The outcomes for cccDNA and virus are shown in Figure <ref>. No substantial change is observed within each class (cccDNA and virus). Therefore, targeting the dslDNA-containing capsids as a possible future treatment option does not seem promising. Moreover, the differential equation corresponding to the dslDNA-containing capsids can be ignored to simplify the model (<ref>) for further analysis.
§ GLOBAL SENSITIVITY ANALYSIS OF MODEL PARAMETERS
Because of uncertainties in experimental data used in the estimation of model parameters, the accuracy of outputs of a mathematical model related to specific biological phenomena becomes frequently poor. Many authors now focus on local sensitivity analysis (LSA), which is the examination of the impacts of one parameter while holding the others fixed at their estimated values. However, LSA does not offer complete necessary information about the uncertainty and sensitivity of the concerned parameter. In this case, global sensitivity analysis (GSA) performs well and can clearly describe the contributions of each model parameter irrespective of the role of other parameters. The GSA is a statistical technique which is used to study the sensitivity of parameters of a system or of a mathematical model. Various methods are used to study the global sensitivity analysis, such as Sobol indices, Fourier amplitude sensitivity test, partial rank correlation coefficient (PRCC). In this study, Latin hypercube sampling-partial rank correlation coefficient (LHS-PRCC) method is applied to the model (<ref>). This method is well-explained in the article of Marino et al. <cit.>. In this method, PRCC values can provide relevant useful information. PRCC can also aid us in determining the most influential set of parameters for achieving specific objectives in elimination of disease.
§.§ Simplified model
The full dynamics model (<ref>) is simplified for further analysis. In order to do this, some proper assumptions are made here. A similar kind of approach, as mentioned in the article of Nakabayashi <cit.> is followed here to do the simplification.
* If the intracellular components are degraded rapidly compared to the recruitment rate, then the infection will disappear on its own. The degradation rates are therefore assumed to be too small and can be ignored.
* In section <ref>, it is seen that the effects of HBx protein on infection are not significant. So, equations dR_h/dt= λ_rhΦ C-δ_r_hR_h and dH/dt= λ_h R_h-δ_h H are ignored. In this case, the volume fraction of active cccDNA Φ(t) becomes Φ_0. It means that all cccDNAs are active. Therefore, Φ_0 is taken to be equal to 1.
* The system (<ref>) does not seem to be affected by dslDNA-containing capsids, as observed in section <ref>. Thus, the differential equation corresponding to dlsDNA in full dynamics model (<ref>) is also ignored here.
Therefore, on the basis of these assumptions, the full dynamics model (<ref>) is reduced to the following:
.
rcDNA: dR/dt = α_1 V-α_2 R,
cccDNA: dC/dt = α_2 R+k_1e^-λ S_p D-δ_c C.
3.5 kb mRNA: dR_g/dt= λ_rg C-μ_1R_g P,
(2.4+2.1) kb mRNA: dR_s/dt= λ_rs C-λ_s_p R_s,
Ploymerase: dP/dt = λ_p R_g-μ_1R_gP,
RNP complex: dZ/dt =μ_1R_gP-μ_2 ZC_p,
Core protein: dC_p/dt=λ_c R_g-μ_2 ZC_p,
pgRNA-containing capsid: dP_g/dt=μ_2 ZC_p-β_1 P_g.
Surface protein: dS_p/dt=λ_s_p R_s-η_s_pS_p,
ssDNA-containing capsid: dS/dt =β_1 P_g- β_2 S,
dsDNA-containing capsid: dD/dt =β_2 S -k_1e^-λ S_p D-k_2(1-e^-λ S_p)D S_p,
Virus: dV/dt = k_2(1-e^-λ S_p)D S_p-δ_v V,
}
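A minimal numerical sketch of the simplified system is given below. It is only an illustration of how the twelve equations translate into code: the parameter values are arbitrary placeholders (the values actually used in this study are those reported in the parameter table), and the state vector is ordered as in the system above.

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder values for the 17 parameters, for illustration only.
par = dict(alpha1=0.5, alpha2=0.2, k1=0.05, k2=1e-4, lam=0.01, delta_c=0.01,
           lam_rg=1.0, lam_rs=1.0, mu1=1e-3, mu2=1e-3, lam_sp=0.5,
           lam_p=1.0, lam_c=1.0, beta1=0.5, beta2=0.5, eta_sp=0.2, delta_v=0.5)

def rhs(t, y, p):
    R, C, Rg, Rs, P, Z, Cp, Pg, Sp, S, D, V = y
    recycle = p["k1"] * np.exp(-p["lam"] * Sp) * D
    release = p["k2"] * (1 - np.exp(-p["lam"] * Sp)) * D * Sp
    return [p["alpha1"] * V - p["alpha2"] * R,              # rcDNA
            p["alpha2"] * R + recycle - p["delta_c"] * C,   # cccDNA
            p["lam_rg"] * C - p["mu1"] * Rg * P,            # 3.5 kb mRNA
            p["lam_rs"] * C - p["lam_sp"] * Rs,             # (2.4+2.1) kb mRNA
            p["lam_p"] * Rg - p["mu1"] * Rg * P,            # polymerase
            p["mu1"] * Rg * P - p["mu2"] * Z * Cp,          # RNP complex
            p["lam_c"] * Rg - p["mu2"] * Z * Cp,            # core protein
            p["mu2"] * Z * Cp - p["beta1"] * Pg,            # pgRNA-containing capsid
            p["lam_sp"] * Rs - p["eta_sp"] * Sp,            # surface protein
            p["beta1"] * Pg - p["beta2"] * S,               # ssDNA-containing capsid
            p["beta2"] * S - recycle - release,             # dsDNA-containing capsid
            release - p["delta_v"] * V]                     # virus

y0 = np.zeros(12); y0[1] = 20.0          # start from a small pool of cccDNA
sol = solve_ivp(rhs, (0, 300), y0, args=(par,), method="LSODA", dense_output=True)
print(sol.y[-1, -1])                     # virus load at the final time point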
§.§ Latin hypercube sampling (LHS)-Partial rank correlation coefficient (PRCC)
Latin hypercube sampling is a statistical method belonging to the Monte Carlo class. With the help of this method, a random sample of parameter values from a multi-dimensional distribution can be generated. This method was introduced in 1979 by McKay et al. <cit.>. In the context of statistical sampling, the sample inputs, basically the parameters of the model, are distributed in a “q-dimensional hypercube", where q denotes the number of parameters considered in the proposed model. For our simplified model (<ref>), there are 17 parameters. The given range of each parameter is partitioned into equally probable sub-intervals, and a probability density function (pdf) is employed to sample the parameter values. Sample points are placed in such a way that each satisfies the LHS requirements. Based on the prior information and existing data, we use the uniform distribution for all parameters in this work. The model is then simulated iteratively over the hypercube. In general, the sample size N should be at least (q+1), but it is suggested that the sample size be larger to ensure the desired precision and accuracy of the results. In this study, a sample size of 1000 is chosen.
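The sampling step can be reproduced with standard scientific Python tools; the sketch below draws 1000 Latin hypercube samples for a handful of parameters with hypothetical ranges (the actual ranges are those used in the parameter table and are not reproduced here), assuming uniform distributions as stated above.

from scipy.stats import qmc

# Hypothetical (lower, upper) ranges for a few of the q = 17 parameters.
ranges = {"alpha1": (0.1, 1.0), "alpha2": (0.01, 0.5), "k1": (0.001, 0.1),
          "beta1": (0.1, 1.0), "delta_v": (0.1, 1.0)}

names = list(ranges)
lower = [ranges[n][0] for n in names]
upper = [ranges[n][1] for n in names]

sampler = qmc.LatinHypercube(d=len(names), seed=0)
unit_sample = sampler.random(n=1000)                # 1000 points in [0, 1]^d
lhs_sample = qmc.scale(unit_sample, lower, upper)   # rescale to the uniform ranges
print(lhs_sample.shape)                             # (1000, 5)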
The correlation coefficient (CC) serves as a metric to gauge the strength of linear correlation between the inputs and the outputs.
The correlation coefficient can be calculated using the following formula:
r=∑(U-U̅)(V-V̅)/√(∑(U-U̅)^2∑(V-V̅)^2),
where U and V are the input and output variables, respectively, U̅ and V̅ are their sample means, and r∈[-1, 1]. Depending on the nature of the data, the correlation coefficient is referred to by different names: for the raw data of U and V it is called the sample or Pearson correlation coefficient, whereas for rank-transformed data it is called the Spearman or rank correlation coefficient. LHS-PRCC thus provides a powerful tool for understanding how the outputs of a system are affected by variations in the model parameters; a detailed account of the method is given by Marino et al. <cit.>. A minimal PRCC computation is sketched below.
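In the sketch, the inputs and outputs are rank-transformed, the linear effect of the remaining parameters is regressed out, and the Pearson correlation of the residuals is taken; the synthetic test data serve only to illustrate the expected behaviour.

# Partial rank correlation coefficients of an LHS design X (n x q) against
# a scalar model output y (length n).
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    n, q = X.shape
    Xr = np.column_stack([rankdata(X[:, j]) for j in range(q)])
    yr = rankdata(y)
    coeffs = np.empty(q)
    for j in range(q):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        # residuals after removing the linear effect of the other parameters
        res_x = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        coeffs[j], _ = pearsonr(res_x, res_y)
    return coeffs

# Synthetic example (illustration only): strong positive, strong negative, negligible.
rng = np.random.default_rng(1)
X = rng.random((1000, 3))
y = 2.0*X[:, 0] - 1.0*X[:, 1] + 0.1*rng.random(1000)
print(prcc(X, y))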
§.§ Scatter plots: The monotonic relationship between input and output variables
In order to make better predictions about the infection and to propose new treatment strategies, it is essential to explore how the outputs of a system or a model are influenced when the values of the associated parameters are varied within reasonable ranges.
The baseline values of each parameter are taken from the literature and shown in Table <ref>.
Simulation results of the simplified model (<ref>) are shown as scatter plots in Figures 8-19.
On day 280, the PRCC values of all model parameters are computed with respect to the dependent variables and are presented in Table <ref>.
A positive PRCC value for a parameter-compartment pair indicates that an increase or decrease in the parameter's value, whether individually or simultaneously, leads to an enhancement or reduction in the concentration of that compartment.
A negative PRCC value indicates the opposite behaviour.
The positively and negatively correlated parameters for the corresponding compartments are shown in the second and third columns of Table <ref>, respectively. Based on the PRCC values, the insignificant or weakly significant parameters are also identified and listed in the fourth column of Table <ref>. The most positively significant (MPS) and most negatively significant (MNS) parameters for each compartment are highlighted in the fifth and sixth columns of the same table.
Global sensitivity analysis reveals several new and striking results. Based on the PRCC values (Table <ref>) and the outputs recorded in Table <ref>, some of the findings are listed below.
* The exit rate of sub-viral particles (η_sp) is positively correlated with almost all compartments except the surface proteins, i.e. the sub-viral particles play an important role in the acceleration and persistence of HBV infection. Sub-viral particles can serve as decoys for the host immune system: they circulate in the bloodstream and are recognized by the immune system as foreign particles <cit.>. The immune response generated against sub-viral particles diverts the attention of the immune system away from the infectious viral particles, which allows the virus to persist in the host. Despite being non-infectious, sub-viral particles are of crucial significance for understanding HBV infection because of their direct involvement in disease progression. Hence these particles hold considerable potential for applications such as diagnostic markers, vaccine development tools, and a basis for novel therapeutic approaches to combat the disease.
* The recycling rate (k_1) is positively correlated with all components except the dsDNA-containing capsids. While not the most significant parameter for any compartment, it plays a versatile role throughout the infection: the surface protein is the component most affected by the recycling rate, and all other components are moderately influenced. Therefore, the recycling of capsids exerts a significant effect on the overall dynamics of HBV infection.
* The production rate of 3.5 kb pgRNA (λ_rg) is positively correlated with all compartments, which implies that the transcription of cccDNA is one of the key steps in the viral life cycle. It is one of the most influential parameters in the system and must be taken into account when proposing any new control strategy.
* The parameter λ is negatively correlated with almost all compartments, which means that the rate at which rcDNA is transported to the nucleus is strongly linked to cccDNA synthesis within the hepatocytes.
* The decay rates of cccDNA (δ_c) and of the virus (δ_v) are negatively correlated with almost all compartments except the dsDNA-containing capsids, which is also a noteworthy observation.
* The production rate of the 2.4 kb and 2.1 kb mRNAs (λ_rs) is negatively correlated with nearly all compartments except the 2.4 kb and 2.1 kb mRNAs (R_s) and the surface proteins. This parameter is also the most negatively sensitive one for both cccDNA and dsDNA-containing capsids.
* It is also observed that the production rate of the polymerase (λ_p) and the production rate of the core proteins (λ_c) are both sensitive for all compartments (positively for some and negatively for others). In addition, these two parameters are simultaneously MPS and MNS, i.e. as far as the infection is concerned, the production rates of the polymerase and of the core proteins play a dual role during the infection period. This is one of the crucial findings of this study; to our knowledge, such a dual role of these two parameters has not been reported before.
The sensitivity analysis of this intracellular dynamics model helps to determine the factors that most strongly influence the infection. Although it may be difficult to pinpoint the single most significant parameter for this infection, we are able to identify ten parameters out of thirty-four that have a relatively large impact. Considering all these results and findings, it becomes possible to determine the best way to control the disease and to select the most effective drug regimens.
Furthermore, the most beneficial and best-suited combination of available drugs (according to their chemical ingredients and taking drug-drug interactions into account) can be chosen for the patient. In a nutshell, this study is expected to be useful in a wide range of practical applications.
§.§ Critical observation
From Table <ref> it is observed that the PRCC values of the transcription rate of the 3.5 kb pgRNA (λ_rg) are positive for all compartments, whereas the transcription rate of the 2.4 and 2.1 kb mRNAs (λ_rs) yields negative PRCC values for nearly all compartments. At first sight it is perplexing that two parameters, both associated with transcription, contribute to the infection in completely opposite ways, and this raises questions about the underlying mechanisms. This study provides a clear explanation of this fact. When the value of λ_rs is low, the production of surface proteins decreases. As a consequence, a smaller number of rcDNA-containing capsids are enveloped by surface proteins, while a larger number of capsids generate a higher quantity of cccDNA through the recycling loop. The transcription of this substantial amount of cccDNA leads to the synthesis of a large quantity of pgRNA and surface proteins. As a result, this chain of processes ultimately produces a significant enhancement in the number of released virus particles. Therefore, it can be concluded that focusing solely on this parameter in order to decrease the production of surface proteins would not be a viable approach to controlling the infection.
§ MODEL VALIDATION
In order to validate the proposed model, the simplified model (<ref>) is extended to incorporate the effect of entecavir (ETV), which acts as a reverse-transcriptase inhibitor, i.e. it blocks the production of dsDNA-containing capsids from pgRNA-containing capsids. The efficiency of ETV is denoted by ϵ, where 0<ϵ≤ 1. With ETV included, equation (<ref>) and equation (<ref>) are modified as follows:
.
dP_g/dt=μ_2 ZC_p-(1-ϵ)β_1 P_g,
dS/dt =(1-ϵ)β_1 P_g- β_2 S.
}
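In terms of the numerical sketch given after the simplified model, the ETV treatment amounts to scaling the maturation rate β_1 by the factor (1-ϵ); a minimal illustration, reusing the placeholder rhs and parameter set defined there, is:

# ETV blocks reverse transcription: the pgRNA -> ssDNA maturation rate beta_1 is
# multiplied by (1 - eps).  eps = 0.97 as quoted in the text; all other values
# remain illustrative placeholders.
def rhs_etv(t, y, p, eps=0.97):
    dy = rhs(t, y, p)                                   # unperturbed right-hand side
    R, C, Rg, Rs, P, Z, Cp, Pg, Sp, S, D, V = y
    dy[7] = p['mu2']*Z*Cp - (1 - eps)*p['b1']*Pg        # pgRNA-containing capsid
    dy[9] = (1 - eps)*p['b1']*Pg - p['b2']*S            # ssDNA-containing capsid
    return dy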
Experimental data for four humanized mice were collected from the work of Kitagawa et al. <cit.>. Each mouse was infected with HBV at 1.0× 10^6 copies.
On day 53 post-inoculation, the mice, which displayed a sustained level of HBV in serum, were administered ETV continuously for 70 days; the treatment protocol involved daily dosing of 0.02 mg/kg ETV. The efficiency of ETV, as stated in the study by Kitagawa et al. <cit.>, is 0.97, and this value is used in our study. A thorough comparison between the model solution and the experimental data from the four humanized mice (Mouse-501, Mouse-502, Mouse-503, Mouse-504), visualized in Figure <ref>, shows that the model captures the experimental data and the dynamics of the infection well. The model therefore demonstrates a close alignment with reality and reflects a strong correspondence with actual observations.
§ CONCLUSIONS
In the case of viral infections, intracellular dynamics models have revealed various intrinsic biological phenomena of individual cells. Considering all possible steps of the HBV life cycle, this study proposes an intracellular dynamics model which, to the best of our knowledge, is the most general and detailed dynamics model to date. A detailed discussion of nearly all possible viral replication mechanisms is provided here. A large number of parameters (compared with existing studies in the literature) have been handled successfully, and their roles in disease progression and persistence are explored. The proposed intracellular dynamics model is validated with experimental data obtained from humanized mice.
The evaluation of uncertainty and the analysis of sensitivity have gained importance in assessing the reliability of models and in identifying the factors that most influence their outputs. To study the sensitivity of the model parameters, a sampling-based method (Latin hypercube sampling-partial rank correlation coefficient) is used here. The most positively and negatively correlated parameters for each compartment and for the entire system are identified. It is also uncovered that some parameters play a dual role in modulating the disease dynamics.
Based on the findings of the present study, the following conclusions can be drawn:
* There is no significant contribution of HBx proteins to the progression of HBV infection. So, targeting the HBx protein
as a future antiviral therapy is not a promising strategy to control the infection.
* Superinfection of cells can lead to more severe liver damage and an increased risk of complications. cccDNA and progeny viruses are strongly amplified in the presence of superinfection. In a nutshell, the superinfection rate is one of the disease-controlling parameters.
* The simulation results illustrate that the intracellular delay has little effect on the infection.
* HBsAg is one of the main viral components. The availability of HBsAg in the infected cell switches the arrested replication pattern to explosive replication and vice versa. Therefore, considering other factors, it may be worthwhile to inhibit all the functions of HBsAg to treat HBV patients.
* This study indicates that dslDNA has no bearing on the outcomes of infection.
* During the recycling of capsids, core particles assemble inside infected cells and serve as a source of infection. In other words, the recycling of capsids acts as a positive feedback loop in the infection. The available inhibitors targeting the capsid recycling mechanism have proved effective at reducing HBV infection.
* The results of the global sensitivity analysis indicate that the parameters related to transcription (λ_rg), translation (λ_p, λ_c) and exit of sub-viral particles (η_sp) have more substantial impacts on disease development compared to other parameters.
However, these findings represent only the tip of the iceberg, and deeper mechanisms remain to be uncovered. The outcomes of this study will advance the understanding of infection clearance and may be applied in practice, including in clinical experiments. We aim to enhance the current model by including all existing antiviral therapies in the near future; our main focus is to identify the optimal monotherapy or combination therapy based on the individual patient's overall condition.
§.§ Acknowledgments
The first author acknowledges the financial support obtained from CSIR (New Delhi) under the CSIR-SRF Fellowship scheme (File No: 09/731(0171)/2019-EMR-I), and also thanks the Department of Mathematics, Indian Institute of Technology Guwahati, India, for the research facilities provided.
§.§ Author contributions
Both authors contributed equally.
§.§ Conflict of interest
The authors declare no potential conflict of interests.
§.§ Data Availability Statement
Data sharing is not applicable to this article.
WHO_2021
Hepatitis b.
<https://www.who.int/news-room/fact-sheets/detail/hepatitis-b>.
27 July 2021.
Hepb_2022
Hepatitis b.
<https://www.hepb.org/what-is-hepatitis-b/what-is-hepb/facts-and-figures>.
2022.
2008_dienstag_hepatitis
Jules L Dienstag.
Hepatitis b virus infection.
New England Journal of Medicine, 359(14):1486–1500, 2008.
2011_liang_predictors
Y Liang, J Jiang, M Su, Z Liu, W Guo, X Huang, R Xie, S Ge, J Hu, Z Jiang,
et al.
Predictors of relapse in chronic hepatitis b after discontinuation of
anti-viral therapy.
Alimentary pharmacology & therapeutics, 34(3):344–352, 2011.
2005_zoulim_new
Fabien Zoulim.
New insight on hepatitis b virus persistence from the study of
intrahepatic viral cccdna.
Journal of hepatology, 42(3):302–308, 2005.
2005_hui_immune
Chee-Kin Hui and George KK Lau.
Immune system and hepatitis b virus infection.
Journal of Clinical Virology, 34:S44–S48, 2005.
2007_Ciupe
Stanca M Ciupe, Ruy M Ribeiro, Patrick W Nelson, and Alan S Perelson.
Modeling the mechanisms of acute hepatitis b virus infection.
Journal of Theoretical Biology, 247(1):23–35, 2007.
2008_Min
Lequan Min, Yongmei Su, and Yang Kuang.
Mathematical analysis of a basic virus infection model with
application to hbv infection.
The Rocky Mountain Journal of Mathematics, pages 1573–1585,
2008.
2021_liu_age
Sanhong Liu and Ran Zhang.
On an age-structured hepatitis b virus infection model with hbv
dna-containing capsids.
Bulletin of the Malaysian Mathematical Sciences Society,
44(3):1345–1370, 2021.
2010_raj_variability
Arjun Raj, Scott A Rifkin, Erik Andersen, and Alexander Van Oudenaarden.
Variability in gene expression underlies incomplete penetrance.
Nature, 463(7283):913–918, 2010.
2008_cohen_dynamic
Ariel A Cohen, Naama Geva-Zatorsky, Eran Eden, Milana Frenkel-Morgenstern,
Iirina Issaeva, Alex Sigal, Ron Milo, Cellina Cohen-Saidon, Yuvalal Liron,
Zvi Kam, et al.
Dynamic proteomics of individual cancer cells in response to a drug.
science, 322(5907):1511–1516, 2008.
2011_navin_tumour
Nicholas Navin, Jude Kendall, Jennifer Troge, Peter Andrews, Linda Rodgers,
Jeanne McIndoo, Kerry Cook, Asya Stepansky, Dan Levy, Diane Esposito, et al.
Tumour evolution inferred by single-cell sequencing.
Nature, 472(7341):90–94, 2011.
1945_delbruck_burst
M Delbrück.
The burst size distribution in the growth of bacterial viruses
(bacteriophages).
Journal of bacteriology, 50(2):131–135, 1945.
2009_schumann_evidence
Thomas Schumann, Helmut Hotzel, Peter Otto, and Reimar Johne.
Evidence of interspecies transmission and reassortment among avian
group a rotaviruses.
Virology, 386(2):334–343, 2009.
2012_timm_kinetics
Andrea Timm and John Yin.
Kinetics of virus production from single cells.
Virology, 424(1):11–17, 2012.
2014_schulte_single
Michael B Schulte and Raul Andino.
Single-cell analysis uncovers extensive biological noise in
poliovirus replication.
Journal of virology, 88(11):6205–6212, 2014.
2015_heldt_single
Frank S Heldt, Sascha Y Kupke, Sebastian Dorl, Udo Reichl, and Timo Frensing.
Single-cell analysis and stochastic modelling unveil large
cell-to-cell variability in influenza a virus infection.
Nature communications, 6(1):1–12, 2015.
2018_xin_single
Xiu Xin, Hailong Wang, Lingling Han, Mingzhen Wang, Hui Fang, Yao Hao, Jiadai
Li, Hu Zhang, Congyi Zheng, and Chao Shen.
Single-cell analysis of the impact of host cell heterogeneity on
infection with foot-and-mouth disease virus.
Journal of virology, 92(9):e00179–18, 2018.
2009_zhu_growth
Ying Zhu, Andrew Yongky, and John Yin.
Growth of an rna virus in single cells reveals a broad fitness
distribution.
Virology, 385(1):39–46, 2009.
2018_cristinelli_use
Sara Cristinelli and Angela Ciuffi.
The use of single-cell rna-seq to understand virus–host
interactions.
Current opinion in virology, 29:39–50, 2018.
2019_suva_single
Mario L Suvà and Itay Tirosh.
Single-cell rna sequencing in cancer: lessons learned and emerging
challenges.
Molecular cell, 75(1):7–12, 2019.
2019_tibbitt_single
Christopher Andrew Tibbitt, Julian Mario Stark, Liesbet Martens, Junjie Ma,
Jeff Eron Mold, Kim Deswarte, Ganna Oliynyk, Xiaogang Feng, Bart Norbert
Lambrecht, Pieter De Bleser, et al.
Single-cell rna sequencing of the t helper cell response to house
dust mites defines a distinct gene expression signature in airway th2 cells.
Immunity, 51(1):169–184, 2019.
2019_zhou_single
Yang Zhou, Ziqing Liu, Joshua D Welch, Xu Gao, Li Wang, Tiffany Garbutt,
Benjamin Keepers, Hong Ma, Jan F Prins, Weining Shen, et al.
Single-cell transcriptomic analyses of cell fate transitions during
human cardiac reprogramming.
Cell Stem Cell, 25(1):149–164, 2019.
2021_Saraceni_review
Corey Saraceni and John Birk.
A review of hepatitis b virus and hepatitis c virus
immunopathogenesis.
Journal of Clinical and Translational Hepatology, (000):0–0,
2021.
2021_prifti
Georgia-Myrto Prifti, Dimitrios Moianos, Erofili Giannakopoulou, Vasiliki
Pardali, John E Tavis, and Grigoris Zoidis.
Recent advances in hepatitis b treatment.
Pharmaceuticals, 14(5):417, 2021.
2018_goyal_cell_to_cell
Ashish Goyal and Ranjit Chauhan.
The dynamics of integration, viral suppression and cell-cell
transmission in the development of occult hepatitis b virus infection.
Journal of Theoretical Biology, 455:269–280, 2018.
1996_Nowak
Martin A Nowak, Sebastian Bonhoeffer, Andrew M Hill, Richard Boehme, Howard C
Thomas, and Hugh McDade.
Viral dynamics in hepatitis b virus infection.
Proceedings of the National Academy of Sciences,
93(9):4398–4402, 1996.
2015_tan_immune
Anthony Tan, Sarene Koh, and Antonio Bertoletti.
Immune response in hepatitis b virus infection.
Cold Spring Harbor perspectives in medicine, 5(8):a021428,
2015.
2015_Murray_singlecell
John M Murray and Ashish Goyal.
In silico single cell dynamics of hepatitis b virus infection and
clearance.
Journal of Theoretical Biology, 366:91–102, 2015.
2018_fatehi_nkcell
F Fatehi Chenar, YN Kyrychko, and KB Blyuss.
Mathematical model of immune response to hepatitis b.
Journal of Theoretical Biology, 447:98–110, 2018.
2016_jun_nakabayashi
Jun Nakabayashi.
The intracellular dynamics of hepatitis b virus (hbv) replication
with reproduced virion “re-cycling”.
Journal of Theoretical Biology, 396:154–162, 2016.
2018_Guo
Ting Guo, Haihong Liu, Chenglin Xu, and Fang Yan.
Global stability of a diffusive and delayed hbv infection model with
hbv dna-containing capsids and general incidence rate.
Discrete & Continuous Dynamical Systems-B, 23(10):4223, 2018.
2021_Fatehi
Farzad Fatehi, Richard J Bingham, Eric C Dykeman, Nikesh Patel, Peter G
Stockley, and Reidun Twarock.
An intracellular model of hepatitis b viral infection: An in silico
platform for comparing therapeutic strategies.
Viruses, 13(1):11, 2021.
2009_liang_hepatitis
T Jake Liang.
Hepatitis b: the virus and disease.
Hepatology, 49(S5):S13–S21, 2009.
2008_balsano_viral
Clara Balsano and Anna Alisi.
Viral hepatitis b: established and emerging therapies.
Current medicinal chemistry, 15(9):930–939, 2008.
2016_lamontagne_hepatitis
R Jason Lamontagne, Sumedha Bagga, and Michael J Bouchard.
Hepatitis b virus molecular biology and pathogenesis.
Hepatoma research, 2:163, 2016.
2016_decorsiere_hepatitis
Adrien Decorsière, Henrik Mueller, Pieter C Van Breugel, Fabien Abdul,
Laetitia Gerossier, Rudolf K Beran, Christine M Livingston, Congrong Niu,
Simon P Fletcher, Olivier Hantz, et al.
Hepatitis b virus x protein identifies the smc5/6 complex as a host
restriction factor.
Nature, 531(7594):386–389, 2016.
2011_lucifora_hepatitis
Julie Lucifora, Silke Arzberger, David Durantel, Laura Belloni, Michel Strubin,
Massimo Levrero, Fabien Zoulim, Olivier Hantz, and Ulrike Protzer.
Hepatitis b virus x protein is essential to initiate and maintain
virus replication after infection.
Journal of hepatology, 55(5):996–1003, 2011.
2014_feitelson_roles
Mark A Feitelson, Barbara Bonamassa, and Alla Arzumanyan.
The roles of hepatitis b virus-encoded x protein in virus replication
and the pathogenesis of chronic liver disease.
Expert Opinion on Therapeutic Targets, 18(3):293–306, 2014.
2017_Thomas_hbv
Thomas Tu, Magdalena A Budzinska, Nicholas A Shackel, and Stephan Urban.
Hbv dna integration: molecular mechanisms and clinical implications.
Viruses, 9(4):75, 2017.
2018_Chunkyu_hepatitis
Chunkyu Ko, Anindita Chakraborty, Wen-Min Chou, Julia Hasreiter, Jochen M
Wettengel, Daniela Stadler, Romina Bester, Theresa Asen, Ke Zhang, Karin
Wisskirchen, et al.
Hepatitis b virus genome recycling and de novo secondary infection
events maintain stable cccdna levels.
Journal of hepatology, 69(6):1231–1241, 2018.
2010_Xu_interferons
Chunxiao Xu, Haitao Guo, Xiao-Ben Pan, Richeng Mao, Wenquan Yu, Xiaodong Xu,
Lai Wei, Jinhong Chang, Timothy M Block, and Ju-Tao Guo.
Interferons accelerate decay of replication-competent nucleocapsids
of hepatitis b virus.
Journal of virology, 84(18):9332–9340, 2010.
2021_lythgoe_estimating
Katrina A Lythgoe, Sheila F Lumley, Lorenzo Pellis, Jane A McKeating, and
Philippa C Matthews.
Estimating hepatitis b virus cccdna persistence in chronic infection.
Virus evolution, 7(1):veaa063, 2021.
2019_hou_restriction
Lidan Hou, Jie Zhao, Shaobing Gao, Tong Ji, Tianyu Song, Yining Li, Jingjie
Wang, Chenlu Geng, Min Long, Jiang Chen, et al.
Restriction of hepatitis b virus replication by c-abl–induced
proteasomal degradation of the viral polymerase.
Science advances, 5(2):eaau7130, 2019.
2019_loomba_discovery
Rohit Loomba, Martin Decaris, Kelvin W Li, Mahalakshmi Shankaran, Hussein
Mohammed, Marcy Matthews, Lisa M Richards, Phirum Nguyen, Emily Rizo, Barbara
Andrews, et al.
Discovery of half-life of circulating hepatitis b surface antigen in
patients with chronic hepatitis b infection using heavy water labeling.
Clinical Infectious Diseases, 69(3):542–545, 2019.
2006_Murray
John M Murray, Robert H Purcell, and Stefan F Wieland.
The half-life of hepatitis b virions.
Hepatology, 44(5):1117–1121, 2006.
2017_schreiner_role
Sabrina Schreiner and Michael Nassal.
A role for the host dna damage response in hepatitis b virus cccdna
formation—and beyond?
Viruses, 9(5):125, 2017.
2012_guo_characterization
Haitao Guo, Chunxiao Xu, Tianlun Zhou, Timothy M Block, and Ju-Tao Guo.
Characterization of the host factors required for hepadnavirus
covalently closed circular (ccc) dna formation.
PLOS ONE, 7(8), 2012.
1998_yang_infection
Wengang Yang and Jesse Summers.
Infection of ducklings with virus particles containing linear
double-stranded duck hepatitis b virus dna: illegitimate replication and
reversion.
Journal of virology, 72(11):8710–8717, 1998.
2004_locarnini_molecular
Stephen Locarnini.
Molecular virology of hepatitis b virus.
In Seminars in liver disease, volume 24, pages 3–10.
Copyright 2004 by Thieme Medical Publishers, Inc., 333 Seventh
Avenue, New …, 2004.
2007_glebe_recent
Dieter Glebe.
Recent advances in hepatitis b virus research: a german point of
view.
World Journal of Gastroenterology: WJG, 13(1):8, 2007.
2015_nassal_hbv
Michael Nassal.
Hbv cccdna: viral persistence reservoir and key obstacle for a cure
of chronic hepatitis b.
Gut, 64(12):1972–1984, 2015.
2021_tu_hepatitis
Thomas Tu, Henrik Zhang, and Stephan Urban.
Hepatitis b virus dna integration: in vitro models for investigating
viral pathogenesis and persistence.
Viruses, 13(2):180, 2021.
2008_marino_methodology
Simeone Marino, Ian B Hogue, Christian J Ray, and Denise E Kirschner.
A methodology for performing global uncertainty and sensitivity
analysis in systems biology.
Journal of theoretical biology, 254(1):178–196, 2008.
1979_Mckay_comparison
R. J. Beckman McKay, M. D. and W. J. Conover.
A comparison of three methods for selecting values of input variables
in the analysis of output from a computer code.
Technometrics, 21(2):239–245, 1979.
2020_lee_hepatitis
Hye Won Lee, Jae Seung Lee, and Sang Hoon Ahn.
Hepatitis b virus cure: targets and future therapies.
International journal of molecular sciences, 22(1):213, 2020.
PPR:PPR672105
Kosaku Kitagawa, Kwang Su Kim, Masashi Iwamoto, Sanae Hayashi, Hyeongki Park,
Takara Nishiyama, Naotoshi Nakamura, Yasuhisa Fujita, Shinji Nakaoka,
Kazuyuki Aihara, Alan Perelson, Lena Allweiss, Maura Dandri, Koichi Watashi,
Yasuhito Tanaka, and Shingo Iwami.
Multiscale modeling of hbv infection integrating intra- and
intercellular viral propagation for analyzing extracellular viral markers,
2023.
§ APPENDIX A
The delay single cell HBV dynamics model is given by
dR/dt = α_1 V(t-τ)-α_2 R-δ_r R.
dC/dt = α_2 R(t-τ)+k_1e^-λ S_p(t-τ) D(t-τ)-δ_c C.
dR_g/dt= λ_rgΦ C(t-τ)-μ_1R_g P(t-τ)-δ_r_gR_g,
dR_s/dt= λ_rsΦ C(t-τ)+λ_sdl D_L(t-τ)-λ_s_pR_s- δ_r_sR_s,
dR_h/dt= λ_rhΦ C(t-τ)-δ_r_hR_h,
dH/dt= λ_h R_h(t-τ)-δ_h H,
dP/dt = λ_p R_g(t-τ)-μ_1R_g(t-τ)P-δ_p P,
dZ/dt =μ_1R_g(t-τ)P(t-τ)-μ_2 ZC_p(t-τ)-δ_z Z,
dC_p/dt=λ_c R_g(t-τ)-μ_2 Z(t-τ)C_p-δ_c_pC_p,
dP_g/dt=μ_2 Z(t-τ)C_p(t-τ)-δ_p_gP_g.
dS_p/dt=λ_s_p R_s(t-τ)-η_s_pS_p-δ_s_pS_p,
dS/dt =β_1 P_g(t-τ)- β_2 S-δ_sS,
dD/dt =0.9β_2 S(t-τ)-k_1e^-λ S_p D-k_2(1-e^-λ S_p)D S_p-δ_d D,
dD_L/dt =0.1 β_2 S(t-τ)-δ_d_L D_L,
dV/dt = k_2(1-e^-λ S_p(t-τ))D(t-τ)S_p(t-τ)-δ_v V,
where τ represents the intracellular delay.
|
http://arxiv.org/abs/2307.02642v1
|
20230705201737
|
Valley-controlled transport in graphene/ WSe$_{2}$ heterostructures under an off-resonant polarized light
|
[ "M. Zubair", "P. Vasilopoulos", "M. Tahir" ]
|
cond-mat.mes-hall
|
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "cond-mat.str-el" ]
|
[email protected]; [email protected]
Department of Physics, Concordia University, 7141 Sherbrooke Ouest, Montreal, Quebec H4B 1R6, Canada
[email protected]
Department of Physics, Concordia University, 7141 Sherbrooke Ouest, Montreal, Quebec H4B 1R6, Canada
[email protected]; [email protected]
Department of Physics, Colorado State University, Fort Collins, CO 80523, USA
We investigate the electronic dispersion and transport properties of graphene/WSe_2 heterostructures in the presence of a proximity-induced spin-orbit coupling λ_v, a sublattice potential Δ, and an off-resonant circularly polarized light of frequency Ω that renormalizes Δ to Δ̅_η p = Δ +η p Δ_Ω, with η and p the valley and polarization indices, respectively, and Δ_Ω the gap due to the off-resonant circularly polarized light. Using a low-energy Hamiltonian we find that the interplay between the different perturbation terms leads to inverted spin-orbit coupled bands. At high Ω we study the band structure and dc transport using the Floquet theory and the linear response formalism, respectively. We find that the inverted band structure transforms into the direct one when the off-resonant light is present. The valley-Hall conductivity behaves as an even function of the Fermi energy in the presence and absence of this light. At Δ_Ω = λ_v - Δ a transition occurs from the valley-Hall phase to the anomalous Hall phase. In addition, the valley-Hall conductivity switches sign when the polarization of the off-resonant light changes. The valley polarization vanishes for Δ_Ω = 0 but is finite for Δ_Ω ≠ 0, reflecting the lifting of the valley degeneracy of the energy levels when the off-resonant light is present. The corresponding spin polarization, present for Δ_Ω = 0, increases for Δ_Ω ≠ 0. Further, a pure K or K^' valley polarization is generated when Δ_Ω changes sign. Also, the charge Hall conductivity is finite for Δ_Ω≠ 0 and changes sign when the handedness of the light polarization changes.
Valley-controlled transport in graphene/ WSe_2 heterostructures under an off-resonant polarized light

M. Zubair, P. Vasilopoulos, and M. Tahir

August 1, 2023
§ INTRODUCTION
Since its discovery, graphene has attracted immense attention both theoretically and experimentally due to its peculiar electronic and optical properties <cit.>. However, it has limited use in the field of spintronics because of its very weak intrinsic spin-orbit coupling (SOC). The intrinsic SOC in graphene is theoretically predicted to be weak, about 12 μeV <cit.>, and a value of 20 μeV was reported in a recent experiment for graphene on a SiO_2 substrate <cit.>.
Many efforts have been made to enhance the strength of the SOC in graphene by external means, such as hydrogenation <cit.> or fluorination <cit.>, heavy-adatom decoration <cit.>, and bringing graphene into proximity with other two-dimensional materials, specifically transition metal dichalcogenides (TMDCs) <cit.>. In recent years heterostructures of graphene and TMDCs have become particularly promising because the Dirac cone of graphene fits well within the band gap of the TMDCs, which leaves it intact, while the giant native SOC of the TMDCs is transferred to graphene via hybridization processes. Moreover, combinations of graphene with TMDCs, such as MoS_2 or WSe_2, exhibit a proximity SOC on the meV scale <cit.>.
Proximity-induced SOC is no longer limited to theoretical studies, as it has been demonstrated experimentally as well <cit.>. The breaking of spatial symmetry due to the substrate alters the Hamiltonian and the spin degeneracy of graphene and opens a gap in its massless energy dispersion. In addition, experiments <cit.> have verified that another type of sublattice-resolved intrinsic SOC arises, the so-called valley-Zeeman or staggered SOC, with opposite sign on the A and B sublattices. Further, an enhancement of the Rashba SOC and the creation of staggered potentials are also unavoidable <cit.>.
Nowadays, the optical control of functional materials has become a hot topic in condensed matter physics, creating a bridge between condensed matter physics <cit.> and ultrafast spectroscopy <cit.>. Many intriguing phenomena have been realized in optically driven quantum solids, such as light-induced superconductivity <cit.>, photo-initiated insulator-metal transitions <cit.>, microscopic interactions, such as the electron-phonon one, controlled by light <cit.>, and theoretically predicted Floquet topological phases of matter <cit.>. These Floquet phases have stimulated much interest, but direct evidence for electron-photon Floquet dressed states is scarce to date <cit.>, in contrast with the field of artificial lattices <cit.>.
Recently, a light-induced anomalous Hall effect has been observed experimentally in monolayer graphene using an ultrafast transport technique <cit.> and predicted theoretically using a quantum Liouville equation with relaxation <cit.>. Graphene under the influence of light has also been studied in various other frameworks <cit.>.
In contrast with the large amount of research on proximitized graphene, its transport properties, especially the valley-dependent dc transport treated within Floquet theory, have not been addressed sufficiently. As far as transport in the presence of an off-resonant light is concerned, we are aware only of an electron-transport study in MoS_2 <cit.>, of another one on graphene and the Lieb lattice <cit.>, and of a thermal-transport study in topological insulators in the absence of any SOC <cit.>. Here we investigate theoretically the band structure of laser-driven graphene/WSe_2 heterostructures using Floquet theory in the high-frequency regime. Also, we study dc transport in such heterostructures in the framework of linear response theory. We show that the interplay between the proximity SOCs and the off-resonant light leads to a phase transition from the inverted band regime to the direct one.
Our results are in good agreement with experimental results <cit.> in the limit of vanishing proximity SOCs.
In Sec. <ref> we specify the Hamiltonian and obtain the eigenvalues and eigenfunctions of the proximity modified graphene as well as an analytical expression for the density of states (DOS). In Sec. <ref> we derive analytical expressions for the conductivities and provide numerical results. Conclusions and a summary follow in Sec. <ref>.
§ FORMULATION
The real space tight-binding (TB) Hamiltonian of proximitized graphene is written as <cit.>
H = -t_J ∑_⟨ i,j ⟩,α c_iα^† c_jα + Δ∑_i αη_c_i c_iα^† c_iα
+ (i/3√(3))∑_⟨⟨ i,j ⟩⟩,αα^'λ_I^i ν_ij c_iα^† c_jα^' [s_z]_αα^'
+ (2 i λ_R/3)∑_⟨ i,j ⟩,αα^' c_iα^† c_jα^' [ (s×𝐝̂_ij)_z]_αα^'.
Here t_J is the hopping parameter, c_iα^† creates an electron with spin polarization α at site i that belongs to sublattice A or B, and ⟨ i,j ⟩ (⟨⟨ i,j ⟩⟩) runs over the nearest (second nearest) neighbouring sites. The second term is a staggered on-site potential, which takes into account the effective energy difference experienced by atoms at the lattice sites A (η_c_i=+1) and B (η_c_i=-1), respectively. The third and fourth terms represent the proximity-induced enhancement of the spin orbit coupling (SOC) due to a weak hybridization with the heavy atoms in TMDCs. The third term is the sublattice resolved intrinsic SOC (λ_I^i with i = A,B) where ν_ij=+1, if the second nearest hopping is anticlockwise, and ν_ij=-1 if it is clockwise with respect to the positive z axis. The last term is the Rashba SOC parametrized by λ_R. It arises because the inversion symmetry is broken when the graphene sheet is placed on top of TMDCs. Further, s= (s_x,s_y,s_z) is the Pauli spin matrix and 𝐝̂_ij is the unit vector connecting the sites i and j in the same sublattice.
We analyze the physics of electrons near the Fermi energy using a low-energy effective Hamiltonian derived from Eq. (<ref>) and a Dirac theory around K and K^' points. It reads <cit.>
H_s_zη = v_F(ησ_xp_x+σ_yp_y)+Δσ_z
+λ_R(η s_yσ_x-s_xσ_y)
+ (1/2) [λ_I^A(σ_z + σ_0) + λ_I^B(σ_z - σ_0)]η s_z
.
Here η=+1(-1) denotes the valley K (K^'),
Δ is the mass term that breaks the inversion symmetry, λ_R the Rashba type SOC strength, σ=(σ_x, σ_y, σ_z)
the Pauli matrix that corresponds to the pseudospin (i.e., A-B sublattice); σ_0 is the unit matrix in the sublattice space and v_F (8.2 × 10^5 m/s) denotes the Fermi velocity of Dirac fermions. The last term arises due to the breaking of sublattice symmetry and can be categorized into two groups according to its dependence on sublattice spin: (i) λ_soσ_zη s_z when λ_so= (λ_I^A+λ_I^B)/2. This is called conventional Kane-Mele (KM) type SOC, which has a magnitude of the order of μeV in graphene/TMDCs heterostuctures <cit.>; (ii) λ_vσ_0η s_z when λ_v= (λ_I^A-λ_I^B)/2. It is called valley-Zeeman or staggered SOC and has been experimentally confirmed in graphene on TMDCs <cit.>; it occurs only for λ_I^A=-λ_I^B. Further, Refs. <cit.> show that
λ_so is negligibly small or zero. In view of that, we treat only the regime λ_v>>λ_so and neglect λ_so altogether.
As shown
in Fig. <ref>, monolayer graphene, irradiated by off-resonant circularly polarized light, is grown on WSe_2 that provides a staggered potential and induces SOC in graphene. We study the changes induced by circularly polarized light in graphene/WSe_2 in the presence of a perpendicular electric field E. We describe the monochromatic light through a time-dependent vector potential A⃗(t)= (E_0/Ω)(cosΩ t, psinΩ t) with Ω its frequency, E_0 the amplitude of the field E, and p = +1(-1) for left (right) circular polarization. The vector potential is periodic in time A(t+T)=A(t) with T=2π/ Ω. For high frequencies ħΩ≫ t_J and
low light intensities,
i.e., A^2 << 1 with A=ev_FE_0/ħΩ characterizing the intensity of light, Eq. (<ref>) gives the
Hamiltonian
H_s η(t) = H_s η^0 + V(t)
,
with
H_s_zη^0 = v_F(ησ_x p_x+σ_yp_y)
+ Δσ_z + λ_vσ_0η s_z
+λ_R(η s_yσ_x-s_xσ_y)
V(t) = -
(e v_F/ℏ)[ησ_x A_x(t)+σ_y A_y(t)]
.
For ħΩ≫ t_J and A^2 ≪ 1,
Eq. (<ref>) can be reduced to an effective,
time-independent Hamiltonian H_s_zη^eff using Floquet theory <cit.>. H_s_zη^eff is defined through the time-evolution operator over one period,
Û = T̂exp[-(i/ℏ) ∫_0^T H_s_zη(t) dt]= exp[-(i/ℏ) H_s_zη^eff T],
where T̂ is the time-ordering operator. Using perturbation theory and expanding Û in the limit of large frequency Ω, we obtain
H_s_zη^eff = H_s_zη^0 +
[V_-1 , V_1]/ℏΩ+ O(Ω^-2),
where V_m = (1/T) ∫_0^T e^-im Ω t V(t) dt is the m-th Fourier harmonic of the time-periodic Hamiltonian and [V_-1 , V_1] the commutator between V_-1 and V_1. Corrections to Eq. (<ref>), to all orders of 1/Ω, can be obtained by the method of Ref. <cit.>. Here we neglect them because we treat only the case ħΩ≫ t_J. Using Eqs. (<ref>) and (<ref>) we obtain
H_s_zη^eff = v_F[ησ_x p_x+σ_yp_y] + Δ̅_η pσ_z+ λ_vσ_0η s_z
+λ_R(η s_yσ_x-s_xσ_y),
where Δ̅_η p=Δ + η p Δ_Ω with Δ_Ω=v_F^2e^2E_0^2/ℏΩ^3; Δ̅_η p is the renormalized mass term
due to the circularly polarized light which creates a gap Δ_Ω in pure graphene, i.e., for Δ=0, see Ref. <cit.>.
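As a rough numerical orientation, the size of Δ_Ω can be estimated directly from this expression; the photon energy and field amplitude used below are assumed values chosen only so that ℏΩ≫ t_J and A^2 ≪ 1, not values taken from the references.

# Illustrative estimate of the light-induced gap Delta_Omega = v_F^2 e^2 E_0^2 / (hbar Omega^3).
from scipy import constants as c

v_F = 8.2e5                      # Fermi velocity of the Dirac fermions (m/s)
hbar_Omega = 5.0 * c.e           # photon energy of 5 eV (assumption), in joules
E_0 = 1.0e9                      # field amplitude in V/m (assumption)

Omega = hbar_Omega / c.hbar
Delta_Omega = (c.e * v_F * E_0)**2 / (c.hbar * Omega**3)     # joules
print(f"Delta_Omega = {Delta_Omega / c.e * 1e3:.2f} meV")    # a few meV for these values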
The diagonalization of Eq. (<ref>) gives the dispersion
E_ξ^η p(k) = l {G_η + 2 λ_R^2+ ϵ_k^2
+ 2 s √(Υ)}^1/2.
where ξ ={l, s} and G_η= λ_v^2 + Δ̅_η p^2, Υ = ϵ_k^2λ̅^2 + (λ_R^2 - λ_vΔ̅_η p )^2 with ϵ_k = ℏ v_F k, Δ̅_η p=Δ + η p Δ_Ω and λ̅^2 = λ_R^2 + λ_v^2. Further, l= +1 (-1) denotes the conduction (valence) band and s= +1 (-1) represents the spin-up (spin-down) branches and is not a Pauli matrix s_z. The normalized eigenfunctions for both valleys are
ψ_ξ^+p (k) = (N_ξ^+p/√(S_0)) [ 1; A_ξ^η p e^iϕ; -i B_ξ^η p e^iϕ; -i C_ξ^η p e^2iϕ ]
e^i k· r,
ψ_ξ^-p (k) = (N_ξ^-p/√(S_0)) [ - A_ξ^η p e^iϕ; 1; i C_ξ^η p e^2iϕ; -i B_ξ^η p e^iϕ ]
e^i k· r,
respectively, with
N_ξ^η p = l[1 + ( A_ξ^η p) ^2 + ( B_ξ^η p) ^2 + ( C_ξ^η p) ^2 ]^-1/2,
S_0=L_xL_y the area of the sample, and ϕ = tan^-1(k_y/k_x). Further,
A_ξ^η p = { E_ξ^η p - ηα_1^η} / ϵ_k, B_ξ^η p = 2λ_R{ (E_ξ^η p)^2 - (α_1^η)^2} / ϵ_k{ ( E_ξ^η p + ηα_1^η)( E_ξ^η p - ηα_2^η) - ϵ_k^2}, and C_ξ^η p = 2 λ_R{ E_ξ^η p - ηα_1^η} / { ( E_ξ^η p + ηα_1^η)( E_ξ^η p - ηα_2^η) - ϵ_k^2} with α_1^η= Δ̅_η p+λ_v, and α_2^η= Δ̅_η p-λ_v.
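A minimal numerical sketch of the dispersion of Eq. (<ref>) is given below; the SOC parameters mirror the illustrative values quoted later for the figures (Δ=1 meV, λ_R=2 meV, λ_v=4 meV), while Δ_Ω, the valley index and the light polarization are free inputs.

# Sketch of the Floquet band structure E_{l,s}^{eta p}(k).  Energies in meV, k in 1/nm.
import numpy as np

hbar_vF = 6.582e-16 * 8.2e5 * 1e9 * 1e3      # hbar*v_F ~ 539.7 meV nm for v_F = 8.2e5 m/s

def bands(k, Delta=1.0, lam_v=4.0, lam_R=2.0, Delta_Om=0.0, eta=+1, p=+1):
    """Return the four branches (l, s) = (+,+), (+,-), (-,+), (-,-) at momentum k."""
    eps = hbar_vF * k
    Dbar = Delta + eta * p * Delta_Om
    G = lam_v**2 + Dbar**2
    Ups = eps**2 * (lam_R**2 + lam_v**2) + (lam_R**2 - lam_v * Dbar)**2
    return [l * np.sqrt(G + 2*lam_R**2 + eps**2 + 2*s*np.sqrt(Ups))
            for l in (+1, -1) for s in (+1, -1)]

k = np.linspace(-0.02, 0.02, 401)                                   # near the K point
E_K  = np.array([bands(kk, Delta_Om=6.0, eta=+1) for kk in k])      # K valley, p = +1
E_Kp = np.array([bands(kk, Delta_Om=6.0, eta=-1) for kk in k])      # K' valley
print(E_K[200])     # the four band energies at k = 0 (meV)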
In the numerical calculations throughout the manuscript we use values of the parameters Δ, λ_v, and λ_R somewhat larger than those of <cit.> in order to have well-resolved spin and valley splittings; the overall physics of the system is not changed by doing so. As for the values of Δ_Ω, it is known that the off-resonant light does not directly excite the electrons; instead, it modifies the electron bands through virtual photon absorption processes. To study the topological transitions of the bands, this light must satisfy the conditions ℏΩ≫ t_J and A^2 ≪ 1. Accordingly, we use the values of Δ_Ω from Refs. <cit.>.
The typical band structure (<ref>) for both valleys is illustrated in Fig. <ref> for p=+1, Δ_Ω < Δ+λ_v (inverted band regime), and Δ_Ω > Δ+λ_v (direct band regime). The left panel shows the inverted band regime; the inversion occurs due to the anticrossing of the bands with opposite spins in the presence of the Rashba SOC. The right panel depicts the direct band regime with simple parabolic dispersion. It is found that the spin and valley degeneracies are completely lifted when Δ_Ω> Δ + λ_v, whereas the valley degeneracy is restored in the opposite limit, similar to silicene <cit.>. The valleys are interchanged if the proximitized graphene is irradiated by a right circularly polarized light, p=-1 (not shown here).
§.§ Limiting cases and density of states (DOS)
i) Setting Δ=0 in Eq. (<ref>), we obtain
E_ξ^η p(k) = l {λ_v^2 +Δ_Ω^2+ 2 λ_R^2+ ϵ_k^2
+ 2 s √( Y )}^1/2,
with Y = ϵ_k^2λ̅^2+ (λ_R^2 - ηλ_vΔ_Ω)^2.
ii) In the limit λ_R=0, Eq. (<ref>) reduces to
E_ξ^η p (k) = l [ϵ_k^2 + Δ̅_η p^2]^1/2 + s λ_v.
The DOS per unit area corresponding to Eq. (<ref>) is given by
D(E) = (| E |/2 πℏ^2 v_F^2)∑_η p[ θ(| E | - |E_1g^η p| )(1 - λ̅^2/ M^+)
+ θ(| E | - |E_2g^η p| )(1+ λ̅^2/ M^-)],
with
E_1g^η p = λ_v+ Δ̅_η p,
E_2g^η p = [(λ_v-Δ̅_η p)^2+4λ_R^2]^1/2,
M^± = [(λ_R^2-λ_vΔ̅_η p)^2+ ℏ^2 v_F^2λ̅^2ϵ_±]^1/2,
ℏ^2 v_F^2ϵ_± = E^2+λ_v^2-Δ̅_η p^2 ± 2[λ̅^2 E^2-λ_R^2(λ_v+Δ̅_η p)^2]^1/2 .
In Fig. <ref> we plot the DOS given by Eq. (<ref>).
The two jumps in the DOS indicate that two gaps open at each valley, displaying the clear signature of lifting the spin and valley degeneracies, when graphene on WSe_2 substrate is in the direct band regime. The spin and valley degeneracies are completely lifted in the direct band regime while only the spin degeneracy is lifted in the inverted band regime. Note that the DOS diverges in the inverted band regime as D(E)∝ (E-Δ_1)^-1/2 with Δ_1=λ_R(λ_v+ Δ)/ (λ_R^2+λ_v^2)^1/2 (see green
curves in both panels). This divergence is due to the Mexican-hat energy dispersion <cit.>, cf. Fig. <ref>.
In passing we may add that this behaviour of the DOS remains the same as the broadened one provided the level width Γ is small, Γ < 0.5 meV. For higher Γ the small structure of the DOS curves is smoothened out.
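As a cross-check of the analytic DOS, one can also bin the band energies of the dispersion over a dense k-grid; a rough sketch, reusing the bands() helper defined above, reads:

# Rough numerical DOS per unit area: each isotropic band contributes k dk / (2 pi)
# states per unit area to the energy bin containing E(k).
import numpy as np

k_grid = np.linspace(1e-4, 0.05, 20000)           # 1/nm
E_bins = np.linspace(-30.0, 30.0, 301)            # meV
dos = np.zeros(len(E_bins) - 1)
dk = k_grid[1] - k_grid[0]

for eta in (+1, -1):
    for kk in k_grid:
        for E in bands(kk, Delta_Om=6.0, eta=eta):
            idx = np.searchsorted(E_bins, E) - 1
            if 0 <= idx < len(dos):
                dos[idx] += kk * dk / (2.0 * np.pi)
dos /= (E_bins[1] - E_bins[0])                    # states per meV per nm^2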
§ CONDUCTIVITIES
We consider a many-body system described by the Hamiltonian H = H_0 + H_I - 𝐑·𝐅(t), where H_0 is the unperturbed part, H_I=λ V is a binary-type interaction (e.g., between electrons and impurities or phonons) of strength λ, and -𝐑·𝐅(t) is the interaction of the system with the external field F(t) <cit.>. For conductivity problems we have 𝐅(t) = e 𝐄(t), where 𝐄(t) is the electric field, e the electron charge, 𝐑 = ∑_i r_i, and 𝐫_i the position operator of electron i. In the representation in which H_0 is diagonal, the many-body density operator ρ = ρ^d + ρ^nd has a diagonal part ρ^d and a nondiagonal part ρ^nd. Using ρ = e^-β H and H=H_0 +λ V, all operators are evaluated in the van Hove limit, λ→ 0, t→∞ with λ^2 t finite, and all averages ⟨X⟩=Tr{Xρ} are taken
in the representation in which H_0 is diagonal.
In this representation λ V is assumed nondiagonal; if it has a diagonal part, that part is included in H_0.
Correspondingly, for weak electric fields and weak scattering potentials, for which the first Born approximation applies, the conductivity tensor has a diagonal part σ_μν^d and a nondiagonal part σ_μν^nd; the total conductivity is σ_μν^tot = σ_μν^d + σ_μν^nd, μ,ν = x,y. For further details see Ref. <cit.>.
In general we have two kinds of currents, diffusive and hopping, with σ_μν^d = σ_μν^dif + σ_μν^col, but usually only one of them is present. The term σ_μν^col was introduced in Ref. <cit.> to distinguish collisional current contributions that are different from the standard diffusive ones valid for elastic scattering and characterized by a relaxation time τ. As such, this is the main term for transport in a magnetic field when the diffusion contributions vanish. It also describes hopping between localized states. If no magnetic field is present, the hopping term σ_μν^col vanishes identically and only the term σ_μν^dif survives. For elastic scattering it is given by <cit.>
σ_μν^d = (β e^2/S_0)∑_ζ f_ζ (1 - f_ζ ) v_νζ v_μζ τ_ζ ,
with τ_ζ the momentum relaxation time, and v_μζ the diagonal matrix elements of the velocity operator. Further, f_ζ = [1 + exp [β (E_ζ - E_F)]]^-1 is the Fermi-Dirac distribution function, β = 1/k_BT, and T the temperature.
Regarding the contribution σ_μν^nd one can use the identity f_ζ (1 - f_ζ^')[1 - exp [β (E_ζ - E_ζ^')]] = f_ζ - f_ζ^' and cast the original form <cit.> in the more familiar one
σ_μν^nd = (iℏ e^2/S_0)∑_ζ≠ζ^' (f_ζ - f_ζ^') v_νζζ^' v_μζζ^'/[(E_ζ - E_ζ^')(E_ζ - E_ζ^' - i Γ )] ,
where the sum runs over all quantum numbers ζ and ζ^' with ζ≠ζ^'. The infinitesimal quantity ϵ, in the original form of the conductivity, has been replaced by Γ_ζ to
phenomenologically account for the broadening of the energy levels.
One should keep in mind that a strong disorder may modify the Hall conductivity considerably. However, this problem is not studied here. In Eq. (<ref>) v_νζζ^' and v_μζζ^' are the off-diagonal matrix elements of the velocity operator. The relevant velocity operators are given by v_x= ∂ H / ℏ∂ k_x and v_y= ∂ H / ℏ∂ k_y. With ζ={l, s, k, η, p }={ξ, k, η, p } for brevity, they read
⟨ζ| v_x|ζ^'⟩ = v_F N_ξ^η pN_ξ^'^η p (D_ξ,ξ^'^η p e^iϕ + F_ξ,ξ^'^η p e^-iϕ ) δ_η,η^'δ_k,k^',
⟨ζ^'| v_y|ζ⟩ = i v_F N_ξ^η pN_ξ^'^η p ( D_ξ,ξ^'^η p e^-iϕ - F_ξ,ξ^'^η p e^iϕ ) δ_η,η^'δ_k,k^',
where D_ξ,ξ^'^η p= A_ξ^'^η p+ B_ξ^η p C_ξ^'^η p and F_ξ,ξ^'^η p= A_ξ^η p+ B_ξ^'^η p C_ξ^η p.
The diagonal velocity matrix elements v_xζ=∂ E_ξ^η p/ℏ∂ k_x from Eq. (<ref>) can be readily found:
v_xζ= (l ℏ v_F^2 k_x/E_ξ^η p)[ 1 + sλ̅^2/√(Υ)].
The above general expressions for the conductivities are modified in Floquet theory <cit.> but remain valid for driven systems in the limit of large frequencies and weak light intensity (A≪1), since only the zeroth level of the Floquet states contributes <cit.>, cf. Sec. III. Thus, these states can be taken as the eigenstates of Eq. (<ref>). In addition, although Eq. (<ref>) is perturbative in Ω, the above Hall conductivity expressions are nonperturbative in Ω; that is, an infinitesimal gap Δ̅_η p is sufficient to yield a topological band with a quantized Hall conductance in units of 2 e^2/h <cit.>. Further, the Fermi distribution is nonuniversal for systems out of equilibrium, but for some cases of system-bath couplings <cit.> the steady-state distribution becomes thermal, and we restrict our results to such cases. Additionally, for linear response the electrode chemical potential is small compared with the intrinsic chemical potential of the system, so we ignore it in our calculations; this allows us to write the chemical potential in the Kubo formalism as a constant, i.e. without accounting for sources at the boundaries. It is also worth pointing out that our approach for evaluating the conductivity tensor is the same as, or similar to, that followed in Refs. <cit.> for MoS_2, <cit.> for silicene, and <cit.> for WSe_2; in all of them a perpendicular electric field, not the source-to-drain one, was included in H_0, similarly to our inclusion of the off-resonant light term V(t) in H_0, which was also the case in Ref. <cit.>.
We now calculate the conductivity σ_yx^nd given by Eq. (<ref>). Further, the velocity matrix elements (<ref>) and (<ref>) are diagonal in k, therefore k will be suppressed in order to simplify the notation. The summation in Eq. (<ref>) runs over all quantum numbers ξ, ξ^', η, η^', and k. The parameter Γ_ζ=Γ_ηη^'^ξξ^', that takes into account the level broadening, is assumed independent of the band and valley indices, i.e., Γ_ηη^'^ξξ^'=Γ. Using Eqs. (<ref>) and (<ref>) we can express Eq. (<ref>) as
Reσ_yx^nd(ξ,ξ^',η, p) = (2 e^2ℏ^2 v_F^2/h)
∫ dk k (N_ξ^η pN_ξ^'^η p)^2 (f_ξ k^η p- f_ξ^'k^η p)/[( Δ_ξξ^'^η p)^2+ Γ^2]
×[(D_ξ,ξ^'^η p)^2 - (F_ξ,ξ^'^η p)^2],
Imσ_yx^nd(ξ,ξ^',η, p) = 0,
where Δ_ξξ^'^η p= E_ξ k^η p - E_ξ^' k^η p.
For λ_v=Δ= Δ_Ω=0 and λ_R≠ 0, Eq. (<ref>) vanishes because the factor (D_ξ,ξ^'^η p)^2 - (F_ξ,ξ^'^η p)^2 becomes zero.
Ignoring skew
and
intervalley scatterings, the valley-Hall conductivity (σ_yx^v)
obtained from Eq. (<ref>) can be evaluated as
σ_yx^v = ∑_ξξ^' p[σ_yx^nd(ξ,ξ^', +, p)
- σ_yx^nd(ξ,ξ^',-, p) ],
where we set Reσ_yx^nd(ξ,ξ^', η, p ) ≡σ_yx^nd(ξ,ξ^',η, p). The spin-Hall conductivity σ_yx^s corresponding to Eq. (<ref>) is finite only when both KM and staggered SOCs are present <cit.>. Therefore, σ_yx^s vanishes even in the presence of Rashba SOC. Even if it does not in graphene on WSe_2, it is assumed negligible in the regime λ_v >> λ_so that we treat and we neglect it altogether, see also Sec. <ref>, above Eq. (<ref>). As usual,
we have to multiply σ_yx^v by 1/2e <cit.>.
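As an independent numerical cross-check of the valley-Hall response, one may also diagonalize the 4×4 effective Hamiltonian on a k-grid and integrate the Berry curvature of the occupied bands; the sketch below does this for the illustrative parameters used in the figures and is not the analytic reduction of Eq. (<ref>).

# Valley-resolved Hall conductivity from the 4x4 effective Hamiltonian (sublattice x spin),
# obtained by integrating the Berry curvature of the occupied bands.  Energies in meV, k in 1/nm.
import numpy as np

s0, sx = np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0 + 0j, -1.0])
hbar_vF = 539.7                                      # meV nm for v_F = 8.2e5 m/s

def H_eff(kx, ky, eta, p, Delta=1.0, lam_v=4.0, lam_R=2.0, Delta_Om=0.0):
    Dbar = Delta + eta * p * Delta_Om
    return (hbar_vF * (eta * kx * np.kron(sx, s0) + ky * np.kron(sy, s0))
            + Dbar * np.kron(sz, s0) + lam_v * eta * np.kron(s0, sz)
            + lam_R * (eta * np.kron(sx, sy) - np.kron(sy, sx)))

def hall_one_valley(eta, p, E_F, kmax=0.3, nk=201, **pars):
    """Hall conductivity of one valley in units of e^2/h at zero temperature."""
    ks = np.linspace(-kmax, kmax, nk)
    dk = ks[1] - ks[0]
    vx_op = hbar_vF * eta * np.kron(sx, s0)          # dH/dk_x
    vy_op = hbar_vF * np.kron(sy, s0)                # dH/dk_y
    total = 0.0
    for kx in ks:
        for ky in ks:
            E, U = np.linalg.eigh(H_eff(kx, ky, eta, p, **pars))
            vx, vy = U.conj().T @ vx_op @ U, U.conj().T @ vy_op @ U
            for n in range(4):
                if E[n] > E_F:
                    continue
                for m in range(4):
                    if m != n:                       # Berry curvature of band n
                        total += -2.0 * (vx[n, m] * vy[m, n]).imag / (E[n] - E[m])**2
    return total * dk**2 / (2.0 * np.pi)

# Fermi level in the gap, left-handed light; the difference between the two valleys gives the
# valley-Hall response, whose magnitude is close to 2 (in e^2/h) in the inverted regime.
s_K  = hall_one_valley(+1, +1, E_F=0.0, Delta_Om=0.0)
s_Kp = hall_one_valley(-1, +1, E_F=0.0, Delta_Om=0.0)
print(s_K - s_Kp)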
We can find a simple analytical result from Eq. (<ref>) for the specific case λ_v, λ_R=0 in the low temperature limit. It is
σ_yx^v = e/2h, for -(Δ + η p Δ_Ω) < E_F < Δ + η p Δ_Ω,
σ_yx^v = (e/2h)(ηΔ + p Δ_Ω)/E_F, for E_F > Δ + η p Δ_Ω.
Eqs. (16)-(17) of Ref. <cit.> in the limit λ→ 0 are similar to Eq. (<ref>).
For Δ_Ω→ 0, Eq. (<ref>) reduces to a result reported in Ref. <cit.>. Further, we find the charge Hall conductivity
σ_yx^c = ∑_p ηη^'ξξ^'σ_yx^nd(ξ,ξ^',η,η^', p) = 0 for Δ_Ω = 0, and σ_yx^c ≠ 0 for Δ_Ω≠ 0.
In the limit Δ_Ω→ 0, σ_yx^c vanishes.
We now consider the diagonal component σ_xx^d given by Eq. (<ref>). Using Eq. (<ref>), with ξ=ξ^', we obtain
σ_xx^d(ξ, η, p) = (e^2 v_F^2β/π)
∫ dk k (N_ξ^η p)^4 f_ξ k^η p (1- f_ξ k^η p)
× (A_ξ^η p+ B_ξ^η p C_ξ^η p)^2 τ_ξ k^η p.
At very low temperatures we can make the approximation β f_ξ k^η p (1- f_ξ k^η p)≈δ(E_ξ^η p-E_F) and τ_ξ k^η p=τ_ξ k_F^η p. We find r=σ_xx^nd(ξ, η, p) / σ_xx^d(ξ, η, p) << 1, mainly because
σ_xx^nd(ξ, η, p) ∝Γ. The precise value of r depends on the scattering strength through Γ and τ appearing in σ_xx^d(ξ, η, p). In what follows we neglect σ_xx^nd(ξ, η, p).
After evaluating the integral over k, Eq. (<ref>) becomes
σ_xx^d(ξ, η, p) = (e^2τ_F E_F/πℏ^2)[ Q_ξ^η pθ( E_F - E_1g^η p )(1 - λ̅^2/ M)|_ϵ_+F
+ Q_ξ^η pθ(E_F - E_2g^η p )(1+λ̅^2/ M)|_ϵ_-F],
where Q_ξ^η p=(A_ξ^η p+ B_ξ^η p C_ξ^η p )^2 (N_ξ^η p )^4 and τ_F≡τ_ξ k_F^η p is the relaxation time evaluated at the Fermi level.
As indicated, the 1st and 2nd line in the square brackets are to be evaluated at ϵ_+F and ϵ_-F,
respectively, where ϵ_± F is obtained from Eq. (<ref>) for E = E_F. To evaluate Eq. (<ref>) numerically we used a Lorentzian broadening of
δ(E_ξ^η p-E_F).
The valley P_v and spin P_s polarizations, corresponding to Eq. (<ref>), are
P_v=∑_ξ p [σ_xx^d(l,s,+,p)-σ_xx^d(l,s,-,p)]/[σ_xx^d(l,s,+,p)+σ_xx^d(l,s,-,p)],
and
P_s=∑_η p l [σ_xx^d(l,+,η, p)-σ_xx^d(l,-,η, p)]/[σ_xx^d(l,+,η, p)+σ_xx^d(l,-,η, p)].
In Fig. <ref> we plot the conductivity, given by
Eq. (<ref>), as a function of the Fermi energy E_F by evaluating the integral over k numerically for two values of the parameter Δ_Ω and p=+1. Further, the left panel represents the valley-dependent contribution of Eq. (<ref>), with both spins included, whereas the right one depicts its spin-dependent contribution with both valleys included.
To display the result clearly, we set Δ=1 meV, λ_R=2 meV, λ_v=4 meV, and τ_F= 1 × 10^-15 sec.
We find that σ_xx^d(ξ, η, p) vanishes when E_F is in the gap while it increases linearly when E_F is outside the gap. The kink appears when E_F crosses the conduction band (E_++^η +).
Moreover, we find σ_xx^d(ξ, +, +) = σ_xx^d(ξ, -, +) in the inverted band regime (Δ_Ω=0) while σ_xx^d(ξ, +, +) ≠σ_xx^d(ξ, -, +) in the direct band regime (Δ_Ω≠ 0). We
also verified that the analytical result (Eq. (<ref>)) agrees well with the numerical one obtained from Eq. (<ref>).
We plot the total longitudinal conductivity, with both valleys and spins included, in Fig. <ref> for different values of Δ_Ω. As expected,
σ_xx^d is an even function of Δ_Ω. In addition, the band gap increases with
Δ_Ω.
The valley P_v and spin P_s polarizations versus E_F are shown in Fig. <ref> for λ_R=4 meV and three different values of
Δ_Ω. It can be seen that P_v= 0 in the inverted band regime while P_v≠ 0 in the direct band one. In other words, the valley polarization can be switched on and off by controlling the parameter Δ_Ω. On the other hand, P_s≠
0 in both band regimes. It is interesting to study P_v in the direct band regime (Δ_Ω≠ 0). The contribution of σ_xx^d(ξ,+) to P_v is zero in the range λ_v+ Δ - Δ_Ω⩽ E_F < λ_v+ Δ + Δ_Ω. Thus, P_v=1, which is a pure K^' valley polarization for Δ_Ω≠ 0. When we change the polarization of light to p=-1, a pure K valley polarization is obtained. That is, one can easily reverse
the valley polarization by reversing that of the
circularly polarized light. This result may be useful in
valleytronics applications, such as making valley valves <cit.>.
In Fig. <ref> we show the numerically evaluated valley-Hall conductivity σ^v_yx, from Eq. (<ref>),
in the inverted (Δ_Ω =0) and direct (Δ_Ω≠ 0) band regimes for
l= l^' with s ≠ s^',
as well as for
l ≠ l^' with s = s^' and s ≠ s^'. We used a sufficiently low temperature (T=1 K) to ensure that thermal vibrations of atoms have a negligible contribution to the electron transport. σ^v_yx is quantized and has the universal value 2 e^2/h when the Fermi level is in the gap -1 meV ≤ E_F≤ 1 meV (see green curve, compare with the DOS in Fig. <ref>).
Its absolute value is reduced outside the gap as E_F increases. The two peaks, to the left and right of the gap, at E_F≈± 1.5 meV, appear due to the inverted band structure or the Mexican hat-like dispersion as can be seen in the inset of Fig. <ref>. σ^v_yx vanishes
when E_F is in the gap in the direct band regime Δ_Ω≠ 0 as the blue curve shows.
The reason is that in this case electrons from both valleys flow in opposite directions and their contributions to the valley current exactly cancel each other. A non zero valley-Hall current is produced when E_F crosses the conduction and valence bands. When E_F grows further, the conductivity decreases. It is also worth noticing that the valley conductivity changes sign (not shown) if proximitized graphene is irradiated by a right circularly polarized light (p=-1).
For Δ_Ω = 0 a quantized valley-Hall conductivity of 2 e^2/h is obtained in the band gap as can be seen from the
green curve in the inset of Fig. <ref>. On the other hand, for Δ_Ω≠ 0 the
valley-Hall conductivity is quenched to zero within the band gap (see the blue curve of Fig. <ref>), while a quantized charge Hall conductivity of 2 e^2/h and -2 e^2/h is obtained for the left- and right-handed circularly polarized light, respectively, as shown in Fig. <ref>. The reason for the change 2 e^2/h→ -2 e^2/h is that
this nondiagonal contribution to the conductivity is an odd function of Δ_Ω.
§ SUMMARY AND CONCLUSION
We
investigated the valley-dependent dc transport by employing the linear response formalism and Floquet theory in the
high-frequency limit as well as the energy dispersion in the presence of proximity-induced gaps.
We derived analytical expressions for the energy dispersion relation of Dirac fermions, the DOS, and the diagonal and nondiagonal parts of the conductivity. We found that a transition occurs from an inverted band regime to a direct one for Δ_Ω > Δ + λ_v (see Fig. <ref>). In addition, the energy dispersion shows a complete lifting of the fourfold spin and valley degeneracies in the direct band structure while it has a twofold valley degeneracy in the inverted band phase. We demonstrated that the DOS exhibits a van Hove singularity due to the inverted band structure, which remained unchanged as long as Δ_Ω< Δ + λ_v. The four jumps in the DOS are due to the lifting of the fourfold spin and valley degeneracy in the direct band regime in contrast to pristine graphene, cf. Fig. <ref>.
We showed that the valley polarization P_v vanishes for Δ_Ω < Δ + λ_v while for Δ_Ω > Δ + λ_v it is finite, P_v≠ 0; this might be useful in the design of valleytronics devices such as optically controlled valley filters and valves based on proximitized graphene. On the other hand, P_s≠ 0 in both band regimes.
Further, 100% K or K^' valley polarization is achieved in the range λ_v+ Δ - Δ_Ω⩽ E_F < λ_v+ Δ + Δ_Ω when
the handedness of the light polarization changes.
We found that, when E_F is in the gap, σ_yx^v=2 e^2/h in the inverted band regime while σ_yx^v= 0 in the direct band regime. Peaks are found in the curve of σ_yx^v versus E_F when E_F crosses the inverted dispersion, see the green curve in Fig. <ref>. Moreover, for Δ_Ω> Δ + λ_v, we have σ_yx^v≠ 0 when E_F crosses the conduction and valence bands. The valley-Hall conductivity tends to σ_yx^v= 0 in both the inverted and direct band regimes in the limit E_F→±∞. A last finding is that the
charge Hall conductivity is finite for Δ_Ω≠ 0 and changes sign when the handedness of the light polarization changes.
Our results may be pertinent to developing future spintronics and valleytronics devices such as field-effect tunnelling transistors, memory devices, phototransistors, etc.
M. Z. and P. V. acknowledge the support of the Concordia University Grant No. NGR034 and a Concordia University Merit Fellowship. The work of M. T. was supported by Colorado State University.
99
ii1 A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.109Rev. Mod. Phys. 81, 109 (2009).
f7 M. Gmitra, S. Konschuh, C. Ertler, C. Ambrosch-Draxl, and J. Fabian, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.80.235431Phys. Rev. B 80, 235431 (2009).
ii2 J. Sichau, M. Prada, T. Anlauf, T. J. Lyon, B. Bosnjak, L. Tiemann, and R. H. Blick, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.046403Phys. Rev. Lett. 122, 046403 (2019).
ii3 A. A. Kaverzin and B. J. van Wees, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.165412Phys. Rev. B 91, 165412 (2015).
ii4 J. Balakrishnan, G. K. W. Koon, M. Jaiswal, A. H. C. Neto and B. C. Ozyilmaz, https://www.nature.com/articles/s42005-019-0143-7Nat. Phys. 9, 284 (2013).
ii5 X. Hong, S.-H. Cheng, C. Herding, and J. Zhu, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.83.085410Phys. Rev. B 83, 085410 (2011).
ii6 Z. Jia, B. Yan, J. Niu, Q. Han, R. Zhu, D. Yu and X. Wu, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.085411Phys. Rev. B 91, 085411 (2015).
ii7 U. Chandni, E. A. Henriksen and J. P. Eisenstein, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.245402Phys. Rev. B 91, 245402 (2015).
ii8 Y.-C. Lin, N. Lu, N. Perea-Lopez, J. Li, Z. Lin, X. Peng, C. H. Lee, C. Sun, L. Calderin, P. N. Browning, M. S. Bresnehan, M. J. Kim, T. S. Mayer, M. Terrones, and J. A. Robinson, https://pubs.acs.org/doi/10.1021/nn5003858ACS Nano 8, 3715 (2014).
ii9 M.-Y. Lin, C.-E. Chang, C.-H. Wang, C.-F. Su, C. Chen, S.-C. Lee, and S.-Y. Lin, https://aip.scitation.org/doi/10.1063/1.4893448Appl. Phys. Lett. 105, 073501 (2014).
ii10 A. Azizi, S. Eichfeld, G. Geschwind, K. Zhang, B. Jiang, D. Mukherjee, L. Hossain, A. F. Piasecki, B. Kabius, J. A. Robinson, and Nasim Alem, https://pubs.acs.org/doi/10.1021/acsnano.5b01677ACS Nano 9, 4882 (2015).
nii0 M. Gmitra and J. Fabian, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.92.155403 Phys. Rev. B 92, 155403 (2015).
nii1 Z. Wang, D.-K. Ki, J. Y. Khoo, D. Mauro, H. Berger, L. S. Levitov, and A. F. Morpurgo, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.6.041020Phys. Rev. X 6, 041020 (2016).
nii2 T. Völkl, T. Rockinger, M. Drienovsky, K. Watanabe, T. Taniguchi, D.Weiss, and J. Eroms, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.96.125405Phys. Rev. B 96, 125405 (2017).
nii3 A. Avsar, J. Y. Tan, T. Taychatanapat, J. Balakrishnan, G. K. W. Koon, Y. Yeo, J. Lahiri, A. Carvalho, A. S. Rodin, E. C. T. O'Farrell, G. Eda, A. H. Castro Neto, and B. Özyilmaz, https://www.nature.com/articles/ncomms5875Nat. Commun. 5, 4875 (2014).
nii4 S. Omar and B. J. van Wees, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.95.081404Phys. Rev. B 95, 081404(R) (2017).
nii5 A. Dankert and S. P. Dash, https://www.nature.com/articles/ncomms16093Nat. Commun. 8, 16093 (2017).
nii6 M. Offidani, M. Milletarì, R. Raimondi, and A. Ferreira, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.196801Phys. Rev. Lett. 119, 196801 (2017).
f8 S. Zihlmann, A. W. Cummings, J. H. Garcia, M. Kedves, K. Watanabe, T. Taniguchi, C. Schönenberger, and P. Makk, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.97. 075434Phys. Rev. B 97, 075434 (2018).
ii11 J. H. Garcia, M. Vila, A. W. Cummings, and S. Roche, https://pubs.rsc.org/en/content/articlelanding/2018/cs/c7cs00864c#!divAbstractChem. Soc. Rev. 47, 3359 (2018).
ii12 A. W. Cummings, J. H. Garcia, J. Fabian, and S. Roche, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.206601Phys. Rev. Lett. 119, 206601 (2017).
ii13 T. S. Ghiasi, J. Ingla-Aynés, A. A. Kaverzin, and B. J. van Wees, https://pubs.acs.org/doi/abs/10.1021/acs.nanolett.7b03460Nano Lett. 17, 7528 (2017).
ii14 L. A. Benítez, J. F. Sierra, W. Savero Torres, A. Arrighi, F. Bonell, M. V. Costache, and S. O. Valenzuela, https://www.nature.com/articles/s41567-017-0019-2Nat. Phys. 14, 303 (2018).
f3 D. Kochan, S. Irmer, and J. Fabian, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.95.165415Phys. Rev. B 95, 165415 (2017).
i1 D. N. Basov, R. D. Averitt, and D. Hsieh, https://www.nature.com/articles/nmat5017Nat. Mater. 16, 1077 (2017).
i2 F. Krausz and M. I. Stockman, https://www.nature.com/articles/nphoton.2014.28Nat. Photon. 8, 205 (2014).
i3 D. Fausti, R. I. Tobey, N. Dean, S. Kaiser, A. Dienst, M. C. Hoffmann, S. Pyon, T. Takayama, H. Takagi, and A. Cavalleri, https://science.sciencemag.org/content/331/6014/189Science 331, 189 (2011).
i4 M.Mitrano, A. Cantaluppi, D. Nicoletti, S. Kaiser, A. Perucchi, S. Lupi, P. Di Pietro, D. Pontiroli, M. Riccó, S. R. Clark, D. Jaksch, and A. Cavalleri, https://www.nature.com/articles/nature16522Nature 530, 461 (2016).
i7 M. Rini, A. Cavalleri, R. W. Schoenlein, R. López, L. C. Feldman, R. F. Haglund, L. A. Boatner, and T. E. Haynes, https://www.osapublishing.org/ol/abstract.cfm?uri=ol-30-5-558Opt. Lett. 30, 558 (2005).
i8 M. Liu, H. Y. Hwang, H. Tao, A. C. Strikwerda, K. Fan, G. R. Keiser, A. J. Sternbach, K. G. West, S. Kittiwatanakul, J. Lu, S. A. Wolf, F. G. Omenetto, X. Zhang, K. A. Nelson, and R. D. Averitt, https://www.nature.com/articles/nature11231Nature 487, 345 (2012).
i9 E. Pomarico, M. Mitrano, H. Bromberger, M. A. Sentef, A. Al-Temimy, C. Coletti, A. Stöhr, S. Link, U. Starke, C. Cacho, R. Chapman, E. Springate, A. Cavalleri, and I. Gierz, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.95.024304Phys. Rev. B 95, 024304 (2017).
i10 D. M. Kennes, E. Y. Wilner, D. R. Reichman, and A. J. Millis, https://www.nature.com/articles/nphys4024Nat. Phys. 13, 479 (2017).
i11 M. A. Sentef, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.95.205111Phys. Rev. B 95, 205111 (2017).
i12 T. Oka and H. Aoki, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.79.081406Phys. Rev. B 79, 081406(R) (2009).
i13 T. Kitagawa, T. Oka, A. Brataas, L. Fu, and E. Demler, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.84.235108Phys. Rev. B 84, 235108 (2011).
i14 N. H. Lindner, G. Refael, and V. Galitski, https://www.nature.com/articles/nphys1926Nat. Phys. 7, 490 (2011).
i15 M. A. Sentef, M. Claassen, A. F. Kemper, B. Moritz, T. Oka, J. K. Freericks, and T. P. Devereaux, https://www.nature.com/articles/ncomms8047Nat. Commun. 6, 7047 (2015).
i16 H. Hübener, M. A. Sentef, U. De Giovannini, A. F. Kemper, and A. Rubio, https://www.nature.com/articles/ncomms13940Nat. Commun. 8, 13940 (2017).
i17 Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, https://science.sciencemag.org/content/342/6157/453Science 342, 453 (2013).
i18 F. Mahmood, C.-K. Chan, Z. Alpichshev, D. Gardner, Y. Lee, P. A. Lee, and N. Gedik, https://www.nature.com/articles/nphys3609Nat. Phys. 12, 306 (2016).
i21 H. Miyake, G. A. Siviloglou, C. J. Kennedy, W. C. Burton, and W. Ketterle, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.185302 Phys. Rev. Lett. 111, 185302 (2013).
i23 N. Fläschner, B. S. Rem, M. Tarnowski, D. Vogel, D.-S. Lühmann, K. Sengstock, and C.Weitenberg, https://science.sciencemag.org/content/352/6289/1091Science 352, 1091 (2016).
i28 M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, https://www.nature.com/articles/nature12066Nature 496, 196 (2013).
i29 M. Aidelsburger, S. Nascimbene, and N. Goldman, https://www.sciencedirect.com/science/article/pii/S1631070518300318C. R. Phys. 19, 394 (2018).
i34 T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.91.015006Rev. Mod. Phys. 91, 015006 (2019).
i35 L. Asteria, D. T. Tran, T. Ozawa, M. Tarnowski, B. S. Rem, N. Fläschner, K. Sengstock, N. Goldman, and C. Weitenberg, https://www.nature.com/articles/s41567-019-0417-8Nat. Phys. 15, 449 (2019).
i36 J. W. McIver, B. Schulte, F.-U. Stein, T. Matsuyama, G. Jotzu, G. Meier, and A. Cavalleri, https://www.nature.com/articles/s41567-019-0698-yNat. Phys. 16, 38 (2020).
i37 S. A. Sato, J. W. McIver, M. Nuske, P. Tang, G. Jotzu, B. Schulte, H. Hübener, U. De Giovannini, L. Mathey, M. A. Sentef, A. Cavalleri, and A. Rubio, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.214302 Phys. Rev. B 99, 214302 (2019).
i38 M. Bukov, L. D'Alessio, and A. Polkovnikov, https://www.tandfonline.com/doi/abs/10.1080/00018732.2015.1055918?journalCode=tadp20Adv. Phys. 64, 139 (2015).
i39 L. E. F. Foa Torres, P. M. Perez-Piskunow, C. A. Balseiro, and G. Usaj, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.266801Phys. Rev. Lett. 113, 266801 (2014).
i40 G. Usaj, P. M. Perez-Piskunow, L. E. F. Foa Torres, and C. A. Balseiro, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.90.115423Phys. Rev. B 90, 115423 (2014).
i42 H. Dehghani, T. Oka, and A. Mitra, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.90.195429Phys. Rev. B 90, 195429 (2014).
i44 A. Kundu, H. A. Fertig, and B. Seradjeh, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.236803Phys. Rev. Lett. 113, 236803 (2014).
f14 M. Tahir, A. Manchon, and U. Schwingenschlögl, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.90.125438Phys. Rev. B 90, 125438 (2014).
Tak T. Mikami, S. Kitamura, K. Yasuda, N. Tsuji, T. Oka, and H. Aoki, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.144307Phys. Rev. B 93, 144307 (2016).
tah M. Tahir and P. Vasilopoulos, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.115311Phys. Rev. B 91, 115311 (2015).
f1 M. Gmitra, D. Kochan, P. Högl, and J. Fabian, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.155104Phys. Rev. B 93, 155104 (2016).
ff2 C. L. Kane and E. J. Mele, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.95.226801Phys. Rev. Lett. 95, 226801 (2005); ,https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.95.146802 95, 146802 (2005).
f4 A. M. Alsharari, M. M. Asmar, and S. E. Ulloa, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.98.195129Phys. Rev. B 98, 195129 (2018).
f5 Z. Wang, D. K. Ki, H. Chen, H. Berger, A. H. MacDonald, and A. F. Morpurgo, https://www.nature.com/articles/ncomms9339Nat. Commun. 6, 8339 (2015).
f6 B. Yang, M.-F. Tu, J. Kim, Y. Wu, H. Wang, J. Alicea, R. Wu, M. Bockrath and J. Shi, https://iopscience.iop.org/article/10.1088/2053-1583/3/3/031012/meta2D Mater. 3, 031012 (2016).
nc C. J. Tabert and E. J. Nicol, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.87.235426Phys. Rev. B 87, 235426 (2013).
nc1 E. J. Nicol and J. P. Carbotte, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.77.155409Phys. Rev. B 77, 155409 (2008).
f13 M. Charbonneau, K. M. Van Vliet, and P. Vasilopoulos, https://aip.scitation.org/doi/10.1063/1.525355J. Math. Phys. 23, 318 (1982).
flt2 T. Iadecola, T. Neupert, C. Chamon, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.235133Phys Rev. B 91, 235133 (2015).
fr3 C. J. Tabert and E. J. Nicol,
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.197402Phys. Rev. Lett. 110, 197402 (2013);
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.88.085434Phys. Rev. B 88, 085434 (2013).
fr4 Kh. Shakouri, P. Vasilopoulos, V. Vargiamidis, and F. M. Peeters,
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.90.235423Phys. Rev. B 90, 235423 (2014).
fr5 M Tahir, P. M. Krstajić, and P. Vasilopoulos, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.98.075429
Phys. Rev. B 98, 075429 (2018);
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.95.235402Phys. Rev B 95, 235402 (2017).
fff1 C. K. Safeer, J. Ingla-Aynés, F. Herling, J. H. Garcia, M. Vila, N. Ontoso, M. Reyes Calvo, S. Roche, L. E. Hueso, and F. Casanova, https://pubs.acs.org/doi/10.1021/acs.nanolett.8b04368Nano Lett. 19, 1074 (2019).
f15 Di Xiao, Wang Yao, and Qian Niu, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.99.236809Phys. Rev. Lett. 99, 236809 (2007).
f16 A. Rycerz, J. Tworzydlo, and C. Beenakker, https://www.nature.com/articles/nphys547 Nat. Phys. 3, 172 (2007).
|
http://arxiv.org/abs/2307.01072v2
|
20230703145357
|
Implications of Nano-Hertz Gravitational Waves on Electroweak Phase Transition in the Singlet Dark Matter Model
|
[
"Yang Xiao",
"Jin Min Yang",
"Yang Zhang"
] |
hep-ph
|
[
"hep-ph"
] |
[][email protected]
CAS Key Laboratory of Theoretical Physics,
Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, P. R. China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, P. R. China
[][email protected]
CAS Key Laboratory of Theoretical Physics,
Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, P. R. China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, P. R. China
[][email protected]
School of Physics, Zhengzhou University, Zhengzhou 450000, P. R. China
CAS Key Laboratory of Theoretical Physics,
Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, P. R. China
Inspired by the recent evidence of nano-Hertz stochastic gravitational waves observed by the pulsar timing array collaborations, we explore their implied supercooled electroweak phase transition in the singlet extension of the Standard Model. Our findings reveal that by adjusting the model parameter at the per-mille level, the corresponding percolation temperature can be continuously lowered to 1 GeV. With such a low percolation temperature, the singlet dark matter may freeze out before the completion of the electroweak phase transition, and, consequently, the entropy generated during the transition can significantly affect the dark matter relic density and other related constraints.
Implications of Nano-Hertz Gravitational Waves on Electroweak Phase Transition in the Singlet Dark Matter Model
Yang Zhang
August 1, 2023
===================================================================================================================
§ INTRODUCTION
Recently, the NANOGrav, EPTA, PPTA, and CPTA collaborations reported positive evidence for the presence of a stochastic gravitational wave (GW) background in the 𝒪(1∼10) nHz frequency band using pulsar timing arrays (PTAs) <cit.>.
This background can be produced through a variety of cosmological processes <cit.>. In a model-independent Bayesian analysis of the NANOGrav data, a cosmological phase transition with a percolation temperature around 1 GeV is favored <cit.>.
However, in popular new physics models beyond the Standard Model (SM), the electroweak phase transition (EWPT) occurs at around 100 GeV and concludes rapidly, resulting in a milli-Hertz stochastic gravitational wave background <cit.>. Therefore, it is difficult to explain the observed nano-Hertz GW signals using EWPT.
Fortunately, the phase transition can be postponed in the case of supercooling. Generally, the percolation temperature, at which the majority of true vacuum bubbles collide, is typically lower than the nucleation temperature by no more than 10 GeV <cit.>. The study in <cit.> found that the percolation temperature can descend to a few MeV, with a nucleation temperature of approximately 50 GeV, in a toy model that is based on a non-linear realization of the electroweak gauge group. A similar type of extremely supercooled first-order phase transition (FOPT) is investigated in <cit.> within the framework of the SM extended with a dimension-six operator.
In this study, we investigate the phenomenon of extreme supercooling within a more realistic model, i.e., the singlet extension of the SM under ℤ_2 symmetry (xSM), which contains a Weakly Interacting Massive Particle (WIMP) as the dark matter (DM) candidate. This model is highly restricted by the DM direct detection limits <cit.>. Even when taking into account the dilution effect caused by the supercooled phase transition, these constraints cannot be alleviated as the freeze-out temperature is lower than the nucleation temperature <cit.>. Nonetheless, inspired by the observed evidence of nano-Hertz stochastic gravitational waves, it is possible that the EWPT ends at a temperature of a few GeV. In this scenario, the freeze-out of DM may occur before the completion of the phase transition, and thus the DM density can be diluted by entropy release during the strong first-order phase transition.
By and large, it is possible to generate the reported nano-Hertz stochastic gravitational waves through an extremely supercooled EWPT in new physics models. Accordingly, the relevant phenomenology of DM needs to be revisited, as the DM decouples from other particles during the EWPT, which may have significant implication for the abundance of DM.
The work is organized as follows. In Section II, we provide an introduction to our model and discuss the physics associated with the phase transition. Section III shows the range of nucleation temperature and percolation temperature in the xSM, and demonstrates the corresponding spectrum of gravitational wave. In Section IV, we analyze the implications of a low percolation temperature on the calculations of dark matter. Finally, we summarize our findings and draw our conclusion in Section V.
§ SINGLET EXTENSIONS OF THE SM
The xSM is one of the simplest and most predictive realisations of the WIMP scenario. In this model, the addition of an extra scalar field allows for the generation of a potential barrier between the high-temperature symmetric minimum and the electroweak symmetry breaking (EWSB) minimum as the universe cools down. This results in a strong first-order EWPT, which has the potential to generate the observed baryon asymmetry of the universe and produce detectable stochastic gravitational waves. See <cit.> for recent reviews.
After some parameterization, the tree-level effective potential of the xSM can be expressed as
V_0(ϕ_h,ϕ_s) = -μ_h^2/2ϕ_h^2 + λ_h/4ϕ_h^4 - μ_s^2/2ϕ_s^2 + λ_s/4ϕ_s^4 + λ_hs/4ϕ_h^2ϕ_s^2,
where ϕ_h and ϕ_s represent the background field configurations for the SM Higgs and the additional scalar fields, respectively. The model parameters satisfy the tadpole conditions,
. ∂ V_0/∂ϕ_h|_𝐯 = 0 , . ∂ V_0/∂ϕ_s|_𝐯 = 0 ,
. ∂^2 V_0/∂ϕ_h^2|_𝐯 = m_h^2 , . ∂^2 V_0/∂ϕ_s^2|_𝐯 = m_s^2,
at the electroweak vacuum 𝐯≡ (v_ EW, 0). Here we set m_h=125 GeV and v_ EW = 246 GeV. As a result, there remain three free parameters, namely m_s, λ_s, and λ_hs. For simplicity, we incorporate the one-loop correction using the on-shell-like renormalization scheme in Landau gauge, which maintains the above tadpole conditions.
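For orientation, the tree-level tadpole conditions can be inverted in closed form for λ_h, μ_h^2 and μ_s^2. The short Python sketch below does exactly that; it is a minimal illustration that ignores the one-loop, on-shell-like corrections described above, and the benchmark values (m_s = 234 GeV, λ_hs ≃ 1.96) are those quoted later in the text.

# A minimal sketch of the tree-level tadpole relations implied by V_0 above;
# the one-loop corrections used in the actual analysis are not included here.
V_EW, M_H = 246.0, 125.0                         # GeV

def tree_level_parameters(m_s, lam_hs):
    lam_h = M_H**2 / (2.0 * V_EW**2)             # from d^2 V_0 / d phi_h^2 = m_h^2 at (v_EW, 0)
    mu_h2 = lam_h * V_EW**2                      # from d V_0 / d phi_h = 0 at (v_EW, 0)
    mu_s2 = 0.5 * lam_hs * V_EW**2 - m_s**2      # from d^2 V_0 / d phi_s^2 = m_s^2 at (v_EW, 0)
    return lam_h, mu_h2, mu_s2

lam_h, mu_h2, mu_s2 = tree_level_parameters(m_s=234.0, lam_hs=1.96)
print(f"lambda_h = {lam_h:.4f}, mu_h^2 = {mu_h2:.0f} GeV^2, mu_s^2 = {mu_s2:.0f} GeV^2")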
The Parwani method <cit.> is adopted for daisy resummation, while the loop contribution from Goldstone bosons is neglected to remedy the infrared divergences. The resummed effective theory enables advanced state-of-the-art calculations <cit.>. The Mathematica package DRalgo can be used to perform these computations <cit.>.
In general, altering the settings in the effective potential would have a tolerable impact on the properties of EWPT, except when the transition temperature is sensitive to the model parameter <cit.>.
Unfortunately, the extremely supercooled scenario is highly sensitive to the model parameters. Consequently, even slight changes to the above settings can lead to different percolation temperatures. However, we have observed that the changes in the percolation temperature with varying λ_hs are continuous, unlike the behavior of the nucleation temperature. This implies that we can always tune the model parameters to achieve a similar result to the one shown below.
In the evolution of the effective potential at finite temperature, there are three commonly used temperatures to characterize the process of phase transition: the critical temperature T_c, the nucleation temperature T_n, and the percolation temperature T_p.
The critical temperature T_c is defined as the temperature at which the two minimums become degenerate,
V(v_h^ high,v_s^ high;T_c) = V(v_h^ low,v_s^ low;T_c),
where V(ϕ_h,ϕ_s,T) is the full one-loop finite temperature effective potential. The minima (v_h^ high,v_s^ high) and (v_h^ low,v_s^ low) correspond to the high-temperature symmetric minimum and the low-temperature EWSB minimum, respectively. In the xSM, we have v_h^ high=0 and v_s^ low=0, while v_s^ high usually increases continuously from zero and v_h^ low approaches v_ EW at zero temperature.
When the temperature of the universe falls below T_c, the low-temperature EWSB minimum starts to have lower free energy than that of the high-temperature symmetric minimum. Thus some regions of the symmetric plasma tunnel to the true vacuum with a probability per unit volume per unit time <cit.>:
Γ∼ A e^-S,
where S is given by
S = 2 π^2 ∫^+∞_0 r^3 dr [1/2(∂ϕ/∂ r)^2 + V_ eff(ϕ;T)],   T ≈ 0,
S = 4 π/T∫^+∞_0 r^2 dr [1/2(∂ϕ/∂ r)^2 + V_ eff(ϕ;T)],   T ≫ 0.
The bubble configuration ϕ(r) in the integral is fixed from the corresponding equation of motion
d^2 ϕ/ d r^2 + d-1/r dϕ/ dr = ∂ V_ eff(ϕ;T)/∂ϕ,
subject to the boundary conditions lim_r →∞ϕ(r) = 0 and dϕ/ d r|_r=0=0 <cit.>. The pre-factor A is often estimated as the fourth power of the temperature when the temperature is high and as the fourth power of the energy scale when the temperature is zero. In this paper, we set A to T_c^4 ∼𝒪(100 GeV)^4 as in <cit.> for all temperatures, as the EWPT energy scale is of the same order as T_c.
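For a single scalar direction, the bounce configuration satisfying the equation of motion above can be found with the standard overshoot/undershoot shooting method. The Python sketch below illustrates this for an illustrative quartic potential (not the finite-temperature xSM potential) and the O(3)-symmetric, high-temperature case.

import numpy as np
from scipy.integrate import solve_ivp

# Toy single-field potential standing in for V_eff(phi; T) at fixed T:
# false vacuum at phi = 0 (V = 0), barrier near phi ~ 0.65, true vacuum near phi ~ 3.85.
# The coefficients are illustrative assumptions, not derived from the xSM.
def V(phi):
    return 0.5 * phi**2 - 0.6 * phi**3 + 0.1 * phi**4

def dV(phi):
    return phi - 1.8 * phi**2 + 0.4 * phi**3

def rhs(r, y, d):
    phi, dphi = y
    # O(d)-symmetric bounce equation: phi'' + (d-1)/r * phi' = dV/dphi
    return [dphi, dV(phi) - (d - 1) / r * dphi]

def crossed(r, y, d):            # terminal event: the field overshoots past the false vacuum
    return y[0] + 0.01
crossed.terminal = True

def shoot(phi0, d=3, r_max=60.0):
    return solve_ivp(rhs, (1e-6, r_max), [phi0, 0.0], args=(d,),
                     events=crossed, rtol=1e-9, atol=1e-11, max_step=0.05)

# Overshoot/undershoot bisection on the release value phi(0).
lo, hi, d = 0.7, 3.8, 3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    sol = shoot(mid, d)
    if sol.status == 1 or sol.y[0].min() < 0.0:   # overshot through phi = 0
        hi = mid
    else:                                          # undershot: turned around before reaching phi = 0
        lo = mid

sol = shoot(0.5 * (lo + hi), d)
r, (phi, dphi) = sol.t, sol.y
# Truncate at the false-vacuum tail: either phi has essentially reached 0 or it turns around.
bad = (phi < 1e-3) | (dphi > 0.0)
stop = int(np.argmax(bad)) if bad.any() else len(r)
r, phi, dphi = r[:stop], phi[:stop], dphi[:stop]

# Three-dimensional Euclidean action S_3; S = S_3/T enters Gamma ~ A exp(-S).
S3 = np.trapz(4.0 * np.pi * r**2 * (0.5 * dphi**2 + V(phi) - V(0.0)), r)
print(f"release point phi(0) = {0.5 * (lo + hi):.6f}, toy S_3 = {S3:.2f}")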
The nucleation rate of bubbles increases significantly as the universe continues to cool. The phase transition begins when the probability of nucleating a supercritical bubble within one Hubble volume becomes approximately one, which gives the definition of T_n:
∫^+∞_T_n d T/TΓ(T)/H(T)^4 = 𝒪(1),
where H(T) = √(8π G ρ/3) is the Hubble parameter, G is the gravitational constant, and ρ is the energy density of the universe <cit.>.
From this definition, we can get an approximate formula for T_n
S ≈ 4 log(M_Pl/T) ≈ 130 ∼ 140.
With the temperature further decreasing, the nucleated bubbles of true vacuum keep growing and occupy nearly 30% of the space when the percolation temperature T_p is reached. This percentage is determined by the formation of a cluster of connected bubbles whose size is of the order of that of the medium, i.e., the bubbles are colliding <cit.>. Therefore, T_p is crucial for the stochastic gravitational wave background produced from bubble collisions.
The calculation of T_p involves approximating the fraction of false vacuum <cit.>,
h(t) = exp[-∫^t _t_ initialΓ(t')V(t',t)dt'],
where v_w is the bubble velocity and
V(t',t) = g [∫ ^t _t' v_w(τ) dτ]^3.
For a spherical bubble, the shape constant g is equal to 4π/3.
In general, the fraction of false vacuum undergoes a significant change around the percolation temperature T_p. Therefore, the accuracy of the computational results relies on the stability of the action calculation. Nonetheless, in the case of the xSM, the stability of the action calculation is not satisfactory in <cit.>. Thus, we repeat the calculation of T_p for each sample, find the interval that includes the percolation temperature, and ensure that the length of the interval is small enough to safely consider the average value as the percolation temperature.
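The chain from the action S(T) to T_p can be made concrete with a short numerical sketch. In the Python snippet below, the action curve is a toy stand-in for the bounce action of a given parameter point, radiation domination is assumed for H(T), v_w = 1 and A = T_c^4 follow the choices made above, and percolation is identified with a false vacuum fraction of about 0.7.

import numpy as np

M_PL = 1.22e19         # Planck mass [GeV]
G_STAR = 100.0         # relativistic degrees of freedom
V_W = 1.0              # bubble-wall velocity
T_C = 100.0            # critical temperature [GeV], assumed

def hubble(T):
    # Radiation-dominated Hubble rate: H = sqrt(8 pi rho / 3) / M_Pl with rho = g* pi^2/30 T^4.
    return np.sqrt(8.0 * np.pi / 3.0 * G_STAR * np.pi**2 / 30.0) * T**2 / M_PL

def action(T):
    # Toy stand-in for the bounce action S(T); in the actual analysis this is computed per point.
    return 138.0 + 0.05 * (T - 55.0)**2

def gamma_nuc(T):
    # Nucleation rate per unit volume, Gamma ~ A exp(-S), with A = T_c^4 as above.
    return T_C**4 * np.exp(-action(T))

def false_vacuum_fraction(T, n=4000):
    # h(T) = exp(-I(T)), the time integral over nucleation histories rewritten in temperature.
    Tp = np.linspace(T, T_C, n)
    inv_H = 1.0 / hubble(Tp)
    # comoving radius factor of a bubble nucleated at T': integral_T^{T'} dT''/H(T'')
    radius = np.concatenate(([0.0], np.cumsum(0.5 * (inv_H[1:] + inv_H[:-1]) * np.diff(Tp))))
    integrand = gamma_nuc(Tp) / (Tp**4 * hubble(Tp)) * radius**3
    return np.exp(-4.0 * np.pi / 3.0 * V_W**3 * np.trapz(integrand, Tp))

T_grid = np.linspace(1.0, 95.0, 400)
h = np.array([false_vacuum_fraction(T) for T in T_grid])
T_p = T_grid[np.argmin(np.abs(h - 0.7))]        # ~30% of space converted to the true vacuum
print(f"toy percolation temperature: T_p ~ {T_p:.1f} GeV")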
In a fast phase transition, these three temperatures are closely aligned with one another. However, in a supercooled transition, they become noticeably separate, resulting in an enlargement of the energy gap as the transition progresses.
§ RESULTS AND DISCUSSION
In Fig. <ref> we present the nucleation temperature and the percolation temperature for a set of benchmark points in the xSM (m_s=234 GeV, λ_s=0.2, λ_hs≃ 1.96). These particular points are selected near the line where the two phases become degenerate at zero temperature, satisfying <cit.>
1/2λ_hs v_ EW^2 - μ_h^2√(λ_s/λ_h) = m_s^2.
On this line, there will be no EWPT at all. Therefore, by tuning away from this point, it is possible to achieve an EWPT at a very low temperature. We vary λ_hs, as this mixing parameter governs the potential barrier between the two minima, using <cit.>.
We can see that both T_n and T_p decrease as λ_hs increases, because a larger λ_hs leads to a smaller energy gap between the two minima. The curve of T_n ends at 84 GeV with λ_hs = 1.961. The reason for such an end can be seen from the left panel of Fig. <ref>, or Figure 4 in <cit.> and Figure 2 in <cit.>. For a given point, as the temperature decreases, the action S initially decreases, but it may start to increase before reaching approximately 140 due to the temperature appearing in the denominator of Eq. <ref>. The lower bound for T_n in the xSM is approximately 44 GeV <cit.>. In <cit.>, a much lower T_n can be achieved in the super fine-tuned region where the high-temperature symmetric minimum has non-zero v_h^ high. Here, we select a benchmark point with T_n>84 GeV to demonstrate that it is still possible to find T_p around 1 GeV even without a low T_n.
Before the nucleation temperature disappears, the difference between T_n and T_p is relatively small, around 5 GeV. Then, T_p decreases dramatically and continuously from 80 GeV to zero. This means that, without considering any other constraints, we can obtain any desired value of T_p by finely tuning λ_hs at a level below 1‰. Of course, T_p should be at least larger than 1 MeV to satisfy nucleosynthesis constraints. Additionally, <cit.> argues that a transition with T_p≤ 1 GeV may not complete, as the false vacuum fraction decreases so slowly that it is overcome by the expansion of the universe. Thus, we study the evolution of the false vacuum fraction for a point with T_p>1 GeV and a point with T_p ≪ 1 GeV.
Fig. <ref> illustrates the action and the false vacuum fraction versus temperature for the benchmark points with λ_hs=1.9694 and λ_hs=1.9751. These values correspond to T_p ≈ 1.48 GeV and T_p ≈ 0.02 GeV, respectively. The action consistently remains above 140, indicating that there is no nucleation temperature.
In this situation, only a few bubbles can be generated. Such a low T_p indicates that the dominant phase transition mode is not thermal transition but quantum tunneling, which can be observed from the flat area in the left panel of Fig. <ref>. Typically, the dominant source of the stochastic GW background spectrum in an EWPT is sound waves <cit.>. However, in the case of extreme supercooling, most of the released energy is utilized to accelerate the bubble wall, making bubble collisions the dominant source. Additionally, it is safe to approximate the velocity of the bubble wall as 1 instead of solving the Boltzmann equation, which is the value used in the recent NANOGrav report <cit.>.
Using the results of <cit.>, the spectrum of GW generated through bubble collisions can be described as
Ω_colh^2 = 1.67× 10^-5(100/g*)^1/3(β/H_*)κ_p^2(α/α+2)^2
×0.11v^3/0.42+v^2 S(f),
where
S(f) = 3.8(f/f_0)^2.8/1+2.8(f/f_0)^3.8,
f_0 = 1.65 × 10^-7(T_*/ GeV)(g*/100)^1/6(β/H_*)
×0.62/1.8-0.1v+v^2 Hz,
κ_p = 1/1+0.715α(0.715α+4/27√(3α/2)),
with g* being the number of relativistic degrees of freedom, α being the ratio of the vacuum energy to the radiation energy, v being the bubble wall velocity and T_* being the reference temperature. A few remarks are in order:
* In this extreme supercooling case, α is so large that the ratios involving α in Eq. <ref> and Eq. <ref> can be regarded as one.
* The reference temperature is often set as the nucleation temperature or the percolation temperature. Recently, the study in <cit.> proposed that the nucleation temperature may not be suitable for this extreme case, and the percolation temperature can accurately reflect the phase transition process. Therefore, we use the percolation temperature as the reference temperature to avoid the dilemma of the non-existence of the nucleation temperature.
* The parameter β is often defined as the derivative of the thermal action: β/H_* = Td(S_3/T)/dT. However, in the case of supercooling, where the dominant action is the 4D action, β becomes zero, as shown in Fig. <ref>. Another way to calculate β is based on dimensional analysis <cit.>, where β∼ vR^-1∼ R^-1, with R being the characteristic length scale chosen as the radius of the bubble. Thus, β/H_* ∼ 1/(RH_*) ∼ V^1/3/R ∼𝒪(1), due to the fact that the entire universe is occupied by only a few bubbles. This estimation of β is consistent with the result of <cit.>. A lower bound of β/H_*>3 is introduced in <cit.> to prevent phase transitions from being incomplete or leading to eternal inflation. It assumes that β/H_* is of the same order as S_3/T at nucleation, while our scenario involves S≃ S_4 during the phase transition.
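Putting the expressions above together, the collision spectrum can be evaluated directly. The Python sketch below transcribes the formulas for Ω_col h^2, S(f), f_0 and κ_p literally as written here; note that the envelope-approximation literature usually quotes the amplitude with a factor (H_*/β)^2 and κα/(1+α), so the overall normalization should be cross-checked against the cited references. The inputs (T_* = T_p ≈ 1.48 GeV, β/H_* = 1, large α, v = 1) are the illustrative choices discussed in the remarks.

import numpy as np

def kappa_p(alpha):
    # Efficiency factor as given above
    return (0.715 * alpha + (4.0 / 27.0) * np.sqrt(1.5 * alpha)) / (1.0 + 0.715 * alpha)

def f_peak(T_star, beta_over_H, g_star=100.0, v=1.0):
    # Red-shifted peak frequency in Hz (T_star in GeV)
    return (1.65e-7 * T_star * (g_star / 100.0)**(1.0 / 6.0) * beta_over_H
            * 0.62 / (1.8 - 0.1 * v + v**2))

def spectral_shape(f, f0):
    return 3.8 * (f / f0)**2.8 / (1.0 + 2.8 * (f / f0)**3.8)

def omega_col_h2(f, alpha, beta_over_H, T_star, g_star=100.0, v=1.0):
    # Amplitude transcribed literally from the expression above; for alpha >> 1 the
    # alpha-dependent ratio approaches unity, as noted in the remarks.
    amp = (1.67e-5 * (100.0 / g_star)**(1.0 / 3.0) * beta_over_H
           * kappa_p(alpha)**2 * (alpha / (alpha + 2.0))**2
           * 0.11 * v**3 / (0.42 + v**2))
    return amp * spectral_shape(f, f_peak(T_star, beta_over_H, g_star, v))

f = np.logspace(-9.5, -6.5, 200)                    # ~0.3-300 nHz
gw = omega_col_h2(f, alpha=1e3, beta_over_H=1.0, T_star=1.48)
print(f"peak frequency ~ {f_peak(1.48, 1.0):.2e} Hz, peak amplitude ~ {gw.max():.2e}")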
The spectrum of the stochastic GW background generated by these collisions for λ_hs=1.9694 is shown by the blue curve in Fig. <ref>. The grey band represents the observations from 15 years of NANOGrav data <cit.>, while the dashed curves indicate the future detection capabilities from LISA <cit.> (purple), Taiji <cit.> (green), and TianQin <cit.> (red). We observe that the spectrum associated with T_p ≈ 1.48 GeV displays a peak frequency that coincides with the NANOGrav signals.
The results are consistent with the nano-Hertz background produced in a similar way by one-dimensional effective potentials <cit.>.
The energy released during the supercooled phase transition will heat up the surrounding plasma and cause a shift in the peak frequency of the stochastic GW background.
In <cit.>, the reheating temperature is estimated to be approximately at the energy scale of the new physics, assuming conservation of energy density during the reheating process (which is further discussed in the next section).
Meanwhile, <cit.> pointed out the entropy injection of phase transition could lead to a strong dilution of the GW signal.
Consequently, the peak frequency of the background will undergo a red-shift from the desired nano-Hertz range.
We illustrate this shift by the pink curve in Fig. <ref>, with a simplified estimate of T_ reh≈ 47 GeV, see below.
However, if the energy transferred to the plasma is too high, the bubbles will be slowed down by the reheated plasma and may not be able to occupy the initial 30% vacuum, leading to an ambiguous definition of the percolation temperature. Additionally, it is worth noting that the dominant source of GW might be a hybrid situation rather than solely the result of bubble collisions. In this study, we emphasize that the percolation temperature can be rather low, providing a prerequisite for explaining nano-Hertz GW. Further in-depth investigations are necessary to verify and understand this complex scenario.
§ IMPLICATIONS ON DARK MATTER
We previously found in <cit.> that the dilution effect caused by an electroweak FOPT is negligible for the current DM density in the xSM. This is because the freeze-out temperature T_f is always lower than the nucleation temperature, indicating that the strong FOPT typically occurs before the DM freeze-out. In the xSM, the freeze-out temperature can be approximated as T_f ≈ m_s/20, where m_s is required to be smaller than 1 TeV for a strong FOPT. Consequently, we have T_n > 50 GeV > T_f.
Nevertheless, in this unique situation of extreme supercooling, the phase transition completes when the temperature of the universe drops below T_p, which is at or below the GeV scale. This is significantly lower than the freeze-out temperature calculated using the traditional method. Thus, the dilution effect caused by the FOPT can be preserved and has an impact on the current DM relic density. This effect can potentially rescue parameter space that was previously excluded by DM direct detection experiments or by an excessive DM relic density. Note that the calculation of the dilution factor described in <cit.> may not be applicable in this case, as it assumes that the supercooling is not very strong. It also assumes that the energy density is conserved during the reheating, as in <cit.>, but allows a fraction f of the universe to be occupied by the true vacuum. Then, we have
ρ(ϕ_f, T') = ρ(ϕ_f, T_reh) - f [ρ(ϕ_f, T_reh)-ρ(ϕ_t, T_reh)]
= ρ(ϕ_f, T_reh) - fL.
We can determine the value of f once we know the reheating temperature, or vice versa. For the benchmark point with λ_hs = 1.9694, by setting f ≈ 0.3 and T' = T_p, we find that the corresponding reheating temperature is approximately 47 GeV, which is consistent with the results obtained in <cit.>.
On the other hand, we can calculate the true vacuum ratio f for the case when T' = T_p and T_reh=T_c, where the maximum reheating temperature corresponds to the critical temperature at which the universe is in the
phase-coexistence stage. This gives f ≈ 14 ≫ 1, which is clearly unphysical.
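Both directions of this balance can be made explicit under a simple approximation: if ρ(ϕ_f, T) is modeled as radiation plus a temperature-independent vacuum term, the vacuum term cancels between the two sides and only the latent heat L remains. In the Python sketch below, the latent heat and the critical temperature are illustrative placeholders, not the values of the benchmark point.

import numpy as np
from scipy.optimize import brentq

G_STAR = 100.0
def rho_rad(T):                       # radiation energy density in GeV^4
    return G_STAR * np.pi**2 / 30.0 * T**4

T_P = 1.48                            # percolation temperature [GeV]
LATENT = 150.0**4                     # assumed latent heat L [GeV^4], a placeholder

# rho(phi_f, T_p) = rho(phi_f, T_reh) - f * L  within the radiation approximation above
def reheat_temperature(f):
    return brentq(lambda T: rho_rad(T) - rho_rad(T_P) - f * LATENT, T_P, 1e4)

def vacuum_fraction(T_reh):
    return (rho_rad(T_reh) - rho_rad(T_P)) / LATENT

print(f"f = 0.3          ->  T_reh ~ {reheat_temperature(0.3):.0f} GeV")
print(f"T_reh = 100 GeV  ->  f ~ {vacuum_fraction(100.0):.1f}  (f > 1 signals the inconsistency)")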
Similar results can also arise even in cases where the phase transition is not extremely supercooled, as demonstrated in <cit.>.
It suggests that the assumption of energy density conservation may not hold in this scenario. It is necessary to consider more dynamic processes, as discussed in <cit.>, to better understand this situation. Additionally, the calculation of freeze-out temperature before EWSB is an unsolved problem. Therefore, a specific study is required to determine the dilution factor, and we leave this for future research.
§ CONCLUSION
In this work we investigated the phenomenon of extreme supercooling and the occurrence of a strong first-order electroweak phase transition in the singlet extension of the Standard Model. Our findings revealed that the percolation temperature can significantly and continuously decrease with increasing mixing coupling λ_hs. Consequently, by appropriately tuning the model parameters, we found that it is possible to achieve a percolation temperature of a few GeV.
We explored the implications of such a phase transition on the generation of a stochastic gravitational wave background resulted from bubble collisions at the percolation temperature. The observed signals from pulsar timing array collaborations could be reasonably explained by this gravitational wave background, disregarding possible red-shift effects.
If this extreme supercooling and strong first-order electroweak phase transition indeed characterizes the nature of our universe, it will have a profound impact on the dark matter properties. Further research in this direction is warranted to fully understand the implications and consequences of our findings.
§ ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China under grant numbers 2105248, 11821505, and 12075300, the Peng-Huan-Wu Theoretical Physics Innovation Center under grant number 12047503, the Key R&D Program of the Ministry of Science and Technology under grant number 2017YFA0402204, and the Key Research Program of the Chinese Academy of Sciences under grant number XDPB15.
|
http://arxiv.org/abs/2307.00911v3
|
20230703101515
|
A reactive neural network framework for water-loaded acidic zeolites
|
[
"Andreas Erlebach",
"Martin Šípka",
"Indranil Saha",
"Petr Nachtigall",
"Christopher J. Heard",
"Lukáš Grajciar"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
A reactive neural network framework for water-loaded acidic zeolites
Andreas Erlebach,^1∗ Martin Šípka,^1,2 Indranil Saha,^1
Petr Nachtigall,^1† Christopher J. Heard,^1 Lukáš Grajciar^1∗
^1Department of Physical and Macromolecular Chemistry, Faculty of Sciences,
Charles University, Hlavova 8, 128 43 Prague 2, Czech Republic
^2Mathematical Institute, Faculty of Mathematics and Physics,
Charles University, Sokolovská 83, 186 75 Prague, Czech Republic
^†Deceased, 28th December 2022
^∗To whom correspondence should be addressed; E-mail:
[email protected]; [email protected]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Under operating conditions, the dynamics of water and ions confined within protonic aluminosilicate zeolite (H-AS) micropores are responsible for many of their properties, including hydrothermal stability, acidity and catalytic activity. However, due to high computational cost, operando studies of H-AS are currently rare and limited to specific cases and simplified models. In this work, we have developed a general potential energy surface interpolator with consistent accuracy for the entire class of H-AS, including the full range of experimentally relevant water concentrations and Si/Al ratios, via a reactive neural network potential (NNP). This NNP combines dramatic sampling acceleration at the metaGGA reference level with the capacity for discovery of new chemistry, such as collective defect formation mechanisms at the zeolite surface. Furthermore, we show that the baseline model allows for data-efficient adoption of higher-level (hybrid) references via Δ-learning and the acceleration of rare event sampling via automatic construction of collective variables. This framework allows for operando simulations of realistic catalysts at quantitative accuracy.
§ INTRODUCTION
Zeolites are a class of microporous aluminosilicates with tremendous structural and chemical diversity, which originates from the myriad stable three-dimensional arrangements of covalently connected silica/alumina tetrahedra. This makes zeolites a versatile material class with applications ranging from thermal energy storage to gas separation and water purification, but predominantly in heterogeneous catalysis.<cit.> The presence of aluminium, and in particular the necessary charge compensation add another layer of complexity to the structural characterisation of these materials, but are crucial to the catalytic function of zeolites. For example, acidic zeolites, in the form of protonated (H-form) aluminosilicates (H-AS) are one of the cornerstones of industrial petrochemical processes.<cit.> Recently, great experimental and theoretical efforts have been made to go beyond the traditional applications of zeolites, for example in converting sustainable bio-feedstocks into chemicals. <cit.>
A further critical consideration for both existing and emerging applications is the interaction between H-AS and water. This relationship governs many features of H-AS including i) proton solvation, and thus acidity,<cit.> ii) hydrolytic bond dissociation and defect formation, which controls catalyst durability and activity,<cit.> iii) water mobility and clustering in zeolite pores<cit.> and iv) the synthesis of zeolites from precursor gels containing silica fragments, water and cations <cit.>. Owing to the microporous nature of zeolites, this interaction is not adequately viewed as a simple bulk-liquid interface, but rather a collection of complex binding, clustering, exchange and reaction steps between variously sized water clusters and an inhomogeneous surface that is complicated by topology-dependent confinement effects <cit.>. As a result, the proper mechanistic understanding of H-AS water-zeolite interactions is still lacking, limited to either static calculations at ultra high vacuum conditions <cit.> or exploratory dynamical (AIMD) simulations of narrow scope. <cit.> These investigations demonstrate the importance of capturing dynamics under operating conditions, being able to discover unexpected reaction mechanisms and defective species that hitherto eluded structural identification, but are not sufficiently economical for a global exploration of structural and reactive space.
A state-of-the-art tool for accelerating the reactive sampling in zeolite-water systems, and thus reaching experimentally relevant timescale or realistic levels of model complexity is the class of reactive analytical potentials, for example ReaxFF.<cit.> However, due to their fixed functional form, these potentials have limited transferability to systems with different chemical composition.<cit.> Therefore, they frequently require re-parameterization for a specific system for fine-tuning.<cit.> An emerging alternative to analytical force fields is represented by machine learning potentials (MLPs), which interpolate the potential energy surface (PES) at the level of an ab initio training set.<cit.>
Two paradigms dominate the MLP field currently: i) training of MLPs that cover large parts of the chemical space with dozens of elements, but with limited coverage of the configuration space (e.g., not considering all relevant chemical reactions with the associated transition states), as exemplified by OC22,<cit.> and ii) active learning procedures to accelerate simulations for a specific system <cit.>, which capture the details of the PES, including transition states, but have little or no transferability to systems with different chemical composition. Hence, MLPs that are able to simultaneously cover the broad chemical and configurational space needed for a class of materials such as H-AS zeolites, including the complexity of framework and water-framework based reactive transitions, are currently missing.
In this work, we developed reactive global neural network potentials (NNP) for an entire material class, namely, H-AS zeolites. These potentials capture the chemical space from dense silica and alumina polymorphs, through water-containing H-AS zeolites of all experimentally relevant Si/Al ratios, to bulk water and water gas-phase clusters. We observed excellent transferability among unseen framework topologies, with consistently high accuracy with respect to the training reference data. Generalization tests showed hitherto unseen chemical species and processes, including a collective hydrolysis mechanism at the surface of a zeolite nanosheet. Finally, we show that the learned representations of the NNP baseline models can be used for data-efficient learning of higher (hybrid) DFT level corrections (Δ-learning)<cit.> for specific use cases, in addition to developing machine learned collective variables for the acceleration of rare event sampling.<cit.>
§ RESULTS
§.§ Database generation and training of the general H-AS NNPs
One of the challenges in training general NNPs for a material class is creating an interpolation grid that captures relevant parts of the configuration and chemical space. The computational procedure employed in this work is summarized in Figure 1a. The bulk of the database is derived from 500 short (10 ps) ab initio molecular dynamics (AIMD) trajectories (at the PBE+D3(BJ) level)<cit.> using a set of H-AS zeolite models. This structure set contains 150 zeolites constructed using ten topologies (three existing and seven hypothetical) with varying Si/Al ratios and water loading at three temperatures ranging from 1200 K to 3600 K (see Supplementary Table 1) to sample both low- and high-energy parts of the potential energy surface (PES). In addition, nine AIMD runs were performed for bulk water at three densities (0.9-1.1 g cm^-3) and at three temperatures (300-900 K). All AIMD trajectories were then subsampled by Farthest Point Sampling (FPS)<cit.> using a metric based on the smooth overlap of atomic positions (SOAP)<cit.> kernel to obtain a collection of de-correlated and structurally distinct configurations. These structures were used for SCAN+D3(BJ)<cit.> single-point (SP) calculations to create the bulk of the training database. To systematically sample states close to the equilibrium structures, lattice deformations (see Supplementary methods) were applied to the optimized structures of the H-AS zeolite models mentioned above, and these structures were then used for SCAN+D3(BJ) SP calculations. Finally, to further diversify the database, the same lattice deformations were used for alumina and ice polymorphs as well as water clusters<cit.> (see Methods section). The resulting database was used to train (an ensemble of six) SchNet-based NNPs achieving average test root mean square errors (RMSE) of 5.3 meV atom^-1 and 186 meV Å^-1 for energies and forces, respectively (see Supplementary Table 2). These errors are similar to those of other reactive (rotationally invariant) MLPs<cit.>.
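The FPS step can be sketched in a few lines of Python. The snippet below assumes each AIMD frame has already been reduced to a fixed-length descriptor vector; a plain Euclidean metric on random stand-in vectors replaces the SOAP-kernel metric used in this work.

import numpy as np

def farthest_point_sampling(descriptors, n_select, seed=0):
    # Greedy FPS over per-frame descriptor vectors (n_frames x n_features array).
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(descriptors)))]          # random starting frame
    d_min = np.linalg.norm(descriptors - descriptors[selected[0]], axis=1)
    for _ in range(n_select - 1):
        nxt = int(np.argmax(d_min))                           # frame farthest from all picks so far
        selected.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(descriptors - descriptors[nxt], axis=1))
    return selected

# Example with random stand-in descriptors: 5000 MD frames, 128-dimensional vectors.
frames = np.random.default_rng(1).normal(size=(5000, 128))
picked = farthest_point_sampling(frames, n_select=200)
print(len(picked), "de-correlated frames selected for single-point DFT")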
Figure 1b provides a low-dimensional representation of the training database. It shows a t-distributed stochastic neighbor embedding (t-SNE)<cit.> plot of the averaged SchNet representation vectors (see Supplementary methods) to visualize the structural and chemical diversity of the training database, ranging from water-free alumina and silica systems through various water-loaded H-AS zeolites to bulk water and small clusters. The t-SNE components of the averaged SchNet representations change smoothly with the chemical composition of the structures as well as with their total energy (see Supplementary Figure 1). In addition, all generalization tests (see below) lie within the generated interpolation grid of the database and cover a wide range of application cases for zeolite modeling.
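The map in Figure 1b can be reproduced schematically as follows, assuming the per-atom SchNet representation vectors are available for every structure; random arrays stand in for them here.

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in per-atom representation arrays, one entry per structure (n_atoms x n_features).
reps = [rng.normal(size=(int(rng.integers(20, 60)), 128)) for _ in range(1000)]

avg_reps = np.array([r.mean(axis=0) for r in reps])          # one averaged vector per structure
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(avg_reps)
print(embedding.shape)                                        # (1000, 2) coordinates for the map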
Fig. 1 Training and testing of general H-AS NNPs. a Computational workflow for creation of the SCAN+D3(BJ) database and application of the trained NNPs to test their generality. The end-to-end learned representations are used for Δ-learning and construction of ML collective variables. b t-distributed stochastic neighbor embedding (t-SNE) plot of the average representation vectors of all configurations in the training database (color codes shown on the left). Generalization tests are highlighted in red. c Energy error distribution Δ E_r (see Eq 1) of the NNPs in comparison with ReaxFF <cit.>.
§.§ Generalization tests and exploration of configuration space
To properly test the generalization abilities of the trained NNPs, we employed a series of simulations for systems outside of the training domain, i.e., including: i) MD simulations at ambient conditions that probe the performance of NNPs for close-to-equilibrium structures and ii) high-temperature MD simulations (supplemented by nudged elastic band (NEB) transition path searches) to assess the NNP quality for highly activated, reactive events.
The systems considered sample the chemical and structural space of water-loaded H-AS zeolites (see Figure 1b) varying in water and aluminum content, as well as in the zeolite topology (FAU, GIS and MFI zeolite frameworks which are not seen during the training) and dimensionality of the H-AS systems (three-dimensional crystal, zeolite layer or a zeolitic molecular fragment interacting with bulk water). Further details about the performed MD test simulations including discussion of some chemically relevant observations are provided below in the "Sampling equilibrium properties" and "NNP robustness at high temperatures" sections below.
Here, we focus on the overall performance of our trained NNPs in these generalization tests. To compare the energies of all computational methods across the chemical H-AS space, we used the energies E_r of the hypothetical formation reaction:
x SiO_2 + y/2Al_2O_3 + z/2H_2O_(g)→Si_x Al_yH_z O_2x+1.5y+0.5z
with α-quartz, corundum (α-Al_2O_3), and a single water molecule in the gas phase as reference structures.
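In practice, E_r follows from total energies by subtracting the appropriate numbers of formula units of the reference phases. The short Python sketch below illustrates this bookkeeping; the reference energies are placeholders, not the computed SCAN+D3(BJ) values.

# Per-formula-unit reference energies of alpha-quartz, corundum and gas-phase water
# at the chosen level of theory; the numbers below are illustrative placeholders (eV).
E_REF = {"SiO2": -24.0, "Al2O3": -37.0, "H2O": -14.0}

def reaction_energy_per_atom(e_total, n_si, n_al, n_h):
    # E_r = E(Si_x Al_y H_z O_{2x+1.5y+0.5z}) - x E(SiO2) - y/2 E(Al2O3) - z/2 E(H2O)
    e_r = (e_total - n_si * E_REF["SiO2"] - 0.5 * n_al * E_REF["Al2O3"]
           - 0.5 * n_h * E_REF["H2O"])
    n_atoms = n_si + n_al + n_h + (2 * n_si + 1.5 * n_al + 0.5 * n_h)
    return e_r / n_atoms                                      # eV per atom

# Example: a hypothetical cell with 47 Si, 1 Al, 1 H (plus framework oxygens).
print(f"E_r = {reaction_energy_per_atom(-1175.0, 47, 1, 1):+.3f} eV/atom (illustrative numbers)")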
Adoption of E_r allows for benchmarking methods across a broader chemical space, as exemplified by Hautier et al. <cit.>. Alternatively, we also expressed the errors of relative energies Δ E with respect to a reference configuration for each model system, e.g., the initial structure of an MD trajectory (see Supplementary Table 3). Obviously, the force errors, as intensive properties, are independent of the reference and can be compared directly across the chemical space. Table 1 summarizes the RMSEs of energies and forces for all test cases (2700 structures in total) of the NNPs with respect to the SCAN+D3(BJ) reference, and Figure 1c shows the total energy error Δ E_r distribution (see Supplementary Figure 2 for Δ E_r and Δ E distributions for each system separately). Table 1 and Figure 1c also show the performance of the reactive analytical force field ReaxFF specialized for water-loaded H-AS systems.<cit.>
Table 1. Root mean square errors of reaction energies Δ E_r (see Eq 1) and forces for the test cases shown in Fig. 2 at the NNP and ReaxFF level.
The total NNP errors are similar to other state-of-the-art (rotationally invariant) MLPs<cit.> for the modeling of reactive events and outperform ReaxFF by more than an order of magnitude in both energy and forces. More importantly, the NNP calculated reaction energies E_r are consistent over the entire range of chemical H-AS compositions and configurations. Only in the case of GIS(T=3000K)+24H_2O, energy and force errors are about twice as high compared to the other test cases. Such higher errors were also obtained for MLPs when applied to simulations with a large number of reactive events at extreme temperatures. <cit.> To put these NNP errors in context, note that standard GGA-level DFT functionals show, for 27 formation reactions that involve silica, an RMSE of about 28 meV atom^-1 with respect to experiment.<cit.> Therefore, the NNPs safely retain the (meta)GGA-level DFT quality for the description of the water-loaded H-AS systems.
By contrast, ReaxFF shows, e.g., a relatively low energy RMSE for the high silica FAU tests (FAU(Si/Al=47)+nH_2O) but ten times higher energy errors for FAU(Si/Al=1). This difference arises from shifted Δ E_r energy error distributions for ReaxFF and, to a far lesser extent, for the NNPs (see Supplementary Figure 2). Such almost constant shifts are removed when taking a structure with the same chemical composition as the reference, that is, comparing relative energies Δ E with DFT. As an example, the relative energy RMSEs ΔΔ E of GIS(T=3000K)+24H_2O (NNP: 6 meV atom^-1; ReaxFF: 83 meV atom^-1) are about two times lower than Δ E_r (see Supplementary Table 3). In addition, the NNPs provide an order of magnitude higher force quality than ReaxFF across all generalization test cases. This comparison demonstrates the ability of the NNPs for general modeling of structure, properties and chemical reactivity across the class of H-AS materials.
§.§.§ Sampling of equilibrium properties
The dynamic behaviour of zeolite-confined water containing solvated protons is of high interest due to the (potential) application of zeolites in water purification <cit.>, heat storage<cit.> or reaction optimization under humid conditions (such as biomass conversion).<cit.> The ability to realistically model the (water-loaded) H-AS systems close to equilibrium is crucial for understanding many of their signature properties such as acidity, (water) adsorption and diffusion, or relative stability as a function of topology, water content and aluminum distribution and concentration.
The role of water loading and aluminum concentration was probed using equilibrium MD simulations of water-loaded zeolite with FAU (faujasite) topology - an industrially important zeolite topology unseen in the NNP training - under standard conditions (300 K). We considered model systems with the (theoretically) lowest and highest possible Si/Al ratios in a (primitive) FAU unit cell, namely, a single Brønsted acid site (BAS) with Si/Al=47 and Si/Al=1 according to Löwenstein's rule that prohibits the formation of Al–O–Al pairs. In the case of Si/Al=47 (FAU(Si/Al=47)+nH_2O), three water loadings n were tested, from single water through a water tetramer to full water loading of FAU with 48 water molecules (approximate water density of 1 g cm^-3). For Si/Al=1, full water loading with 48 molecules per FAU unit cell was chosen to focus on extensive sampling of BAS protonation and deprotonation events, a key reactive event characterizing these strong solid acids.
In the case of FAU (Si/Al=47) model, the single water molecule (n=1) remains adsorbed at the BAS throughout the 1 ns MD simulation, in line with the very strong interaction between BAS and water molecule amounting to -79 kJ mol^-1 calculated here and with adsorption energies reported in the literature (-70 to -80 kJ mol^-1 in CHA<cit.>). We quantified the degree of solvation by calculating the minimum distance of Al-O_FW-Si framework oxygens to all hydrogen atoms (see Supplementary Figure 3). The proton was considered solvated if it is closer to a water oxygen than to Al-O_FW-Si. Only very few solvated states (less than 3% of the trajectory) were observed during the 1 ns run, in line with previous (shorter) AIMD simulations. <cit.> However, the water tetramer (n=4) is already able to deprotonate the BAS, but similarly to single water, the tetramer stays close to the framework Al during the 1 ns MD trajectory (on average 3.1 Å between Al and the cluster center-of-mass, see Supplementary Figure 3). At full water loading (n=48), the proton rapidly leaves the BAS and stays solvated with an average distance of 7.3 Å (ranging from 3-10 Å) from the Al-O_FW-Si (see Supplementary Figure 3). However, the evaluation of the confined water dynamics herein is hindered by finite-size effects, due to a small primitive cell of FAU chosen primarily for benchmarking purposes. Hence, we refer the interested reader to our preliminary work<cit.> using the NNPs presented here on water diffusion in FAU using appropriately sized FAU unit cell (cubic cell with edge length of 25 Å) that is prohibitively large for carrying out routine DFT calculations.
A particularly challenging case is FAU with Si/Al=1. The MD trajectory contains several protonation and deprotonation events that cannot be described by analytical, non-reactive force fields. Also the ReaxFF parameterization used in this work shows considerably higher errors compared to FAU(Si/Al=47) (see Table 1 and Supplementary Figure 2). On the other hand, the errors of the NNP for this challenging case increase only mildly and are well below the test errors for the training database (see above).
To check the NNP generality further, we employed the "inverse" models to water-loaded zeolites, namely the fragments of zeolites (silicic acid Si(OH)_4 and aluminium hydroxide Al(OH)_3) solvated in bulk-like water (using a simulation box containing 96 water molecules). Such systems are relevant for modeling potential precursors of zeolite synthesis or products of zeolite degradation in hot liquid water (de-silication and de-alumination).<cit.> Since both processes take place under rather harsh hydrothermal conditions (temperature above 100 ^∘ C and pressure above 10 bars), the test simulations were carried out at 500 K. In both cases, the NNPs accurately reproduce the SCAN+D3(BJ) energies and forces. Similar to the previously discussed system FAU(Si/Al=1)+48H_2O, the NNP energy errors (4 meV atom^-1) are mainly connected to the offset of E_r (see Supplementary Figure 2). These results suggest that NNPs enable us to run reliable large-scale equilibrium MD simulations across the chemical and configuration space of H-AS and water with the (meta)GGA level of accuracy.
§.§.§ NNP performance for highly activated reactive events
Modeling of chemical reactions at the H-AS-water interface needs a robust interpolation of the relevant transition states. However, MLPs are expected to have only limited capability to reliably describe configurations and energetics in extrapolated or sparsely interpolated regions of the potential energy surface,<cit.> which often coincide with the high energy transition state configurations. Therefore, we tested the trained NNPs by performing MD simulations at very high temperatures (at 1600 and 3000 K) for an unbiased assessment of the NNP quality and robustness for modelling reactive processes. We chose two systems that were not part of the training dataset: an interface model of a siliceous MFI slab in interaction with bulk-like water, and water-loaded GIS with Si/Al=1. In addition, we tested the accuracy of the NNPs using static nudged elastic band (NEB) calculations, which are used to locate specific transition pathways, for four reactions relevant for H-AS zeolite-based catalysts (see below).
Fig. 2 Surface defect creation in an 2D-MFI nanosheet. a Snapshot of 2D-MFI from an exploratory 1 ns MD run at 1600 K to sample reactive events (Si: yellow, O: red, H: white). b-e Reaction steps of silanol defect creation at the 2D-MFI-water interface: b water adsorption on a surface Si and water autoprotolysis; c proton transfer from the adsorbed water to the hydroxide ion; d migration of the hydronium ion and adsorption on a framework oxygen; e Si–O bond hydrolysis, creating a silanol defect in axial position to the formed surface silanol. f NNP and ReaxFF energy error distribution of 100 snapshots comprising the reaction steps b-e.
The first generalization test was a model for the external interface of siliceous MFI with bulk water. This zeolite model resembles 2D MFI nanosheets which were successfully prepared by exfoliation of a multilamellar MFI zeolite.<cit.> To sample reactive events at the external zeolite surface, we performed an exploratory MD run at 1600 K for 1 ns. No extrapolation was detected using the ensemble of NNPs. As expected, the significantly increased temperature raises the probability of highly activated reactive events, and we do observe a relevant chemical reaction taking place over the course of 5 ps, in which a silanol defect is created at the external MFI surface (see Figure 2a). In our previous work,<cit.> a similar reactive event was found to be characterized by a free energy of activation amounting to approx. 80 kJ mol^-1 (at 450 K). The observed reaction starts at the intersection of the bulk water phase with the MFI main channel (along the crystallographic b-direction). Firstly, a water molecule adsorbs at a surface Si site (Figure 2b). The autoprotolysis of a nearby water molecule leads to the transfer of a proton from the adsorbed water molecule to the formed hydroxide ion, creating an additional surface silanol group at the five-fold coordinated Si atom (Figure 2c). The remaining hydronium ion shuttles the excess proton together with the surrounding water molecules to a framework oxygen bound to the five-fold coordinated Si (Figure 2d). This process finally leads to the cleavage of the Si–O(H) bond, creating a silanol defect in axial position to the previously formed surface silanol (Figure 2e). Our exploratory MD simulation revealed a feasible reaction mechanism for silanol defect creation at the external zeolite surface involving the autoprotolysis of water, which would be challenging to find by biased dynamics simulations with human-designed CVs. These findings are in line with previous experimental studies on the hydrolysis of MFI zeolites in hot liquid water, which suggest that zeolite degradation predominantly starts at the external surface and that water autoprotolysis and silanol groups play a crucial role in it.<cit.> To confirm that the defect creation process observed at the NNP level is reliable, we performed SCAN+D3(BJ) SP calculations using 100 snapshots comprising the reaction steps depicted in Figures 2b-2e. The NNPs proved very accurate for this test case with an energy RMSE of 1.4 meV atom^-1 (see Table 1), in contrast to the employed ReaxFF parameterization with a rather broad error distribution (see Figure 2f) and an RMSE of 38 meV atom^-1.
The second particularly challenging generalization test was a GIS zeolite model (Si/Al=1) loaded with water (24 molecules), which was melted at 3000 K for 2 ns to sample multiple highly activated reactive events taking place simultaneously (see also Supplementary Figure 4). We obtained a stable MD trajectory of the liquid H-AS state with thousands of bond-breaking events (e.g., Si–O and Al–O bond hydrolysis, aluminol and silanol formation) over the entire simulation time, without detecting extrapolation using the trained ensemble of NNPs. However, this test case shows an energy RMSE about a factor of two higher than the other test cases (see Table 1). Similar trends of increased energy errors have also been observed for other MLPs applied to high-temperature MD runs of the liquid state of strongly (covalently) bound materials (see e.g. Refs <cit.>). Even though the NNP accuracy mildly deteriorates at these extremely high temperatures, the NNPs proved robust in these simulations of a large variety of highly activated reactive events.
Lastly, we tested the accuracy of the trained NNPs on specific elemental reactions in water-loaded H-AS zeolites with well-known transition states: a proton jump with and without water,<cit.> and water-assisted bond breaking mechanisms of the Si–O and Al–O(H) bonds<cit.> (see Supplementary Figure 5 and Supplementary Table 4 as well as Figure 3). All NEB calculations were performed for FAU, an industrially relevant zeolite topology that was not part of the training dataset. To quantify the NNP error, we used SCAN+D3(BJ) SP calculations on all generated NEB images. On average, the relative NNP energies only slightly deviate from their DFT reference with an RMSE of about 6 kJ mol^-1. Such small errors for activation barriers can be considered to lie within DFT accuracy.<cit.>
The generalization tests presented above demonstrate that the trained NNPs are robust and general interpolators for simulations across the chemical and configuration space spanned by water-loaded H-AS zeolites, and that they retain the reference-level (SCAN+D3(BJ)) DFT quality not only for close-to-equilibrium simulations but also for highly activated reactive events.
§.§ Extensions of the NNP model
Obtaining a general NNP model that describes water-loaded H-AS zeolitic systems with (meta-)GGA DFT quality at a speedup of several orders of magnitude is clearly beneficial. However, with such a robust baseline NNP model available, it is possible to construct extensions that can improve either the accuracy of the description or the efficiency of sampling of the reactive events of interest.
§.§.§ Improving the baseline model using Δ-learning
To improve the accuracy beyond the benchmark level, one can employ the well-known Δ-learning concept.<cit.> In this way, one can train a correction model on top of the baseline model, using a computationally more demanding but more accurate level of theory for a small subset of datapoints that covers the system of interest. We demonstrate the applicability of the concept for two model reactions discussed above - a proton jump and the water-assisted scission of Al–O(H)–Si bonds<cit.> (see Figures 3c and 3d). For the higher-level reference, we chose the range-separated hybrid DFT functional ωB97X complemented with the empirical dispersion correction D3(BJ): a functional that shows considerably better performance for water cluster binding energies and reaction barriers<cit.> than our baseline reference SCAN+D3(BJ) functional. First, we generated a small ωB97X-D3(BJ) database containing 500 structures taken from the biased (NNP level) MD runs of a H-jump in water-free CHA (between O2 and O3, see Methods section) taken from Ref. <cit.>. Next, we trained a correction (ΔNNP) to the atomic energies of the NNP baseline model by using a simple linear regression (see Methods section). It turned out that 150 training structures were sufficient to reach (test set) RMSEs of 1.3 meV atom^-1 and 69 meV Å^-1 for energy and forces, respectively (see Supplementary Figure 6).
We then tested whether the trained ΔNNP model is capable of modeling a proton jump and the Al–O(H) bond dissociation mechanism in a zeolite topology different from those included in the training set. Again, we chose FAU with a single Al defect (Si/Al=47) as a test case. Figures 3a and 3b show the results of the NNP level NEB calculations along with the corresponding DFT energies. Figures 3c and 3d depict the structures for both reaction paths, and Table 2 also compares the relative energies of the proton jump in CHA (O2-O3) and FAU (O1-O4). Not surprisingly, both the general baseline and the ΔNNP model are in very good agreement with their DFT reference in the case of the proton jump in CHA, with less than 3 kJ mol^-1 deviation. The ΔNNP accuracy of the relative energies only slightly deteriorates (about 5 kJ mol^-1 error) when applied to FAU. When comparing the reaction energies Δ E_r (see Eq 1), the ΔNNP model shows an almost constant shift relative to its DFT reference (see Supplementary Table 5). This offset shows that the ΔNNP model is capable of describing local atomic environments (e.g., of BAS) but with less transferability to different compositions and zeolite frameworks than the baseline NNP model.
Fig. 3 Reaction path modeling using Δ-learning and ML collective variables. Reaction paths of a proton jump (a, c, e) and an Al–O(H) bond dissociation (b, d, f) in FAU. a, b Static NNP simulations and corresponding DFT energies for the NNP baseline level (SCAN+D3(BJ)) and the ΔNNP level (ωB97X-D3(BJ)) trained on 150 structures (taken from Ref <cit.>) of a proton jump in CHA. c, d Atomic structures along the reaction path (Si: yellow, Al: grey, O: red, H: white). e, f Estimated free energy profiles using ML collective variables.
To check how the ΔNNP model performs for reactions different from but related to proton jumps, we repeated the NEB calculations at the ΔNNP level for the Al–O(H) bond dissociation (Al–O2 bond cleavage, see Methods section) mechanism suggested by Silaghi et al.<cit.> (Figures 3b and 3d). The ΔNNP model reproduces the ωB97X-D3(BJ) relative energies Δ E of the reaction path with near-DFT accuracy, albeit with somewhat higher energy errors (less than 8 kJ mol^-1) compared to the CHA proton jump and the NNP baseline model (see Table 3).
Table 2. Relative energies Δ E [kJ mol^-1] of the proton jump in CHA and FAU at the (Δ)NNP and DFT level.
To calculate ΔNNP energy and force errors, we performed single-point calculations for the biased dynamics runs of the reactions (in FAU) shown in Figure 3 (see Methods section). Similar to the NEB calculations, low RMSEs for the relative energies Δ E of less than 2 meV atom^-1 were obtained (see Supplementary Table 6). However, the reaction energies Δ E_r (see Eq 1) show an almost constant offset (see Supplementary Figure 7), as observed for the baseline NNP and ReaxFF (see Supplementary Figure 2), leading to larger RMSEs of up to 18 meV atom^-1. This offset shows that the ΔNNP model does not retain the generality of the NNP baseline model across different zeolite topologies. However, the low Δ E errors indicate that the ΔNNP model correctly interpolates the local environments of the BAS in a low water (or water-free) regime.
In addition, the ΔNNP model shows low force errors of around 100 meV Å^-1 for both reactions (see Supplementary Table 6), which is comparable to the accuracy of the NNP baseline model (see Table 1). This finding supports the conclusion that the ΔNNP model is capable of modeling the local atomic environments in FAU even when trained on only 150 structures of a different zeolite (CHA). For comparison, Bocus et al.<cit.> trained specialized SchNet NNPs on biased dynamics trajectories of proton jumps in CHA using more than 100k training points. When applying these NNPs to different zeolite topologies, they obtained force errors up to twice as high (around 200-250 meV Å^-1). These results indicate that the learned representation vectors of the more general NNP baseline model contain information on the H-AS PES, allowing a data-efficient Δ-learning of higher-level corrections. Therefore, the ΔNNP exhibits fairly good extrapolation robustness even when applied to similar reaction pathways in other zeolite topologies.
Table 3. Relative energies Δ E [kJ mol^-1] of the Al-O(H) bond dissociation in FAU at the (Δ)NNP and DFT level.
§.§.§ Accelerating rare event sampling using baseline model representations
In the previous section, we tested the Δ-learned model for accurate (hybrid DFT) modeling using static calculations of known reaction mechanisms. However, prior investigations have shown the unforeseen and highly collective nature of water-involved reaction mechanisms as well as the sizable role of temperature effects.<cit.> Both imply the need for a tool that can effectively discover and sample transition pathways. For effective sampling of activated reactive events, which are rare even on the timescales accessible to NNP-accelerated simulations, one typically adopts biasing along a low-dimensional representation of the reactive process, i.e., along the reaction coordinates or collective variables (CVs). However, good CVs can be difficult to construct in case of unknown, possibly complex, reaction pathways.
Our recent work <cit.> shows how the end-to-end learned atomic representations of our baseline NNP model can be used to automatically generate robust machine-learned CVs (ML-CVs).
In this approach, the structures of the reactant, product, and perhaps also tentative transition states are first represented using the atomic representations of the herein-trained baseline NNP model. Next, these representations are used as an input for the dimensionality reduction model (variational autoencoder), which generates a low-dimensional (typically one- or two-dimensional)
latent space from which the model attempts to reconstruct the input representation vectors as precisely as possible. As a result, the latent low-dimensional space effectively distinguishes products from reactants, i.e., it represents the reactive coordinate, or collective variable.
We showed previously that learned ML-CVs coupled with the baseline NNP enable efficient sampling of the free energy surface for a proton jump and Si-O bond hydrolysis in CHA zeolite.<cit.> Here, we test this procedure using the aforementioned proton jump and Al-O2(H) bond dissociation mechanism in FAU with Si/Al=47, which, in contrast to CHA, is outside of the NNP training domain (see Methods section). Figures 3e and 3f show the estimated free energy profiles calculated with ML-CVs using well-tempered metadynamics<cit.> simulations (see Supplementary Information and Methods). The free energy barrier of the proton jump in FAU (approx. 110±10 kJ mol^-1 at 300 K) is somewhat higher than in the static calculations (84 kJ mol^-1, see Table 2). This is in line with previous calculations<cit.> which showed increasing reaction barriers with temperature (up to 20 kJ mol^-1 from 0 K to room temperature). In the case of the Al-O(H) bond dissociation, the activation free energy is about 80 kJ mol^-1, similar to the barrier found by the NEB simulations (see Table 3). Hence, with the baseline NNP model, one can not only accelerate the evaluation of the energies (and forces) necessary for sampling water-loaded H-AS systems but also use it to automatically generate ML-CVs that accelerate the sampling of a particular reactive process.
§.§ Conclusions
In this work, we developed a neural network potential (NNP) for the entire class of protonic aluminosilicate (H-AS) zeolites, which are one of the cornerstones of existing petrochemical processes<cit.> as well as one of the main candidates for emerging applications of zeolites in sustainable chemistry.<cit.> Our NNP provides a general approximation of the potential energy surface of the H-AS zeolites, including reactive interactions with water, capturing both close-to-equilibrium structures and high-energy bond-breaking scenarios. The ability to cover large portions of both the chemical and configurational space of this material class was demonstrated using multiple generalization tests that ranged from zeolite surfaces varying in water and aluminum content to zeolite fragments solvated in bulk-like water and a high-temperature melt of the aluminosilicate zeolite GIS. These tests confirm the outstanding transferability of the NNPs, which are able to maintain consistent accuracy, close to the reference meta-GGA DFT level, across the entire H-AS material class, outperforming state-of-the-art analytical reactive force fields for water-loaded H-AS zeolites<cit.> by at least one order of magnitude. Moreover, in some of these tests we observed hitherto unseen chemical processes and species, which confirms the capability of the NNP for exploration and discovery of novel reactive pathways, in addition to acceleration of configuration space sampling.
Furthermore, we showed that the NNP can be used as a robust baseline model that allows for extensions including: i) data-efficient adoption of higher-level (range-separated hybrid DFT) description via Δ-learning <cit.> and ii) acceleration of reactive event sampling using automatic construction of collective variables, via end-to-end learned atomic representations <cit.>. Hence, the baseline model with its extensions constitutes a broader ML-based framework within which one can simulate H-AS materials in a comprehensive bias-free fashion with tunable accuracy.
We expect that the tools developed in this work will enable large-scale simulations of H-AS zeolites to tackle long-lasting challenges in the field, ranging from understanding the mechanistic underpinnings of zeolite hydrothermal (in)stability to the determination of the character of active species and defects under operating conditions.
§ METHODS
§.§ Dataset generation
Covering the chemical and configuration space of H-AS zeolites requires a structurally distinct set of zeolite frameworks with different water loadings and Si/Al ratios. In our previous publication,<cit.> we used Farthest Point Sampling<cit.> together with the smooth overlap of atomic positions<cit.> descriptor (SOAP-FPS) to find a subset of siliceous zeolites that optimally covers the structural diversity of existing and more than 300k hypothetical zeolites. From this subset, ten zeolites were selected, three existing (CHA, SOD, MVY) and seven hypothetical zeolite frameworks (see Supplementary methods). These frameworks were used to construct 150 initial structures combining four water loadings (from 0 to ∼1.1 g cm^-3) with three Si/Al ratios between ∼1-32 (in protonic form) and water-loaded purely siliceous zeolites (see Supplementary Table 1). We also added a two-dimensional silica bilayer (12 Å vacuum layer) used in ref. <cit.> with three different water loadings to the initial structure set. All 153 initial configurations were then optimized under zero pressure conditions.
Next, the entire structure set was equilibrated for 10 ps at 1200, 2400, and 3600 K using ab initio MD (AIMD) simulations to sample reactive events at higher energies. Sampling of the low-energy parts of the potential energy surface (PES) used 210 unit cell deformations applied to all optimized structures (see Supplementary methods). Apart from microporous structures and two-dimensional H-AS, we also added the same set of 210 lattice deformations for six dense alumina and aluminosilicate polymorphs, namely, four alumina polymorphs α-Al_2O_3 (corundum),<cit.> θ-Al_2O_3,<cit.> γ-AlO(OH) (Boehmite),<cit.> and α-Al(OH)_3 (Gibbsite),<cit.> as well as two aluminosilicate polymorphs, Si_3Al_2O_12H_3 (H_3O-Natrolite)<cit.> and Al_2Si_2O_5(OH)_4 (Dickite).<cit.> Additionally, we subsampled (via SOAP-FPS) AIMD trajectories of zeolite CHA taken from previous publications<cit.> to further extend the structure database. These trajectories are equilibrium MD runs of non-Löwenstein pairs (Al-O-Al) with various water loadings (0, 1, 15 water molecules)<cit.> and biased AIMD runs of Si-O(H) and Al-O(H) bond cleavage mechanisms.<cit.>
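For reference, farthest point sampling over per-structure descriptor vectors can be sketched in a few lines; the snippet below is a minimal illustration (assuming precomputed, per-structure averaged SOAP vectors), not the production pipeline used here.

import numpy as np

def farthest_point_sampling(descriptors, n_select, seed=0):
    """Greedy FPS: iteratively pick the structure farthest (in Euclidean
    descriptor distance) from the set of already selected structures."""
    rng = np.random.default_rng(seed)
    n = descriptors.shape[0]
    selected = [int(rng.integers(n))]                 # random starting structure
    # distance of every structure to the closest selected one
    d_min = np.linalg.norm(descriptors - descriptors[selected[0]], axis=1)
    while len(selected) < n_select:
        idx = int(np.argmax(d_min))                   # farthest remaining structure
        selected.append(idx)
        d_new = np.linalg.norm(descriptors - descriptors[idx], axis=1)
        d_min = np.minimum(d_min, d_new)              # update nearest-selected distances
    return selected

# Usage (illustrative): 'soap' is an (n_structures, n_features) array of
# averaged SOAP vectors; picked = farthest_point_sampling(soap, n_select=500)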
For interpolation of the interactions in pure water, we performed AIMD simulations (10 ps) for bulk water with 64 water molecules at three densities (0.9, 1.0, 1.1 g cm^-3) and at three temperatures (300, 600, 900 K). In addition, we used a single water molecule and water clusters in vacuo taken from the BEGDB database<cit.> (38 isomers from (H_2O)_2 to (H_2O)_10, available under: begdb.org) and four isomers of (H_2O)_20.<cit.> All clusters were first optimized (constant volume conditions) with a unit cell ensuring a distance between equivalent periodic images of at least 1 nm. Then the aforementioned 210 lattice deformations were applied to all optimized clusters. Finally, the unit cells of two ice polymorphs (Ice II<cit.> and Ice I_h<cit.>) were deformed in the same way for sampling of low-energy structures of crystalline water.
All AIMD simulations and structure optimizations were performed at the computationally less demanding PBE+D3(BJ)<cit.> level, employing the dispersion correction of Grimme et al. (D3)<cit.> along with Becke-Johnson (BJ)<cit.> damping. The AIMD equilibration used the canonical (NVT) ensemble along with a 1 fs time step and the Nosé–Hoover thermostat,<cit.> with hydrogen being replaced by tritium. Structurally diverse configurations were extracted from the MD trajectories using SOAP-FPS (see Supplementary methods). These decorrelated MD structures were used, together with the generated set of lattice deformations, for single-point (SP) calculations at the (meta-GGA) SCAN+D3(BJ) level.<cit.> The resulting SCAN+D3(BJ) reference dataset contained 248 439 structures.
An ensemble of six SchNet<cit.> NNPs was trained on the final SCAN+D3(BJ) database. The six independent training runs used different, randomly split parts of the DFT dataset with approximately 80% of the datapoints as training set and 10% as validation and test set, respectively. We used the same SchNet hyperparameters (6 Å cutoff, 6 interaction blocks, 128 feature vector elements, 60 Gaussians for distance expansion) and loss function for training of energies and forces (trade-off 0.01) as in our previous publication.<cit.> Minimization of the loss function used mini-batch gradient descent along with the ADAM optimizer<cit.> and four structures per batch. If the loss function for the validation set did not decrease in three subsequent epochs, the learning rate was lowered by a factor of 0.75 (from 10^-4 down to a minimum of 3·10^-6).
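As an illustration of the loss and learning-rate schedule described above, the following PyTorch-style sketch combines energy and force errors with the quoted trade-off and applies a plateau-based decay by a factor of 0.75; variable names are placeholders and the actual SchNetPack training code differs in detail.

import torch

RHO = 0.01  # energy-force trade-off used in training

def energy_force_loss(pred_E, pred_F, ref_E, ref_F):
    """Weighted MSE over total energies and atomic force components."""
    loss_E = torch.mean((pred_E - ref_E) ** 2)
    loss_F = torch.mean((pred_F - ref_F) ** 2)
    return RHO * loss_E + (1.0 - RHO) * loss_F

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Plateau schedule: decay the learning rate by 0.75 if the validation loss
# has not improved for three epochs, down to a minimum of 3e-6.
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
#     optimizer, factor=0.75, patience=3, min_lr=3e-6)
# for epoch in range(max_epochs):
#     train_one_epoch(...)          # mini-batches of four structures
#     val_loss = validate(...)
#     scheduler.step(val_loss)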
§.§ Generalization tests and reaction path searches
Testing of the NNP accuracy, robustness and generality used a series of MD and NEB calculations of systems that were not included in the training database. To test the NNP performance at close-to-equilibrium (low temperature) conditions, we performed four MD runs (1 ns, 300 K) for zeolite FAU (primitive unit cell, 48 T-sites) at different chemical compositions. Three of the MD runs used a single Al atom (and BAS) per unit cell (Si/Al=47) and three water loadings (1, 4, 48 water molecules). The fourth run was performed with 24 Al per unit cell (Si/Al=1) and 48 water molecules. From every MD trajectory, 500 structures were selected for subsequent SCAN+D3(BJ) and ReaxFF SP calculations. As an "inverse" test case to three-dimensional zeolites, we chose silicic acid Si(OH)_4 and aluminum hydroxide Al(OH)_3 solvated in bulk water (96 water molecules). Both systems were equilibrated at hydrothermal conditions (500 K) for 1 ns. Subsequently, two hundred configurations were selected from both MD runs for accuracy evaluation.
To check the NNP performance and robustness for the sampling of reactive events, we first constructed a model of the external MFI-water interface. The starting point was an orthorhombic (96 T-site) MFI unit cell with one silanol nest at T-site T9, used for exploratory, high-temperature MD runs to sample reactive events at the internal and external MFI-water interface. Next, an MFI(010) surface model was created by adding a 12 Å vacuum layer and cleaving the Si–O bonds between the T-sites T7, T9, T10, and T12 (lattice plane with the lowest number of bridging O), yielding eight silanol groups on both surfaces. After addition of 165 water molecules, the model was equilibrated for 2 ns at 1600 K. One hundred structures that include the surface defect creation shown in Figure 2 were selected from the trajectory for SP calculations. As an extreme case to test the NNP robustness at very high temperatures, we simulated the liquid state of an H-AS-water model system at 3000 K. The initial configuration was a model of GIS (32 T-site unit cell) with Si/Al = 1 and 24 water molecules, which was equilibrated for 2 ns. Finally, two hundred structures were extracted from the MD run for the NNP and ReaxFF error evaluation.
All MD simulations used a time step of 0.5 fs with hydrogen being replaced by deuterium, employing the Nosé–Hoover thermostat.<cit.> The final generalization test set collected from all trajectories contains 2700 configurations for SP calculations at the SCAN+D3(BJ) and ReaxFF<cit.> level allowing the energy and force error evaluation shown in Table 1 and Figure 1c.
Lastly, we conducted NNP performance tests for the modeling of reaction pathways in FAU (primitive unit cell, Si/Al=47) using NNP-level climbing-image NEB calculations. We chose four reaction pathways: i) a proton transfer with one water molecule and without water (between O1-O4),<cit.> and ii) water-assisted bond breaking mechanisms of the Si–O2 and Al–O2(H) bonds<cit.> (see Figure 3 and Supplementary Figure 5). In addition, we tested a water-free proton transfer in CHA between O2 and O3. The numbering of the symmetry-inequivalent oxygen atoms (see Supplementary Figure 8) is consistent with the labeling of the zeolite frameworks in the IZA database (available under: iza-structure.org/databases). Energies at the SCAN+D3(BJ) level were then obtained by SP calculations for all NEB images.
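A minimal ASE-based sketch of such a climbing-image NEB run at the NNP level is shown below; nnp_calculator() is a placeholder for an ASE-compatible calculator wrapping the trained NNP (e.g., via SchNetPack's ASE interface), and the file names and convergence threshold are illustrative.

from ase.io import read
from ase.neb import NEB
from ase.optimize import BFGS

# Reactant and product geometries (illustrative file names)
initial = read("reactant.xyz")
final = read("product.xyz")

# Build a band of seven intermediate images plus the two endpoints
images = [initial] + [initial.copy() for _ in range(7)] + [final]
for image in images:
    image.calc = nnp_calculator()      # placeholder: trained NNP as an ASE calculator

neb = NEB(images, climb=True)          # climbing-image NEB
neb.interpolate()                      # linear interpolation between the endpoints

opt = BFGS(neb, trajectory="neb.traj")
opt.run(fmax=0.05)                     # converge residual forces (eV/Angstrom)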
§.§ Δ-learning
We applied the Δ-learning approach<cit.> to improve the accuracy of our baseline SCAN+D3(BJ) model to the (hybrid DFT) ωB97X-D3(BJ) level. First, we generated a training set using a subset of an (NNP level) biased dynamics run of a proton jump in CHA between O2 and O3, taken from Ref. <cit.>. These structures were selected by FPS using the Euclidean distance of the (baseline) SchNet NNP representation vectors averaged over each MD snapshot. SP calculations were then performed for the 500 extracted configurations to obtain energies and forces at the ωB97X-D3(BJ) level.
The ΔNNP correction of the atomic energies Δ E_i to the NNP baseline model (SCAN+D3(BJ)) was obtained by linear regression of the SchNet representation vectors 𝐱_i of each atom i with a (column) weight vector 𝐰_i and bias b_i: Δ E_i = 𝐰_i^T𝐱_i + b_i. We tested the ΔNNP model performance using 250 randomly chosen structures from the dataset, and convergence tests showed that 150 training points give sufficiently low test set errors (see Supplementary Figure 6). To test the ΔNNP quality, we repeated the NEB calculations described above for the water-free proton jump and the Al–O2(H) bond hydrolysis in FAU as well as the proton transfer in CHA (O2-O3) without water. In addition, we performed two hundred SP calculations for structures taken from the biased dynamics runs of the proton jump (O1-O4) and the water-assisted Al–O2(H) bond cleavage to improve the force error statistics of the ΔNNP model.
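A minimal sketch of this regression step is shown below; it assumes element-resolved weights and biases fitted by ordinary least squares to total-energy differences between the two functionals (so the per-atom corrections sum to the total correction), which may differ in detail from the exact setup used here.

import numpy as np

def build_design_matrix(rep_vectors, elements, element_list):
    """One row per structure: for each element, the summed atomic
    representation vectors plus the atom count (acting as the bias term)."""
    n_feat = rep_vectors[0].shape[1]
    rows = []
    for X, Z in zip(rep_vectors, elements):      # X: (n_atoms, n_feat), Z: element symbols
        parts = []
        for el in element_list:
            mask = np.array([z == el for z in Z])
            parts.append(X[mask].sum(axis=0) if mask.any() else np.zeros(n_feat))
            parts.append(np.array([mask.sum()]))  # bias contribution of this element
        rows.append(np.concatenate(parts))
    return np.asarray(rows)

# rep_vectors: list of per-structure SchNet representation arrays (assumed precomputed)
# elements:    list of per-structure element-symbol lists
# dE:          E(wB97X-D3(BJ)) - E(SCAN+D3(BJ)) per structure (illustrative target)
# A = build_design_matrix(rep_vectors, elements, ["H", "O", "Al", "Si"])
# coef, *_ = np.linalg.lstsq(A, dE, rcond=None)   # stacked element weights and biases
# (with few training structures a small ridge penalty may be preferable)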
§.§ Biased dynamics using ML collective variables
Collective variables guiding the proton transfer (O1-O4) and Al–O2(H) bond dissociation reactions in FAU described in the manuscript were trained using a variational autoencoder built on top of NNP-generated representation vectors.
To train the CV for the proton jump reaction, we used 3500 data points from equilibrium MD runs on the reactant state and the same number on the product state. The data generated using an intuitively chosen CV and the steered dynamics method were available for verification (see Supplementary Figure 9) but were not used during training. The NNP representations were calculated and saved in a cache, and an encoder producing a single CV was trained together with the decoder. We trained for 40 epochs with a learning rate of 10^-4. The encoder generating the CV was a simple linear layer on top of the pre-trained representations. To reduce the complexity of the task, only the 30 representation elements that maximized the variance between reactant and product configurations were used. For the biased dynamics, we used well-tempered metadynamics from the PLUMED package.<cit.> The parameters for the simulation are listed in Supplementary Table 7. The simulation was run for 1 800 000 steps using a 0.5 fs timestep.
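For illustration, the following PyTorch sketch outlines the variance-based pre-selection of representation elements and a linear encoder/decoder pair trained by reconstruction; for brevity a plain (non-variational) autoencoder is shown, the hyperparameters mirror those quoted above, and all array names are placeholders, so the actual implementation differs in detail.

import torch
import torch.nn as nn

def select_features(reps_reactant, reps_product, k=30):
    """Keep the k representation elements with the largest variance
    over the combined reactant and product configurations."""
    both = torch.cat([reps_reactant, reps_product], dim=0)
    return torch.topk(both.var(dim=0), k).indices

class LinearAutoencoderCV(nn.Module):
    def __init__(self, n_in, n_cv=1):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_cv)   # the latent coordinate acts as the CV
        self.decoder = nn.Linear(n_cv, n_in)

    def forward(self, x):
        cv = self.encoder(x)
        return cv, self.decoder(cv)

# reps_*: (n_frames, n_rep) tensors of per-frame averaged NNP representations
# idx = select_features(reps_reactant, reps_product)
# data = torch.cat([reps_reactant, reps_product], dim=0)[:, idx]
# model = LinearAutoencoderCV(n_in=data.shape[1])
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# for epoch in range(40):
#     cv, recon = model(data)
#     loss = ((recon - data) ** 2).mean()      # reconstruction loss
#     opt.zero_grad(); loss.backward(); opt.step()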
The CV for the Al–O(H) bond dissociation was trained similarly to the proton jump case. We used the same number of data points obtained by running unbiased trajectories in both end states. The encoder was again only linear and was trained for 60 epochs with a learning rate of 2·10^-4. 200 representation elements were pre-selected from the representation vectors generated by the NNP.
Parameters for the well-tempered metadynamics method are reported in Supplementary Table 7. We ran the biased dynamics for both test reactions for 1 500 000 timesteps of 0.5 fs. To avoid simulating degrees of freedom that are unimportant for the Al–O(H) bond dissociation reaction, we introduced some restraints to the biased dynamics. We required the distance between the free water molecule and the aluminium atom to be at most 2.2 Å to prevent it from diffusing away. We also fixed two hydrogen atoms to their corresponding oxygens to avoid permutations that would complicate the process. In the same fashion, we disallowed the formation of the hydrogen bond (Figure 3d-FS) for different O and H pairs.
Otherwise, the setup is standard for the representation-based collective variables and a more thorough description of the autoencoder architecture and workflow can be found in <cit.>.
§.§ Computational details
All simulations at the DFT level used the Vienna Ab initio Simulation Package<cit.> (VASP, version 5.4.4) along with standard versions of the Projector Augmented-Wave (PAW) potentials.<cit.> Calculations at constant volume were performed with an energy cutoff of 400 eV. Structure optimizations at constant (zero) pressure employed a larger energy cutoff of 800 eV. The minimum linear density of the k-point grids was set to 0.1 Å^-1 along the reciprocal lattice vectors. Single-point calculations using ReaxFF<cit.> were performed with GULP.<cit.> NNP training and NNP level simulations used the Python packages SchNetPack<cit.> (version 1.0) and the atomic simulation environment (ASE).<cit.>
§ DATA AVAILABILITY
The trained (baseline) NNPs and the Δ-learned model along with an example for (single-point) calculations using SchNetPack (version 1.0) are available in a Zenodo repository under CC-BY-NC-SA 4.0 license (https://doi.org/10.5281/zenodo.8139369). MD trajectories of all generalization tests, energy and force data (SCAN+D3(BJ), NNP and ReaxFF level) used for error statistics as well as trajectories of the NEB calculations are openly available (CC BY 4.0 license) in another Zenodo repository version (https://doi.org/10.5281/zenodo.8141614). The remaining data for the reproduction of results is available upon reasonable request.
§ ACKNOWLEDGMENTS
Charles University Centre of Advanced Materials (CUCAM) (OP VVV Excellent Research Teams, project number CZ.02.1.01/0.0/0.0/15_003/0000417) is acknowledged. LG acknowledges the support of the Primus Research Program of Charles University (PRIMUS/20/SCI/004) and that of Czech Science Foundation (23-07616S). Computational resources were provided by the e-INFRA CZ project (ID:90140), supported by the Ministry of Education, Youth and Sports of the Czech Republic.
§ AUTHOR CONTRIBUTIONS
A.E. performed the simulations needed to obtain the dataset, curated the training/testing dataset, e.g., using active-learning strategies, trained and validated the NNP models (both baseline and Δ-learned), conceived and carried out the bulk of the generalization tests; analyzed the data, wrote the original manuscript draft and contributed to its later refinement. M.Š. generated the collective variables using the baseline NNP, carried out the metadynamics simulations and co-wrote the sections on accelerating rare event sampling using baseline model representations. I.S. carried out a part of generalization tests, in particular the transition state modelling. P.N. acquired funding, partially supervised the work. C.J.H. partially supervised the work, co-wrote and revised the manuscript. L.G. acquired funding, supervised the work, contributed to data analysis/curation, conceived the extensions of the baseline model and co-wrote and revised the manuscript.
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2307.01390v1
|
20230703230449
|
Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives
|
[
"Danele Lunghi",
"Alkis Simitsis",
"Olivier Caelen",
"Gianluca Bontempi"
] |
cs.LG
|
[
"cs.LG",
"cs.CR"
] |
Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives
The author pursues a joint PhD degree under the auspices of DEDS (No 955895), a Horizon 2020 MSCA ITN, and he is co-affiliated with Université Libre de Bruxelles in Belgium, Athena Research Center and the University of Athens (a degree awarding institute for Athena R.C.) in Greece.
Université Libre de Bruxelles,
University of Athens, and Athena RC
Bruxelles
Belgium
[email protected]
Athena Research Center
Athens
Greece
[email protected]
Worldline S.A., Belgium
Bruxelles
Belgium
[email protected]
Université Libre de Bruxelles
Bruxelles
Belgium
[email protected]
Data economy relies on data-driven systems, and complex machine learning applications are fueled by them. However, machine learning models are exposed to fraudulent activities and adversarial attacks, which threaten their security and trustworthiness. In the last decade or so, research interest in adversarial machine learning has grown significantly, revealing how learning applications could be severely impacted by effective attacks.
Although early results of adversarial machine learning indicate the huge potential of the approach in specific domains such as image processing, there is still a gap in both the research literature and practice regarding how to generalize adversarial techniques to other domains and applications.
Fraud detection is a particularly interesting application, due to the reciprocal influence between the modern data economy and online payment systems, and to the machine learning challenges fraud detection poses, the understanding of which can help in multiple other machine learning domains.
In this work we show how attacks against fraud detection systems differ
from other applications of adversarial machine learning,
and propose a number of interesting directions to bridge this gap.
[500]Security and privacy Intrusion/anomaly detection and malware mitigation
[500]Information systems Data mining
Gianluca Bontempi
9 June 2023
§ INTRODUCTION
We live in the era of a new, fourth paradigm of discovery <cit.> based on data-intensive science and data-driven methods and algorithms. Business decisions are increasingly based on data-driven Machine Learning (ML) algorithms and data residing in a plurality of sources (often offered by data marketplaces) and formed in various modalities. Such a modern data economy ecosystem rapidly changes how the economy works and provides immense economic and social value to our society. The emerging field of Data Economy aims at building the right tools and safeguards to ensure that the 'right' algorithms interact with the 'right' data at the 'right' price. 'Right' can be interpreted in various ways including 'fair', 'just', 'explainable', and 'secure' among others. In this study, we focus on the ‘secure’ aspect of Data Economy. As data-driven systems play a crucial role in many applications and are indispensable for many scientific, economic, and governmental activities, we should not neglect the immense risks of fraudulent activity and we need to reinforce such systems with secure learning algorithms and practices.
The necessity of operating machine learning models in adversarial environments, where an adversary actively works to have the implemented model behave in a different way from what it was defined for, led to the creation of a new research field called Adversarial Machine Learning (AML). Over the last two decades, adversarial machine learning has become a research topic of increasingly growing interest, especially due to the significant initial results obtained in the field of image recognition <cit.>.
The modern data economy ecosystem provides immense economic and social value to our society.
Nowadays, data-driven systems play a crucial role in many applications and are indispensable for many scientific, economic, and governmental activities.
Given the ubiquity of such systems, we should not underestimate the importance of reinforcing them with secure learning algorithms and practices.
The necessity of operating machine learning models in adversarial environments, where an adversary actively works to have the implemented model behave in a different way from what it was defined for, led to the creation of a new research field called Adversarial Machine Learning (AML).
Over the last two decades,
adversarial machine learning has become
a research topic of increasingly growing interest, especially due to the
significant initial results obtained in the field of image recognition <cit.>.
Attacks designed against an algorithm's training set (poisoning attacks) and at test time (evasion attacks) make machine learning systems highly vulnerable to attacks in a constrained domain.
Despite the successful application of adversarial techniques to image recognition, generalizing them to other applications and domains is neither trivial nor obvious. For example, image recognition presents relatively few semantic and lexical constraints on the data, and adapting algorithms designed for it to applications where such constraints are relevant presents serious challenges <cit.>.
For the rest of this work, we will refer to
such applications as `constrained applications'.
An additional challenge is that most attacks have been developed against static systems,
whereas many applications
operate on streaming data.
Research has only recently dealt with adversarial attacks against online systems, focusing so far mainly on the theoretical aspect of the problem <cit.>.
However, this could
have substantial economic implications, as many
valuable targets in our economy are constrained and often are online applications too.
A case in point is bank fraud detection, which heavily relies on data-driven systems. The characteristics of the domain impose specific constraints on the data that should be taken into consideration during the design of adversarial strategies.
For example, transactions cannot have a negative amount. Moreover, fraud detection is usually performed on aggregated features, i.e., features obtained by combining multiple transactions to observe the customers' behavioral patterns <cit.>, which
depend on the past usage of an account.
Such constraints limit the range of possible actions of an attacker.
Furthermore, the changing habits of users impose that the system continuously adapts to the environment through concept drift adaptation algorithms <cit.>.
With accelerated digitalization, new risks of cybercrime are emerging. Fraudsters continuously find new ways to make financial gains, forcing the payment systems to put more and more effort into fraud detection systems <cit.>.
Global losses from payment fraud have tripled from $9.84 billion in 2011 to $32.39 billion in 2020, an increase of more than 200% <cit.>.
This number could significantly increase if skillful attackers
attempt to effectively trick
the machine learning systems underlying the fraud detection engines.
For instance, smart fraudsters may
attempt to understand the classifier's behavior to craft undetected frauds. To reach this goal, they could
attempt a brute-force approach:
first compromise
a number of cards and then perform multiple transactions to understand the model's behavior and the characteristics of the transactions it considers genuine. While testing the model, a certain number of cards may be blocked.
Still, if the fraudster has access to enough cards, they might bypass a non-state-of-the-art fraud detection mechanism.
And unfortunately, in the real world, fraudulent attempts are much more sophisticated and, at times, successful as well.
Crucially, if this behavior spreads, the whole trustworthiness of online payment systems would be jeopardized.
In fact, trust in online payment systems is a fundamental condition to foster the growth of online services, which are a significant data source. If the risk of having online transactions hijacked becomes too high, or if the procedures to avoid fraud make the payment operations too cumbersome, the existence and profitability of the whole data economy ecosystem may be jeopardized.
Our goal and contributions.
In this paper, we consider the problem of applying adversarial machine learning techniques to fraud detection to ensure the robustness of online transaction systems against hostile attacks. Our analysis comprises the following steps.
* We describe the threats that different adversarial attacks pose to data-driven systems, and also motivate why in this work, we focus on the so-called evasion attacks (see Section <ref>).
* We elaborate on evasion attacks, present a taxonomy of the most important attacks described in the literature, and discuss their strengths and weaknesses. We also discuss research attempts to mitigate the risk from such attacks by developing defensive mechanisms for adversarial machine learning (see Section <ref>).
* We present
early solutions proposed for adversarial attacks against online (streaming) applications and fraud detection systems (see Section <ref>).
* We present critical challenges and limitations, argue that there is a gap in the literature to deal with such issues, and offer our perspectives toward future research (see Section <ref>).
We present next the various types of attacks typically met in security-sensitive applications.
§ THREAT MODELING IN AML
The best way to model a problem in security applications is in terms of threats, and threat modeling is generally the first step for
further analysis on the topic <cit.>. For adversarial machine learning, this translates into modeling possible attackers based on their goals, their knowledge, and their capabilities.
Based on these axes, we provide an overview of the different fields of adversarial machine learning (see also Figure <ref>).
Inference vs. integrity attacks.
A relevant criterion to map the threats is to consider the nature of the security violation <cit.>.
A primary distinction is between adversaries who want to infer information from the system and those who wish to influence its behavior somehow.
The first group will perform the so-called inference attacks, where the goal is to obtain information about the training set of the model. Machine learning models are trained on data, and when the training data is
sensitive,
there is a risk that some unwanted information may leak from the behavior of the trained model.
For instance, network models dealing with natural language texts could unintentionally memorize rare or unique sequences, which a skillful attacker may retrieve and exploit <cit.>.
Attacks that influence the system, instead, can interfere with two of its properties: availability and integrity <cit.>. In the former case, the attacker may target the availability of certain system operations by using false positives to create Denial of Service attacks <cit.>.
Integrity attacks, in contrast, exploit false negatives to perform operations that the system is not meant to allow. In turn, integrity attacks are divided into poisoning and evasion attacks.
Poisoning attacks.
In those attacks, the attacker may access the system's training set and inject or modify one or multiple observations to influence the model's training process.
Such attacks are called poisoning attacks and aim at influencing the model's behavior.
Example techniques are the backdoor attacks,
where a backdoor key is inserted in the model. The backdoor does not affect the classifier's performance on most samples. Instead, it
becomes active when a particular key is contained in the data, effectively allowing the attacker to infect the model unnoticed <cit.>.
However, in fraud detection, the focus of this work, the training set is generally not available online, and it is incredibly complex even for numerous fraudsters working in parallel to inject enough transactions to affect the model's training process, given the massive number of transactions that production systems process every day.
Evasion attacks.
These attacks involve cases where the attacker aims at influencing the model's behavior but has no access to its training set.
Hence, in this domain, as fraudsters aim at crafting frauds that the data-driven system will not detect, their goal is primarily to influence the model.
As a result, evasion attacks pose high risks and are particularly critical in fraud detection.
White and black box attacks.
Another dimension involves white box and black box attacks.
White box attacks assume that the attacker has complete knowledge of the model's structure and weights and may exploit it to craft efficient and precise attacks. Black box attacks require no prior knowledge
and treat the target model as a black box oracle, the structure of which can be inferred through multiple interactions. While crucial to understanding the literature on adversarial machine learning, this division may occasionally be misleading. For example, an attacker may know the model's structure but not its weights, or they may see part of the training set but not know what
model is employed.
Thus, a third category, gray box attacks, has been proposed to model this gray area.
Not all attacks aimed at compromising a model have the same goal. For example, an attacker may want the classifier to misclassify an observation in any direction or they may want it to classify it as a precise class. For instance, let us assume that a classifier has multiple classes that lead to an outcome unfavorable to potential attackers and one class that allows them to reach their goal. Then, attackers have no interest in making the classifier predict any wrong class, and instead, they aim precisely at the class of interest.
In the second case, the attack is called targeted, otherwise it is untargeted. In fraud detection, attackers typically try to inject frauds that are not recognized by the system.
§ EVASION ATTACKS AND DEFENSES
In this section, we present evasion attacks that are particularly challenging and discuss ideas for defending against them.
§.§ Attacks
White box attacks.
White box attacks are generally the easiest to perform, and over the years, an extensive array of attacks has been designed. A common idea is to formulate the evasion problem as a constrained optimization problem, where the goal is to modify an observation x to generate an adversarial sample x_adv, keeping x and x_adv as close as possible in the original data space. The methods described here vary in the cost metric used (L_0, L_1, L_2, L_∞ …) and in the formulation and optimization approaches employed.
Szegedy et al. <cit.> use the L_1 norm and solve the problem using Limited-memory Broyden Fletcher Goldfarb Shanno (L-BFGS) optimization. The Fast Gradient Sign Method (FGSM) <cit.> instead takes a single step in the direction of the sign of the loss gradient, maximizing the classifier's loss under an L_∞ budget.
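For concreteness, a minimal PyTorch sketch of the untargeted FGSM step follows; the classifier, inputs, and labels are placeholders, and practical implementations typically also project the perturbed input back onto the valid data range.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Untargeted FGSM: one gradient-sign step that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # L_inf-bounded step
    return x_adv.detach()

# Usage (illustrative): x_adv = fgsm(classifier, inputs, labels, epsilon=0.03)
# For image data one would typically also clamp x_adv to the valid pixel range.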
DeepFool <cit.> is an attack designed mainly for linear models. It works by iteratively generating small perturbations, determining the nearest hyperplane for an input element, and projecting it beyond this hyperplane. While linearization can be performed to extend the method to non-linear problems, the attacks can hardly be used in unconstrained domains.
Another interesting method is the Jacobian-based Saliency Map Attack (JSMA) <cit.>, a targeted attack aimed at controlling the L_0 norm, hence minimizing the number of features required to perform the attack. Interestingly, controlling the L_0 norm allows working in domains where the attacker has access only to a set of features, which may be relevant for some applications.
Similarly, Carlini & Wagner <cit.> propose a method that minimizes the attack's L_0 and L_1 norm, allowing the design of attacks for both constrained and unconstrained domains. Moreover, such an attack was proven to break classic defenses such as distillation. Finally, Elastic-Nets <cit.> propose formulating the optimization problem as an elastic-net regularized optimization problem. Elastic-Nets can optimize both the L_1 and L_2 norms and have shown significant transferability of the attacks. However, the form of the optimization translates into a significantly longer computation time compared to L-BFGS.
Black box attacks.
Zeroth-Order Optimization (ZOO) <cit.> is a black box attack inspired by the C&W attack. This method uses the logits provided by the target to estimate the gradients of the classification loss, which is then optimized through zeroth-order optimization. Moreover, the attack has been designed to reduce the number of queries to the model (hence avoiding direct query detection) through importance sampling, hierarchical attacks, and attack space reduction.
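The core ingredient of such zeroth-order methods is a finite-difference gradient estimate built purely from model queries; the simplified sketch below omits ZOO's importance sampling, hierarchical attack, and attack-space reduction components.

import numpy as np

def zoo_gradient_estimate(query_fn, x, h=1e-4, n_coords=128, rng=None):
    """Estimate the gradient of a scalar attack loss from queries only.
    query_fn(x) returns the attack loss (e.g., built from the target's logits);
    only a random subset of coordinates is probed per call, as in ZOO."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x, dtype=float)
    flat = x.ravel()
    coords = rng.choice(flat.size, size=min(n_coords, flat.size), replace=False)
    for i in coords:
        e = np.zeros_like(flat)
        e[i] = h
        # symmetric finite difference along coordinate i
        grad.ravel()[i] = (query_fn((flat + e).reshape(x.shape))
                           - query_fn((flat - e).reshape(x.shape))) / (2 * h)
    return grad

# The adversarial example is then updated by (projected) descent on the
# estimated gradient, e.g. x_adv -= lr * zoo_gradient_estimate(loss, x_adv).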
Another approach considers
iterative targeted/non-targeted decision-based attacks, which do not require the logits of the target system <cit.>. Instead, the attack uses a rejection sampling algorithm to track the classifier's decision boundary and design the attacks. Using only the decisions of the classifier is the most realistic setting for many machine-learning APIs.
Similarly, the OPT attack <cit.> uses only the decisions of the classifier as inputs. OPT uses the Randomized Gradient-Free (RGF) method to estimate the gradient at each iteration rather than the zeroth-order coordinate descent method, and uses the L_1 and L_2 norms to decide the size of the perturbation.
A different technique employs
a substitute model <cit.>. In particular, multiple queries
are
used to build a model that behaves similarly to the target classifier, to then design white box attacks against the substitute model. Due to the transferability of machine learning attacks, such attacks are likely to work against the original model too.
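A simplified sketch of the substitute-model strategy follows: label synthetic queries with the black box oracle, fit a small differentiable substitute, and craft white box attacks against it, relying on transferability; the oracle, query data, and architecture are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_substitute(oracle_predict, x_query, n_classes, epochs=50):
    """Fit a small differentiable substitute on oracle-labelled queries.
    oracle_predict returns hard labels only (black box); x_query is a float tensor."""
    y = torch.as_tensor(oracle_predict(x_query), dtype=torch.long)
    model = nn.Sequential(nn.Linear(x_query.shape[1], 64), nn.ReLU(),
                          nn.Linear(64, n_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_query), y)
        loss.backward()
        opt.step()
    return model

# White box gradients of the substitute (e.g., the FGSM routine sketched earlier)
# are then used to craft perturbations that are transferred to the black box target.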
Finally, mimicry attacks <cit.> may be considered a simple form of adversarial attack. The idea, developed in intrusion detection, is to generate observations that avoid detection by the system by mimicking the characteristics of normal data. Mimicry attacks can work in black box and gray/white box settings, but knowing the features used to evaluate users' behavior allows the attack to be optimized better.
§.§ Defenses
It has been advocated that the choice of features may increase the vulnerability of a model to adversarial attacks <cit.>. In particular,
a set of highly predictive features that is brittle and incomprehensible to humans
could potentially
be modified without the humans noticing it.
One approach to fix this would be to create a more robust dataset that does not contain non-robust features.
An approach designed for neural networks proposes a defense mechanism based on regularization <cit.>.
It is inspired by the classic regularization employed in training, which can be seen as weight regularization, and implements a new technique called input regularization.
The key idea is that by reducing the effect that small changes in the data space have on the classifier's decisions, adversaries require higher leverage on the data to breach the model.
Another defense mechanism for Deep Neural Networks (DNN) is distillation, a technique
to train a neural network using knowledge transferred from a different
DNN.
<cit.> proposes a distillation version using the knowledge extracted from a DNN to improve its resilience to adversarial samples. This knowledge is then used to reduce variation around the inputs, using distillation to improve the generalization capabilities of the model and, consequently, its robustness towards adversarial attacks.
A widespread defense against adversarial attacks is adversarial training, i.e., the use of adversarial samples in the training of a machine learning model.
A form of regularization <cit.>, adversarial training significantly increases the model's robustness against the attacks used in the training process. However, recent works show that training a model against multiple attacks may be cumbersome <cit.>, and training against one type of perturbation typically does not provide guarantees against different types of attacks <cit.>.
While some techniques that defend against multiple perturbations exist <cit.>, adversarial training is still a highly incomplete defense. Moreover, the well-known trade-off between robustness and accuracy <cit.> implies that all the proposed defenses have a cost, and employing them when a threat is not present may result in an unjustifiable loss of accuracy for the classifier.
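For illustration, a minimal adversarial training step based on single-step FGSM perturbations is sketched below; practical schemes usually rely on stronger multi-step attacks (e.g., PGD) and careful tuning of the clean/adversarial mixture.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03, adv_weight=0.5):
    """One training step on a mixture of clean and FGSM-perturbed samples."""
    # craft adversarial examples against the current model state
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    optimizer.zero_grad()
    loss = ((1 - adv_weight) * F.cross_entropy(model(x), y)
            + adv_weight * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()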
§ EXISTING SOLUTIONS
In this section, we review solutions related to (a) evasion attacks against online systems, (b) machine learning based fraud detection, and (c) adversarial attacks on fraud detection.
Online evasion attacks.
Past work has studied the problem of evasion attacks against speech recognition systems <cit.>.
Attacks against time series data such as speech and financial time series are often performed at run time without knowing the full time series. This is because the data is one-pass, and the perturbation is added at each time t without access to the elements x_t' of the series with t' > t.
This approach uses reinforcement learning to model the problem,
where the attacker bases his perturbation
on the current status of the model. The authors propose finding the optimal policy through the Imitation Learning Strategy, where the model learns from the trajectory of a competent agent called an expert. In this case, they use state-of-the-art, non-real-time adversarial example crafting techniques as the expert.
Another work focuses on two aspects of online evasion attacks: the partial knowledge attackers have of the target model, and the irrevocability of their decision, since they operate on a transient data stream <cit.>.
The second problem is fascinating. Generally, attacks in the literature assume that the attacker can decide which points they want to change, but in a streaming environment, the attacker must decide whether they want to launch an attack in the present moment, and the decision, when taken, is irrevocable.
The authors study a deterministic variant of the problem of online adversarial learning, where the adversary must execute k successful attacks within n streamed
data points, where k << n, and reduce it to the classic computer science problem known as the k-secretary problem, where one must choose the k best candidates as secretaries from a randomly ordered set of n potential candidates.
Then, they formulate a stochastic variant of the problem, which better suits the classic black box adversarial attack scenario.
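For intuition, the classical single-choice (k = 1) secretary rule underlying this reduction can be sketched in a few lines: observe the first n/e stream items without acting, then attack the first item whose estimated payoff beats everything seen so far.

import math

def online_attack_selection(stream_values):
    """Classical secretary rule (k = 1): skip the first n/e observations,
    then commit to the first value beating the best seen so far.
    stream_values[i] is the attacker's estimated payoff of attacking item i."""
    n = len(stream_values)
    cutoff = int(n / math.e)
    best_seen = max(stream_values[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if stream_values[i] > best_seen:
            return i                      # irrevocable decision: attack item i
    return n - 1                          # forced to act on the last item

# With roughly 1/e probability this rule picks the single best opportunity;
# the k-secretary variants generalize it to k attacks.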
Data-driven fraud detection.
Machine learning for fraud detection is a complex and widely studied problem.
Early works focused
on profiling the users based on the idea that the same transaction may be considered fraudulent or regular
depending on how well it fits the habits of the person who performs it <cit.>.
More recent works have been focusing on peculiar aspects of data distributions, such as the severe class imbalance, concept drift <cit.>, verification latency <cit.>, and the scalability of the learning process in a streaming environment <cit.>.
Notably, fraud detection is mainly performed through supervised methods <cit.>,
as
unsupervised approaches struggle with covering all possible scenarios of legitimate transaction activities <cit.>.
Finally, a significant challenge for fraud detection research is the lack of results sharing due to confidentiality issues <cit.>.
Synthetic data generators, such as the one presented in <cit.>, tackle this issue and allow for controlling the environment and testing against specific challenges, such as time dependency and concept drift. Still, clearly, additional efforts are needed to make fraud detection data widely available.
Adversarial attacks on fraud detection systems.
There are few works on adversarial attacks against fraud detection systems to date.
El-Awady <cit.> proposes an evasion attack against fraud detection systems, showing how the problem of maximizing the revenue of an attacker who has access to a set of stolen cards may be well expressed through reinforcement learning.
Carminati et al. <cit.>
study the problem of attacking a fraud detection system through evasion.
The paper argues that in fraud detection the attacker only has access to the raw data and not to the features used by the model, and changes in the feature space may not correspond to feasible changes in the data space.
The proposed threat model considers the attacker's goals, knowledge, and leverage.
In particular, three scenarios are considered:
(i) White box, the attacker knows everything;
(ii) Black box, the attacker does not know the detection system and training data, but knows the previous transactions performed using the card and has access to a dataset similar to the one used by the system;
and (iii) Gray box, the attacker knows the model's features and the same dataset used in the black box setting, and nothing more.
Based on the substitute model attack <cit.>, the paper
uses the data in the training set to train a machine learning model called Oracle, against which the attacks are then designed. Finally, it identifies two features the attacker can freely change (time and amount) and compares different strategies.
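For context, the sketch below outlines a generic substitute-model (surrogate) attack of the kind described above; it is a hypothetical illustration using scikit-learn-style estimators, not the exact pipeline of the cited paper. A surrogate classifier (the "Oracle") is trained on data the attacker can access, and candidate fraudulent transactions are kept only if the surrogate expects them to evade detection.

```python
from sklearn.ensemble import RandomForestClassifier
import numpy as np

def craft_with_surrogate(X_attacker, y_attacker, candidate_txs):
    """Generic substitute-model attack sketch.

    X_attacker, y_attacker: data the attacker can access (e.g., a similar
                            public dataset plus the card's own transactions).
    candidate_txs: array of candidate fraudulent transactions, restricted to
                   attacker-controllable features (e.g., time and amount).
    Returns the candidates the surrogate predicts as legitimate.
    """
    oracle = RandomForestClassifier(n_estimators=100, random_state=0)
    oracle.fit(X_attacker, y_attacker)                  # train the surrogate
    scores = oracle.predict_proba(candidate_txs)[:, 1]  # P(fraud) per candidate
    return candidate_txs[scores < 0.5]                  # keep likely-evading ones

# toy usage with random data (2 features: normalized time and amount)
rng = np.random.default_rng(0)
X = rng.random((200, 2)); y = (X[:, 1] > 0.8).astype(int)
candidates = rng.random((20, 2))
print(len(craft_with_surrogate(X, y, candidates)), "candidates expected to evade")
```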
Another relevant contribution
analyzes the performance of various black-box evasion attacks against
an insurance fraud detection system.
The work isolates four constraints (the difference between editable and non-editable features, data imbalance, the need to design attacks that go unnoticed by human investigators, and the presence of non-continuous features) and proposes various solutions to adapt existing techniques to them.
Interestingly, the authors use an open-source Python library called Adversarial Robustness Toolbox (ART) <cit.>. Moreover, the experiments are performed on a real-world, publicly available German loan dataset <cit.>, even though the considered data set is significantly smaller and less imbalanced than most bank transaction datasets <cit.>.
§ CHALLENGES AND PERSPECTIVES
Performing adversarial attacks against fraud detection systems is not trivial, as fraud detection presents domain-specific challenges for these attacks.
First, to perform a transaction, an attacker would need access to a stolen or cloned card.
Since this operation comes at a cost, the attackers should be highly efficient in the number of transactions performed.
Moreover, fraud detection systems can utilize time-dependent features <cit.>, where the past transactions of a card influence the probability of any transaction being accepted. Adversarial attacks, however, typically work at the level of aggregated features. Hence, attackers must find the transactions that, after being processed together with the past transactions, lead to the same result obtained with standard evasion attacks <cit.>. In general, such transactions are extremely hard or even impossible to find.
Additionally, in real-world scenarios, several transaction features are not observable or controllable by the attacker. For example, the average number of frauds on a terminal in the last few days, used in <cit.>, is generally unknown to users and fraudsters alike, and hence cannot be considered in the evasion attack employed. Class imbalance can also lead to a significant loss in performance for most attacks unless adequately tackled <cit.>.
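As a minimal illustration of why attacking aggregated, time-dependent features is hard, the sketch below computes a typical card-level rolling aggregate; the feature names and window size are assumptions, not those of any specific system. Any new transaction the attacker crafts is folded together with the card's history before the model ever sees it.

```python
from datetime import datetime, timedelta

def card_aggregates(history, new_tx, window_days=7):
    """Compute simple time-dependent features for a candidate transaction.

    history: list of dicts with 'timestamp' (datetime) and 'amount' (float),
             the past transactions of the card (not controlled by the attacker).
    new_tx:  dict with 'timestamp' and 'amount', the transaction being crafted.
    """
    window_start = new_tx["timestamp"] - timedelta(days=window_days)
    recent = [tx for tx in history if tx["timestamp"] >= window_start]
    amounts = [tx["amount"] for tx in recent] + [new_tx["amount"]]
    return {
        "amount": new_tx["amount"],
        "tx_count_7d": len(amounts),                    # history-dependent
        "mean_amount_7d": sum(amounts) / len(amounts),  # history-dependent
        "max_amount_7d": max(amounts),                  # history-dependent
    }

# The attacker only controls 'new_tx'; the aggregates the classifier consumes
# also depend on 'history', so a target point in feature space may be
# unreachable by any feasible single transaction.
history = [{"timestamp": datetime(2023, 5, 1, 10), "amount": 40.0},
           {"timestamp": datetime(2023, 5, 3, 18), "amount": 15.0}]
print(card_aggregates(history, {"timestamp": datetime(2023, 5, 5, 12), "amount": 900.0}))
```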
Furthermore, data-driven fraud detection
employs delayed feedback, as human investigators are often called to analyze suspicious transactions before a card is blocked <cit.>
(see Figure <ref>).
Hence, estimating the effect of each transaction is harder for the fraudster: the fact that a transaction is accepted does not mean that the card used to perform it will not be blocked once the investigators analyze it.
Finally, fraud detection is often performed online. As discussed in Section <ref>, this requires attacks to work in a single pass, and choosing the right moment to launch an attack is a further complication for the attacker.
Additionally, adaptations to concept drift may lead to continuous changes in the learner, which make it harder for a black box attacker to study it.
A common assumption in adversarial machine learning is that attacks can happen at a frequency high enough to make the drifts irrelevant. However, the problems of a constrained card budget and delayed feedback make this challenge relevant here. Moreover, the frequency at which fraudsters can perform transactions with any given card is limited by the automatic checks that fraud detection systems perform, which may result in the automatic blocking of the cards.
On the other hand, concept drift may also create opportunities for poisoning. For instance, classifier retraining means that the most recent data can disproportionately impact the model <cit.>, which may mitigate the poisoning scalability issue discussed in Section <ref>.
To the best of our knowledge, only a few of these challenges have been addressed in the literature. For example,
<cit.> considers the problem of feature sparsity but does not assume a streaming setting or consider delayed feedback. Conversely, studies on adversarial attacks against streaming applications do not treat problems like feature observability, time dependency, or delayed feedback.
Nonetheless, this gap has significant implications for designing the right defense, because the difficulty of assessing the threat leads to two conflicting risks. First, we may underestimate the threat, employ few or no defenses, and remain vulnerable to any new effective attack crafted by the fraudsters. On the flip side, overestimating the risk may lead to an excessive focus on the system's robustness at the expense of accuracy: if the risk is in fact low, this translates into an ongoing cost in accuracy, i.e., an excessive number of false alarms or too many frauds left undetected. While this trade-off is familiar in any security application, the lack of an adequate understanding of the threats makes it significantly more dangerous.
Considering the example described in Section <ref>, let us assume that the attackers do not know the model and employ a black box attack such as the substitute model attack <cit.>. They will face a number of challenges: (a) the limited number of cards they can use, (b) delayed feedback, which creates uncertainty in the information about the model that each transaction provides, and (c) the fact that the fraud detection engine treats each card differently based on its usage history.
Even if the attackers know the transaction history of all the cards in their possession, they will still be limited in creating transactions in the feature space.
However, these challenges also depend on the budget of the attacker. For instance, if they had infinite cards, they could use each card only once during the estimation phase, hence solving the issue of delayed feedback, and they could still be “fast” enough not to be affected by concept drift. They could even learn the history distribution of each card: assuming that each card C is characterized by a history h(C), they could estimate the probability distribution P(h(C)).
Although acquiring an infinite number of cards is not a viable scenario, several critical research questions arise: “How many cards do attackers need to pose a realistic threat?” or “Could they exploit other, unknown properties of fraud detection systems to increase the efficiency of their attacks?”. Although several organizations have put together rules and policies based on experience and common sense, a proper, principled research effort is still required to assess what constitutes a real risk, well before a severe breach happens.
Admittedly, secure learning requires an accurate threat analysis, which in turn requires efforts to understand the vulnerabilities of a system and the attacks that could be performed against it. In particular, it is crucial to know how serious the threat of various families of attacks, such as poisoning and evasion, is for online and constrained applications. This would allow directing the right resources toward defending against the most likely, high-risk threats. An extra complication, however, is that as fraud detection researchers and practitioners put effort into reinforcing their defenses, fraudsters simultaneously invent novel ways to attack them.
§ CONCLUSIONS
Despite the recent significant advancements of adversarial machine learning, the relevant studies stress that machine learning applications are highly vulnerable to prepared and skillful attackers. In the computer vision paradigm, poisoning and evasion attacks have proven capable of breaching a variety of machine learning models. As several business-critical applications have not yet been studied adequately, a gap between theoretical research and many real-world applications remains.
Fraud detection, in particular, presents many challenges to an attacker that are significantly different from those studied so far in areas such as image recognition. Examples include limited budget, time-dependent features, concept drift, and verification latency. However, the same challenges also complicate the construction of effective and efficient defense mechanisms, and the relevant research is still in its infancy.
This leaves room for skillful attackers to create new attacks, exploiting the gap in the research and the difficulties in assessing the severity of the threat.
Hence, as we have entered an era driven by a new data economy paradigm, it is imperative to fully understand how we can reinforce our data-driven learning systems with effective yet practical fraud detection, minimizing the risk of fraudulent activity and bias. And as more and more applications operate online, research needs to adapt rapidly and deliver solid results towards effective defense and risk assessment, and the secure operation of constrained applications in adversarial settings.
Future work includes comparing existing adversarial attacks against available fraud detection data sets and designing synthetic transaction generators that allow testing adversarial attacks against concept drift, delayed feedback, and the other main challenges highlighted in this work.
|
http://arxiv.org/abs/2307.00543v1
|
20230702112333
|
Defending Against Malicious Behaviors in Federated Learning with Blockchain
|
[
"Nanqing Dong",
"Zhipeng Wang",
"Jiahao Sun",
"Michael Kampffmeyer",
"Yizhe Wen",
"Shuoying Zhang",
"William Knottenbelt",
"Eric Xing"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.CR",
"cs.GT"
] |
Defending Against Malicious Behaviors in Federated Learning with Blockchain
Nanqing Dong, Zhipeng Wang, Jiahao Sun, Michael Kampffmeyer, Yizhe Wen, Shuoying Zhang,
William Knottenbelt, and Eric Xing, Fellow, IEEE
The first two authors contributed equally to this work. This work was supported in part by FLock.io under the FLock Research Grant.
N. Dong is with the Department of Computer Science, University of Oxford, Oxford, OX1 3QD, UK. (email: [email protected])
Z. Wang and W. Knottenbelt are with the Department of Computing, Imperial College London, London, SW7 2AZ, UK. (emails: [email protected], [email protected])
J. Sun is with the Data Science Institute, Imperial College London, SW7 2AZ, UK; and also with FLock.io, London, WC2H 9JQ, UK. (email: [email protected])
M. Kampffmeyer is with the Department of Physics and Technology at UiT The Arctic University of Norway, 9019 Tromsø, Norway. (email: [email protected])
Y. Wen and S. Zhang are with FLock.io, London, WC2H 9JQ, UK. (emails: [email protected], [email protected])
E. Xing is with the Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA; and also with Mohamed bin Zayed University of Artificial Intelligence, Masdar City, Abu Dhabi, UAE (email: [email protected])
August 1, 2023
In the era of deep learning, federated learning (FL) presents a promising approach that allows multi-institutional data owners, or clients, to collaboratively train machine learning models without compromising data privacy. However, most existing FL approaches rely on a centralized server for global model aggregation, leading to a single point of failure. This makes the system vulnerable to malicious attacks when dealing with dishonest clients.
In this work, we address this problem by proposing a secure and reliable FL system based on blockchain and distributed ledger technology.
Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, which are powered by on-chain smart contracts, to detect and deter malicious behaviors. Both theoretical and empirical analyses are presented to demonstrate the effectiveness of the proposed approach, showing that our framework is robust against malicious client-side behaviors.
Federated learning has been a promising solution to utilize multi-site data while preserving users' privacy.
Despite the success of integrating blockchain with federated learning to decentralize global model aggregation, the protection of this integration from clients with malicious intent in federated scenarios remains unclear.
This paper presents the first formulation of this problem and the proposed stake-based aggregation mechanism shows robustness in detecting malicious behaviors. The results in this work not only pose a new research direction in federated learning, but can also benefit a wide variety of applications such as finance and healthcare.
Blockchain, Deep Learning, Federated Learning, Trustworthy Machine Learning
§ INTRODUCTION
Nowadays, machine learning (ML), or more specifically, deep learning, has transformed a broad spectrum of industries, ranging from finance to healthcare. In current ML paradigms, training data are first collected and curated, and then ML models are optimized by minimizing certain loss criteria on the training data. A common underlying assumption in this learning environment is that the training data can be instantly accessed or easily distributed across computing nodes without communication constraints, i.e., the data are centralized.
However, in a system with multiple clients (i.e., data holders), to ensure data centralization, clients have to upload local data to a centralized device (e.g., a central server) to conduct the centralized training described above. Despite the success of centralized training in various deep learning applications <cit.>, there is growing concern about data privacy and security, especially when the local data held by the clients are private or contain sensitive information. In particular, strict data regulations have been established to ensure data governance <cit.>.
To address the aforementioned concern, federated learning (FL) has been proposed <cit.>.
In a typical FL system, a central server <cit.> is responsible for aggregating and synchronizing model weights, while a set of clients manipulate multi-site data.
This facilitates data governance, as clients only exchange model weights or gradients with a central server instead of uploading local data to the central server, and has led to FL becoming a standardized solution to utilize multi-site data while preserving privacy.
Though FL perfectly implements data decentralization, a trustworthy central server is required in the system. In such a system design, the central server in fact has privileges over clients, as the central server determines the global aggregation and synchronization. If the central server is compromised or manipulated by a malicious party, the clients are vulnerable if the central server intentionally distributes problematic model updates. This can potentially increase the cost of system management and maintenance. Towards avoiding this single point of failure, many efforts have been made to decentralize the central server, and one particularly promising solution is to use a blockchain as decentralized storage <cit.>.
Originally proposed for cryptocurrencies, a blockchain is a distributed ledger that can record the state transition information among multiple parties <cit.>, without relying on a centralized server. Blockchain technology has gained widespread attention for its potential to revolutionize a variety of industries, such as finance <cit.>, healthcare <cit.>, and supply chain management <cit.>. By leveraging the decentralized nature of the blockchain, FL can benefit from increased security, privacy, and efficiency, as well as reduced reliance on centralized servers <cit.>. Concretely, in FL with blockchain, each client participating in the learning process uploads their local model updates to the blockchain, where they are stored in blocks, the metadata of a blockchain system. These blocks are then used to aggregate the local model updates into a global model, which can be downloaded by the clients. The use of blockchain smart contracts <cit.>, which are computer programs triggered by blockchain events, ensures that the global aggregation process is performed automatically and transparently, without the need for human intervention or centralized control.
Though integrating blockchain with existing FL systems can partially solve the threat to the central server, FL systems are still vulnerable to client-side malicious attacks <cit.>. In this work, we define malicious behaviors as actions that intentionally decrease the learning performance (e.g., accuracy and convergence) of the global model. The attackers can sabotage FL systems via attacks such as data poisoning <cit.> or model poisoning <cit.>. This work focuses on defending against client-side malicious attacks.
We propose a generic framework that can integrate an FL system with a blockchain system and can defend against malicious attacks. The proposed defense mechanism is motivated by proof-of-stake (PoS) <cit.>, a consensus mechanism in blockchain, and The Resistance <cit.>, a role-playing board game.
PoS has an incentive mechanism that encourages honest behavior by rewarding it and punishes dishonest behavior via slashing.
The Resistance, on the other hand, has two mismatched competing parties, where the party with a larger size is denoted as the resistance force and the other party is denoted as the spies. In The Resistance, there is a voting mechanism where, in each round, each player conducts independent reasoning and votes for a player, and the player with the highest votes will be deemed as a “spy” and kicked out of the game. The goal of the resistance force is to vote out all the spies while the spies aim to impersonate the resistance force and survive until the end. Based on these two concepts, this work proposes a novel majority-voting mechanism for global aggregation where each participating client independently validates the quality of aggregated local updates and votes for acceptance of the global update. The aggregation mechanism is stake-based where participating clients stake assets[In practice, the staked assets can be linked with cryptocurrency or real currency to increase the financial cost of malicious attacks.] or tokens (a quantitative measurement of the asset, which can be used to indicate the trustworthiness of the client in our system) for their own actions. There are two types of actions, proposing (uploading local updates) and voting. If the majority vote is to accept the global aggregation, a proposer will be refunded with its staked tokens and a voter who votes for acceptance will not only be refunded but also be rewarded with the staked tokens from the voters who vote for rejection, and vice versa. The overall procedure of the stake-based aggregation mechanism is illustrated in Fig. <ref>.
We evaluate the proposed framework on a practical financial problem, namely loan default prediction. We simulate the FL and blockchain environment for the Lending Club Kaggle challenge dataset to conduct experiments in a controllable setting and to provide insights into the problem of interest. We empirically show that an FL system can maintain robust performance under malicious attacks by introducing the proposed stake-based aggregation mechanism.
The contributions of this work are summarized as follows:
* We formulate the problem of decentralized federated learning with blockchain in the presence of malicious attacks.
* We propose a stake-based aggregation mechanism for federated learning systems that can defend against malicious attacks.
* We evaluate the robustness of the proposed framework in a simulated environment and provide initial empirical insights into the problem.
§ RELATED WORK
§.§ Federated Learning
The concept of FL comes from the necessity of on-device training, where the training data have to remain on the device <cit.>.
The clients of FL are distributed at different physical locations connected to the internet, which exposes a few security risks compared with distributed learning. First, as the local dataset belongs to the client, FL has to take the users' privacy into consideration. This can be addressed by integrating privacy-preserving techniques into FL, such as differential privacy <cit.>. Second, FL can be manipulated via internet access, e.g., the central server can be compromised by a third party. Third, a new client could participate in the federated training at any time if it meets the required criteria. This means that clients with malicious intentions can also join the federated system while meeting the initial criteria. This work focuses on the third risk, as the second risk is mitigated by replacing the central server with a blockchain. Traditional FL methods can only detect and defend against malicious clients by assigning small weights to malicious clients during global aggregation based on the divergence of learned parameters <cit.>, or by using unsupervised learning methods (e.g., anomaly detection <cit.> and clustering <cit.>) to filter out malicious clients. However, none of these methods tackles the second risk, and they do not consider the third. The proposed framework utilizes the blockchain to ensure the security of global aggregation and defends against client-side malicious behaviors with a novel majority-voting mechanism.
§.§ Blockchain
Blockchains refer to distributed ledgers that operate on a global peer-to-peer (P2P) network, as exemplified by popular cryptocurrencies such as Bitcoin <cit.> and Ethereum <cit.>. One of the defining characteristics of blockchain technology is the ability for users to freely join or leave the network, without a central authority in place to ensure common agreement on the distributed ledgers. Instead, users rely on consensus protocols <cit.>, such as proof-of-work (PoW) or PoS, to achieve agreement in a distributed setting.
As shown in Fig. <ref>, in a blockchain system, a transaction typically involves a sender who wishes to transfer a digital asset, such as a cryptocurrency, to a recipient. The sender initiates the transaction by creating a digital signature that includes the transaction details and the sender's private key, which is used to verify the sender's identity and authorize the transfer. The transaction is then broadcast over a P2P network to miners, who are participants in the network responsible for verifying and adding new blocks of transactions to the blockchain. Miners validate and confirm the transaction using consensus protocols, to ensure that the transaction is legitimate and not a duplicate or fraudulent transaction. Once confirmed, the transaction is added to a block, which is then cryptographically linked to the previous block using hash functions <cit.>, forming a chain of blocks (i.e., a blockchain). The block is then propagated to all the participants in the network, creating a decentralized, immutable record of the transaction. Finally, the recipient can access the digital asset by using their private key to authenticate their identity and claim ownership of the asset. The use of cryptography and consensus protocols ensures the security, transparency, and decentralization of the transaction process, making blockchain technology a promising solution for a variety of applications beyond cryptocurrency, e.g., insurance <cit.>, healthcare <cit.>, supply chain management <cit.>, energy <cit.>, and Internet of Things (IoT) <cit.>.
Another key feature of blockchain technology is the use of smart contracts <cit.>, which are quasi-Turing-complete programs that can be executed within a virtual machine. When a transaction is initiated, a smart contract is typically used to encode the terms and conditions of the transaction, such as the amount, currency, and time of transfer. The smart contract is then stored on the blockchain network and executed automatically when the predefined conditions are met. The execution of the smart contract verifies the transaction, ensuring that it meets the agreed-upon terms and conditions, and then automatically transfers the digital asset or currency to the recipient. Smart contracts can be leveraged to build a wide range of decentralized applications (DApps), such as decentralized finance (DeFi) services <cit.>.
§.§ Federated Learning with Blockchain
Traditional FL faces challenges <cit.>, such as privacy and security concerns, unreliable communication, and difficulty in reaching a consensus among the parties. Blockchain, on the other hand, provides a decentralized, secure, and transparent platform for data storage and sharing. This makes the use of blockchain for FL a promising direction to potentially address privacy and security concerns by allowing parties to keep their data private while still contributing to the training process. Additionally, blockchain can provide a secure communication channel for FL participants and ensure the integrity of the FL process.
Current blockchain-based FL designs <cit.> have been broadly used in diverse fields, including mobile edge
computing <cit.>, IoT <cit.> and distributed machine learning <cit.>. Despite the potential benefits of combining FL with blockchain, several challenges remain. For instance, FL systems are still vulnerable to client-side malicious attacks <cit.> and lack incentive-compatible mechanisms to motivate FL participants to behave honestly during the training process.
Multiple reputation-based incentive mechanisms <cit.> have recently been proposed to encourage participants and enhance model accuracy in blockchain-based FL. However, it remains unclear how to effectively utilize the blockchain infrastructure and leverage its inherent incentive mechanism (i.e., cryptocurrencies) to incentivize trustworthy FL behaviors and penalize malicious clients.
§ PROBLEM FORMULATION
This section introduces the problem of interest, the definition of the malicious behaviors considered, and the underlying assumptions in this work.
§.§ Setup
There are K > 1 clients in a federated system. Let 𝒦 = {1, 2, ⋯, K} denote the set of all clients. Let 𝒟_k denote the local data stored in client k; we have 𝒟_k ∩ 𝒟_l = ∅ for k ≠ l and k, l ∈ 𝒦. Each local dataset 𝒟_k can be randomly split into a training set and a test set, which are both private to client k. In addition to the K clients, a blockchain plays the role of a parameter server <cit.> for global aggregation. Let f_θ be the model of interest. In the parameter server, the parameter set θ_0^0 is randomly initialized at round 0, and the K clients download θ_0^0 from the blockchain as K local copies {θ_k^0}_k=1^K for full synchronization. During the federated optimization phase, a set 𝒦_p^t of clients is randomly selected for round t. For each k ∈ 𝒦_p^t, client k updates θ_k^t-1 by training on the training set of 𝒟_k independently for a number of local epochs. Then, the blockchain aggregates the updated {θ_k^t}_k ∈ 𝒦 collected from all the K clients to update θ_0^t. The K clients then synchronize with the parameter server, θ_k^t ← θ_0^t. To facilitate data governance, as required in, among others, the medical domain <cit.>, we assume that the patient's data (either raw data or encoded data) in a client cannot be uploaded to the blockchain or other clients; only parameters {θ_k}_k=0^K and metadata (e.g., the statistics of data) <cit.> can be exchanged between the blockchain and the clients.
It is worth mentioning that this work focuses on the interactions between FL and blockchain, where blockchain computing (or mining, in a more fashionable sense) and the application of additional privacy-preserving techniques <cit.> are considered orthogonal research directions and thus beyond the scope of this work.
§.§ Malicious Behaviors
The definition of malicious behavior in this work is an action that intentionally decreases the global model performance. There are two types of actions for each client that interacts with the federated system: a client can propose (i.e., be a proposer) and vote (i.e., be a voter). Proposing is to upload local model or gradient updates to the parameter server, while voting is a peer-review process to validate the “virtually” aggregated model updates. The technical details of the two actions are described in Sec. <ref>. There are thus two corresponding malicious behaviors. The first malicious behavior is to propose harmful local model updates and the second one is to vote dishonestly. More specifically, in the second case, a client votes for approval when it is aware that the proposed model updates are poisoned and votes for rejection when there is no evidence that indicates that the proposed model updates are poisoned. It is worth mentioning that the clients themselves might not intentionally attack the FL system, as they can be compromised by attackers. For simplicity, we define the clients that have malicious behaviors as malicious clients in this work, denoted as 𝒦_m. We use η to denote the ratio of malicious clients among all clients, η = |𝒦_m|/K, where |·| is the cardinality of a set.
§.§ Assumptions
There are six important assumptions in this work.
* A1: The goal of malicious behaviors is to decrease the global model performance. This is also reflected in Sec. <ref>. Under this assumption, behaviors that are harmful to the system but do not influence the global model performance are beyond the scope of discussion in this work. An example is eavesdropping, e.g., cloning the model specifications.
* A2: All clients are rational. This means that both honest and malicious clients expect to maximize their gain or minimize their loss while achieving their goals.
* A3: Following previous studies on blockchain <cit.>, we assume that η is strictly smaller than 50%. This means there are always more honest clients than malicious clients in a federated system.
* A4: There is no collusion among malicious clients. That is to say, each malicious client acts independently. In the application scenarios of this work (see Sec. <ref>), there is a minimal bar for a client to participate in the system, e.g., basic qualifications or industry standards for clinical or financial institutes. Meanwhile, it is difficult to compromise multiple clients with independent cybersecurity systems simultaneously. Thus, we deem that it is almost impossible to launch large-scale multi-agent attacks in the application scenarios of interest. As an exploratory study, this work considers the single-agent scenario. (See Sec. <ref> for the discussion on a multi-agent scenario.)
* A5: There is no capacity constraint on the hardware, including computing, communication, and storage, allowing us to solely focus on the algorithmic side of the problem.
* A6: The underlying blockchain of the FL system of interest is running securely with a consensus protocol that ensures the validity and integrity of transactions and blocks. While the security of the blockchain is crucial for the overall security of the FL system, addressing the malicious miners falls outside the scope of this study.
§ METHOD
§.§ Federated Aggregation
In this work, we illustrate the proposed framework in the context of the seminal FL method, FedAVG <cit.>. At the end of round t, the local models {θ_k^t}_k=1^K are uploaded and aggregated as a weighted average:
θ_0^t = ∑_k=1^K a_k θ_k^t,
where a_k = n_k/N. The metadata n_k = |𝒟_k| is the number of local training examples stored in client k and N = ∑_k=1^K n_k is the total number of training examples in the K clients.
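A minimal sketch of this weighted aggregation step is shown below; it is a generic FedAVG-style average over flattened parameter vectors with hypothetical variable names, not tied to any particular deep learning framework.

```python
import numpy as np

def fedavg_aggregate(local_params, local_sizes):
    """Weighted average of local model parameters (the equation above).

    local_params: list of 1-D numpy arrays, the flattened parameters
                  theta_k^t uploaded by the K clients.
    local_sizes:  list of ints, n_k = |D_k| for each client.
    """
    total = float(sum(local_sizes))                    # N = sum_k n_k
    weights = [n_k / total for n_k in local_sizes]     # a_k = n_k / N
    return sum(a_k * theta_k for a_k, theta_k in zip(weights, local_params))

# toy usage with three clients
thetas = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 300, 600]
print(fedavg_aggregate(thetas, sizes))   # -> weighted global parameters theta_0^t
```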
§.§ Local Validation
In contrast to standard FL algorithms, the aggregated global model is not recorded in a block directly. Instead, θ̃_0^t, a copy of θ_0^t, is downloaded by a randomly selected set of clients, denoted as voters, 𝒦_v^t. A voter k runs a local inference with θ̃_0^t on its local test set and outputs a local validation score. The local validation score s^t_k is a scalar, which can be linked with common metrics of ML tasks[For example, common evaluation metrics include accuracy for classification, mean Intersection over Union (mIOU) for semantic segmentation, and mean average precision (mAP) for object detection.]. If s^t_k is not lower than a threshold, the voter votes for accepting this aggregated model; otherwise, the voter votes against it. The threshold can be based on the validation score s^t-1_k acquired in the previous round. In the training of ML tasks, the scores can be volatile due to the characteristics of the tasks. Thus, a hyperparameter ϵ ∈ (0, 1) is introduced to control the tolerance of performance decrease in a single round. Mathematically, voter k casts the following vote:
v_k^t =
+1, if s^t_k ≥ (1 - ϵ) s^t-1_k,
-1, if s^t_k < (1 - ϵ) s^t-1_k.
It is worth mentioning that it is almost impossible for the attackers to manipulate scores by fooling all the randomly selected voters (e.g., via adversarial attacks <cit.>). According to A3, the majority of voters are honest. It is thus difficult to attack (either via data poisoning or model poisoning), as the validation set of each client is private.
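A minimal sketch of the local validation step, under the assumptions above and with hypothetical metric and variable names, could look as follows.

```python
def local_vote(model_score_fn, global_params, test_set, prev_score, eps=0.05):
    """Cast a local validation vote on the proposed global model.

    model_score_fn: callable(params, data) -> scalar validation score s_k^t
                    (e.g., accuracy on the client's private test split).
    prev_score:     the score s_k^{t-1} accepted in the previous round.
    eps:            tolerance for a single-round performance drop.
    Returns (vote, score) with vote in {+1, -1}.
    """
    score = model_score_fn(global_params, test_set)
    vote = +1 if score >= (1.0 - eps) * prev_score else -1
    return vote, score

# toy usage with a dummy scoring function
dummy_score = lambda params, data: 0.87
print(local_vote(dummy_score, None, None, prev_score=0.90))   # -> (1, 0.87)
```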
§.§ Majority Voting
The majority voting process for whether to apply the global aggregation operation at round t can be described below. Here, we use a binary variable a^t to denote the decision.
a^t = +1, if ∑_k ∈ 𝒦_v v_k > 0,
a^t = -1, if ∑_k ∈ 𝒦_v v_k ≤ 0.
If a^t = 1, the global aggregation will be finalized and recorded in the block; otherwise, the global aggregation will be discarded.
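The voting rule and the majority decision above can be sketched as follows (a simplified illustration; the scores stand for whatever local validation metric the task uses):

    def local_vote(score_t, score_prev, eps=0.05):
        # v_k^t = +1 if s_k^t >= (1 - eps) * s_k^{t-1}, and -1 otherwise
        return 1 if score_t >= (1.0 - eps) * score_prev else -1

    def majority_decision(votes):
        # a^t = +1 (finalize the aggregation) iff the votes sum to a strictly positive value
        return 1 if sum(votes) > 0 else -1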
§.§ Asset Redistribution
As there are two independent actions, there are two parallel reward-and-slash designs for proposing and voting. For both actions, the randomly selected proposers and voters are required to stake a fixed sum of tokens before they act. If some of these actors fail to stake (i.e., they do not have enough tokens left), they lose their access to the blockchain and are removed from the FL system permanently. Proposers are rewarded with tokens accumulated in an independent pool (if there are any tokens left in the pool) if the global aggregation is approved, and lose their stakes if the global aggregation is rejected. The reward-and-slash design for the proposers is illustrated in Algorithm <ref>. For the voters, the majority party will not only take back their stakes but also be rewarded with the staked tokens lost by the minority party. The reward-and-slash design for the voters is illustrated in Algorithm <ref>. In the following section, Sec. <ref>, we demonstrate that under the proposed design and assumptions in Sec. <ref>, malicious voters have no incentive to make dishonest votes.
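A compact sketch of the token redistribution in one round (stake sizes, the reward-pool handling, and all names here are illustrative assumptions, not the exact Algorithms referenced above):

    def redistribute(tokens, proposers, voters, votes, decision,
                     stake_p=4.0, stake_v=2.0, pool=None):
        # tokens: client_id -> balance; votes: voter_id -> +1/-1;
        # decision: +1 if the aggregation was accepted, -1 otherwise.
        for p in proposers:
            if decision == 1:
                if pool is not None and pool["balance"] >= stake_p:
                    pool["balance"] -= stake_p      # reward proposers from the pool, if funded
                    tokens[p] += stake_p
            else:
                tokens[p] -= stake_p                # slash proposers of a rejected aggregation
        majority = [v for v in voters if votes[v] == decision]
        minority = [v for v in voters if votes[v] != decision]
        for v in minority:
            tokens[v] -= stake_v                    # the minority party loses its stake
        if majority:
            bonus = stake_v * len(minority) / len(majority)
            for v in majority:
                tokens[v] += bonus                  # the majority splits the minority's stakes
        return tokens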
§.§ Theoretical Analysis on Malicious Votes
In this section, we theoretically show that malicious voters in the proposed framework have no incentive to make dishonest votes.
When all clients are rational and there is no collusion among malicious clients, a malicious client should not make a malicious vote.
Let 𝒦_v denote a randomly selected set of voters and n_v = |𝒦_v|. For client k ∈𝒦_v, let γ_v > 0 denote the staked tokens for voting, client k must stake γ_v to participate in the voting, otherwise, it will be removed from the system.
Under A4, each client makes an independent decision on voting. Let r be the ratio of malicious clients in 𝒦_v; there are r · n_v malicious clients in 𝒦_v and (1 - r) · n_v honest clients. If r · n_v < (1 - r) · n_v, i.e., r < 0.5, each malicious client will lose γ_v; if r · n_v > (1 - r) · n_v, i.e., r > 0.5, each malicious client will gain (1 - r) · n_v · γ_v / (r · n_v) = (1 - r)/r · γ_v. The expected return ℛ of a malicious client will be
ℛ = ∫_0^0.5 (-γ_v) dr + ∫_0.5^1 (1 - r)/r · γ_v dr
= -0.5 γ_v + [(ln(1) - 1) - (ln(0.5) - 0.5)] γ_v
= -(ln(0.5) + 1) γ_v < 0
Under A2, each client is rational. As ℛ < 0, in the long run, a malicious client will lose all tokens and be removed from the system. Hence, a rational client has no incentive to cast a dishonest vote, and all clients end up voting honestly.
Additionally, there is a game among malicious clients. As Eq. (<ref>) is known by all clients in advance. A malicious client can easily make an honest vote to gain tokens from other malicious clients who make dishonest votes. However, according to Nash Equilibrium <cit.>, we are certain that, in the long run, no malicious clients will make dishonest votes.
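The closed form of ℛ can be checked numerically (a quick sanity check, not part of the paper's artifacts):

    import numpy as np
    from scipy.integrate import quad

    gamma_v = 1.0
    loss, _ = quad(lambda r: -gamma_v, 0.0, 0.5)               # malicious voters in the minority lose the stake
    gain, _ = quad(lambda r: (1 - r) / r * gamma_v, 0.5, 1.0)  # in the majority, they split the honest stakes
    print(loss + gain)                      # approx -0.3069
    print(-(np.log(0.5) + 1.0) * gamma_v)   # closed form, same value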
Theorem <ref> will further be empirically validated in Sec. <ref>.
In practice, A4 can be relaxed to the case where multiple malicious clients work together to attack the FL system. If A2 holds, we are certain that the malicious voters will reach a consensus internally before they act, in order to win the majority vote. Intuitively, all malicious voters can then be treated as a single group. In this case, this "group" will behave exactly as the single malicious client in Theorem <ref> based on the same reasoning. The proof is omitted.
§.§ Training
Each round consists of the following steps: proposer selection, local training, global aggregation, local validation, majority voting, token redistribution, and block creation (recording state[For example, the state can record the global model and tokens of each client.] information). The above steps are repeated in multiple rounds until certain stopping criteria are fulfilled. The complete training process is depicted in Fig. <ref>. The stopping criterion could be a fixed number of training epochs, which is commonly adopted in ML.
§ EXPERIMENTS
§.§ Experimental Setup
We evaluate the proposed framework in a simulated environment.
§.§.§ Data and Task
We consider a standard classification task, namely loan default prediction. We use the Kaggle Lending Club dataset[<https://www.kaggle.com/datasets/wordsforthewise/lending-club>] to simulate a realistic financial application scenario.
We pre-process the raw dataset by dropping all entries with missing values. For the labels, we only keep “Fully Paid” and “Charged Off” to simplify the task as a binary classification task.
We randomly select 80% of the data as the training set and use the rest of the data as the test set. The training set is split into K subsets of equal size and distributed across K clients. Within each client, 20% of the local data are randomly selected as the validation set.
§.§.§ Implementation
There are K = 50 clients in the system and each client is initialized with 64 tokens. We use a 3-layer multi-layer perceptron (MLP) as the network backbone. Apart from the last layer, each layer of the MLP has 128 hidden nodes. We use a standard Adam <cit.> optimizer with a fixed learning rate of 10^-3 and batch size 128. No data augmentation is applied. We use the binary accuracy as both the local validation score and evaluation metric. We consider a simple data poisoning attack <cit.>, where malicious clients are trained to confuse the model. All baselines are implemented in PyTorch 1.12.1 <cit.> on one NVIDIA Tesla T4 GPU. We leverage Ethereum smart contracts to deploy our reward-and-slash design in a private blockchain and simulate the training process using the Python library web3.py [<https://web3py.readthedocs.io/en/v5/>]. We set ϵ = 0.05 based on empirical experience.[We notice that too small ϵ can cause large oscillation, which slows the convergence, and too large ϵ can facilitate the convergence at the expense of decreased detection performance, i.e., the system fails to remove the majority of malicious clients.]
§.§.§ Baselines
We consider 4 baselines. The first one is an Oracle approach, a centralized baseline without malicious attacks. The Oracle should provide the upper-bound performance of the experiment. The second one is FedAVG without malicious attacks (denoted as FedAVG w/o mal), which is equivalent to FedAVG under η = 0 and should provide the upper-bound performance for a decentralized environment. The third one is FedAVG under malicious attacks (denoted as FedAVG w/ mal), where η of clients are malicious. The fourth one is the proposed method, FedAVG with blockchain under malicious attacks (denoted as FedAVG w/ block). For FL baselines, 10% of clients are randomly selected to perform local training at each epoch. For FedAVG w/ block, we simply use the remaining 90% of the clients as voters.
§.§ Results
§.§.§ Empirical Analysis on Malicious Voters
To empirically validate the theoretical result in Sec. <ref>, we first simulate a hypothetical scenario where there are only honest proposers. As there are more honest proposers than malicious proposers at each round on average, the effect of malicious weights can be seen as slowing the convergence and decreasing the global performance, which will be validated in Sec. <ref>. Note that, due to A4, we further simplify the scenario to focus on the behavior of malicious voters. As shown in Fig. <ref>, for the set of values of the voter-slashing hyperparameter γ_r ∈ {2, 4, 8, 16, 32}, the malicious voters are eliminated from the system shortly (e.g., their average tokens decline to 0 within ≈ 40 epochs).
§.§.§ Comparison with Baselines
Following Theorem <ref> and Sec. <ref>, we now are certain that there will be no de facto malicious voters. Thus, in the following experiments, we focus on the scenarios where malicious clients only upload harmful weights but make honest votes. We evaluate the proposed framework against the baselines described in Sec. <ref>. We provide the learning curves in Fig. <ref> and the accuracy for all four approaches after convergence (the mean accuracy of the last 50 epochs) in Tab. <ref>. The performance of FedAVG w/ block is competitive with FedAVG w/o mal ( η = 0) and consistently outperforms FedAVG w/ mal. As η increases, the performance of FedAVG w/ mal decreases significantly, with a larger standard deviation and increased instability. In contrast, FedAVG w/ block maintains robust performance, with only slightly lower results compared to FedAVG w/o mal.
§.§.§ Analysis of Token Distributions
Fig. <ref>, <ref> and <ref> depict the average tokens remaining in honest and malicious proposers during the FL training process. We observe that honest proposers gradually accumulate more tokens while malicious proposers have fewer tokens over sufficient training epochs. Eventually, most malicious proposers lose the ability to participate in staking and are removed from the FL system, as their remaining tokens are insufficient. This meets the expectations of our system design.
§.§.§ Survival Analysis of Clients
As shown in Fig. <ref>, the anticipated survival time of malicious proposers experiences a decrease as γ_p increases. This effect can be attributed to the incentive mechanism in place, whereby a higher value of γ_p results in a greater penalty for proposers who act maliciously.
Fig. <ref> shows the survival time of honest proposers under different values of γ_p and exhibits noteworthy behavior. In cases where the malicious ratio η is high, the expected survival time of honest proposers may decrease with a large γ_p. This is due to the fact that, in each epoch, all randomly selected proposers will be slashed if the performance of the aggregated global model does not show improvement. Therefore, it is worth noting that balancing the token slashing parameter γ_p is crucial, because setting an excessively high value can harm honest proposers, whereas a small value can lead to slow convergence (see Fig. <ref>).
§.§.§ Sensitivity to Malicious Client Ratio
The results presented in Fig. <ref> demonstrate the robustness of our proposed method, FedAVG w/ block, against different malicious client ratios, as its performance remains unaffected even under large η values. However, it is important to note that the malicious client ratio can impact the token distribution and survival time of clients. Specifically, when there are more malicious clients present in the system, honest clients tend to accumulate more assets on average ( Fig. <ref> - <ref>). Nevertheless, they also face a higher risk of being slashed during an epoch, which can ultimately shorten their survival time ( Fig. <ref> - <ref>).
§.§.§ Limitations
In this work, as the experimental results aim to evaluate the robustness of the proposed framework, several practical challenges are simplified, e.g., staleness <cit.>, storage, and privacy <cit.>. Further, the proposed method requires more computational power than traditional methods due to mining (blockchain computing) and voting. Finally, large models have gained in popularity in practical applications, e.g., ViT <cit.> and GPT-3 <cit.>. This raises the question of how to efficiently handle on-chain aggregation for large models. Future work thus will aim to address these limitations to facilitate the research and development of FL with blockchain.
§ CONCLUSION
In this work, we explore an under-explored research direction, namely using FL and blockchain to defend against malicious behaviors. The defense mechanism is twofold. We use on-chain smart contracts to replace the traditional central server and propose a stake-based majority voting mechanism to detect client-side malicious behaviors. We not only provide a solution to the problem of interest, an emerging direction on trustworthy ML, but also show the robustness of the proposed method and provide the first empirical understanding of the problem.
§ ACKNOWLEDGMENT
The authors would like to thank Shuhao Zheng from the School of Computer Science, McGill University for the discussion in the early stage.
IEEEtran
|
http://arxiv.org/abs/2307.05511v1
|
20230705014408
|
Opinions with few disciples can win in the dynamical directed networks: an evolutionary game perspective
|
[
"Yakun Wang",
"Bin Wu"
] |
physics.soc-ph
|
[
"physics.soc-ph"
] |
Opinions with few disciples can win in the dynamical directed networks: an evolutionary game perspective
[email protected]
School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China.
The voter model on networks is crucial to understand opinion formation. Uni-directional social interactions are ubiquitous in real social networks whereas undirected interactions are intensively studied. We establish a voter model on a dynamical directed network. We show that the opinion invasion is captured by a replicator equation of an emergent four-player two-strategy game, and the average in(out)-degree for the two opinions is fully captured by an emergent three-player two-strategy game. Interestingly, it is shown that the difference between the two emergent games arises from the uni-directionality of the network. The difference implies that the opinion with a small number of disciples can take over the population for in-group bias, provided that the network is directed. Our work makes an explicit connection between opinion dynamics and evolutionary games.
§ INTRODUCTION
Opinion dynamics have become attractive in diverse disciplines, such as statistical physics, control theory and system science <cit.>. Two main topics of opinion dynamics are how opinions reach a consensus and how opinions coexist for a long time. The voter model is one of the classical models <cit.>. It is a discrete opinion dynamics model in which an individual adopts an opinion with a probability proportional to the fraction of that opinion in its neighborhood. Besides opinion dynamics, the voter model has various applications in many fields, such as epidemic spreading <cit.>, catalytic reactions in chemistry <cit.> and prey-predator interaction in biology <cit.>.
Individual interactions in opinion dynamics are typically captured by networks. The real-world networks are dynamical, rather than static <cit.>. The researches on the co-evolutionary dynamics of opinions and networks have been well thorough <cit.>. A simple model with a single parameter controlling the balance of the two dynamics is built to investigate the opinion formation <cit.>. The modified model exhibits complicated topological behaviors via introducing heterophily <cit.>. One individual can rewire to an individual chosen at random from those with the same opinion or from the whole network. The rewire-to-same and rewire-to-random models have different phase transitions <cit.>. Master equation approximation, pair approximation and heterogeneous mean-field are well-known approaches to capture the opinion dynamics on the networks <cit.>. But all of these works explicitly assume that the networks are bi-directional.
Unidirectional social interactions are ubiquitous in the real world. For example, a user follows another user on Twitter based on a common interest, and this following relationship is asymmetric <cit.>: Sally enjoys Pilates, so she follows the blogger Jessica, who teaches Pilates online. But Jessica does not follow Sally. In the US National Longitudinal Study of Adolescent Health (the “AddHealth” study), high school students were asked to identify their friends within the school. More than half of the friendships are found to be unidirectional. Lisa considering Cindy to be her friend does not imply that Cindy considers Lisa to be her friend <cit.>. A large number of biological systems also have unidirectional interactions. For example in a wolf pack, wolves in general are subservient to the alpha wolf and their socialization is strictly one-way <cit.>. Directed dynamic networks are also widely present in the field of engineering <cit.>. We concentrate on the unidirectional nature of the network <cit.> besides the dynamic nature of the social network.
In this paper, we establish a voter model on a dynamical directed network <cit.>. Each node in the network represents an individual, and each directed link represents a directed social relationship. We are to address two questions, i.e., fate of opinions and transient topology. It is found that the fate of opinions is captured by an emergent four-player two-strategy game. The expectation of in(out)-degree for the two opinions is captured by an emergent three-player two-strategy game. The two emergent games are typically different for directed networks, which facilitates us to explain some counterintuitive phenomena.
§ MODEL
Initially, the whole population of size N are situated on nodes of a regular directed graph. Each node has L incoming edges and L outgoing edges, as shown in Fig. [tu1]1(a). The total number of directed links is thus NL. We assume that N ≫ L. It implies that each individual has a limited number of neighbors compared with the population size, which is ubiquitous in social networks. There are two opinions, denoted as + and -, respectively. Each individual holds one type of opinion and we denote XY as the type of the directed link, where XY ∈ {++, +-, -+, --} ≜ S.
Here we propose a voter model on the evolving directed network. In the network, we define the direction of “learning”: for example, if node B points to node A, it implies that B unilaterally learns from A and A does not learn from B. In other words, the source node plays the role of a student to learn the target node who plays the role of a teacher, as shown in Fig. [tu1]1(b). For a node, it has a student set and a teacher set. The student set is composed of the source nodes on the edges that flow into the node, and the teacher set is composed of the target nodes on the edges that flow out from the node.
Each individual has an opportunity to either update its opinion with probability w or update its link with probability 1 - w at each time step, which is shown in Fig. [tu2]2. When w = 1, the social links between individuals are invariant, i.e., individuals only update their opinions. It refers to the opinion dynamics on a static directed network <cit.>. When w = 0, the social network evolves all the time whereas the fractions of opinions are constant.
For opinion dynamics, we focus on the voter model <cit.>. An individual is randomly selected from the population. The probability that the selected individual adopts opinion + is proportional to the number of teachers with opinion + in its teacher set. In other words, the selected individual adopts opinion + with probability Q_ +/( Q_ + + Q_ -), where Q_ ± refers to the number of its teachers whose opinion is ±. It is notable that if the teacher set of the selected node is empty, then the individual has no teachers to learn from and keeps the opinion.
For linking dynamics, our model focuses on the updating of directed links. The whole network is adjusted by at most one directed link at each time step. There are three steps as follows.
(i) Selecting a directed link. A directed link XY is randomly selected from all the directed links. The directed link XY corresponds to the student X and the teacher Y, where XY∈ S.
(ii) Selecting X or Y. X is selected with probability α _XY, where 0<α_XY<1. Otherwise Y is selected with probability β_XY. We have α _XY + β_XY = 1.
(iii) Breaking the directed link. The XY breaks off with a pre-defined probability k_XY, where 0<k_XY<1. It implies that if the student X(teacher Y) is selected, then X(Y) would like to break the directed link with probability k_XY to change the current teacher(student).
(iv) Rewiring the node. If student X is selected and the XY is broken, then X will find a new teacher who is neither in X's current teacher set nor in X's current student set. If the teacher Y is selected and the XY is broken, then Y will teach a new student who is neither in Y's current teacher set nor in Y's current student set.
Notably, the number of teachers in the entire population is constant, since the sum of out-degrees of all the nodes in the network keeps unchanged over time.
§ EMERGENT GAMES FOR THE FATE OF OPINIONS
The voter model on the evolving network is a Markov chain with state x_+, i.e., the fraction of opinion + in the population. Thus, the state space is {0, 1/N, 2/N, ⋯, 1}. State 0 and state 1 are absorbing states, which implies that all the individuals reach a consensus. We focus on w ≪ 1. In this case, individuals prefer to adjust their social relationships rather than change their opinions. This is widespread in real social systems. For example, users on Twitter change their opinions much less frequently than adjust their followers <cit.>. It leads to a time scale separation, that is, all the directed links are almost in the stationary regime when the opinion update occurs (see [Appendix A]Appendix A for details).
For the evolutionary dynamics of opinions, x_+ either increases or decreases by 1/N within a time step. For example, x_+ increases by 1/N if an individual who adopts opinion - is selected with probability x_- = 1 - x_+, i.e., the fraction of opinion - in the population. Then the focal individual with opinion - learns from its teachers with opinion +. And it adopts opinion + with a probability proportional to the number of its teachers with opinion +, i.e., qπ_-+ / (qπ_-+ + qπ_--) = π_-+ / (π_-+ + π_--).
Here q is the average size of the teacher set captured by the average out-degree of the focal individual. Thus the transition probability that x_ + increases by 1 / N is
T_x_+^+ = x_- π_-+ / (π_-+ + π_--).
Similarly, the transition probability that x_+ decreases by 1/N is
T_x_+^- = x_+ π_+- / (π_++ + π_+-).
The probability that x_+ remains constant is T_x_+^0 = 1 - T_x_+^+ - T_x_+^-, since each row sum of the transition probability matrix is unit one.
For large population size, i.e., N → +∞, the mean-field equation is given by ẋ_+ = T_x_+^+ - T_x_+^-, capturing the evolution of the opinions. Taking Eqs. [eq.1](1), [eq.2](2) yields
ẋ_+ = x_+ x_- [ k_-- (α_+- β_++ x_+ + α_-- β_+- x_-) A^-1(x_+) - k_++ (α_++ β_-+ x_+ + α_-+ β_-- x_-) B^-1(x_+) ],
where both
A(x_+) = k_-- α_+- β_++ x_+^2 + [ β_+- (k_-+ α_++ + k_-- α_--) + k_-+ α_-+ (α_+- - α_++) ] x_+ x_- + k_-+ α_-+ β_+- x_-^2
and
B(x_+) = k_+- α_+- β_-+ x_+^2 + [ β_-+ (k_++ α_++ + k_+- α_--) + k_+- α_+- (α_-+ - α_--) ] x_+ x_- + k_++ α_-+ β_-- x_-^2
are positive, provided that ∀ α_XY, β_XY, k_XY ∈ (0,1), XY ∈ S (see Supplemental Material for more details). It implies that the opinions are driven by the probability of breaking directed links k_XY and the probability of choosing the student to reconnect α_XY. Multiplying by C(x_+) = A(x_+) B(x_+) k_++^-1 k_--^-1, which is positive, on the right side does not alter the asymptotic dynamics, i.e., the fixed points and their stability. We end up with the equation
ẋ_+ = x_+ x_- [ (u_1 x_+^3 + u_2 x_+^2 x_- + u_3 x_+ x_-^2 + u_4 x_-^3) - (v_1 x_+^3 + v_2 x_+^2 x_- + v_3 x_+ x_-^2 + v_4 x_-^3) ],
where
u_1 = (k_+-/k_++) α_+-^2 β_++ β_-+,
u_2 = α_++ α_+- β_++ β_-+ + (k_+-/k_++)[ α_+- α_-- β_+- (β_++ + β_-+) + α_+- α_-+ β_++ (α_+- - α_--) ],
u_3 = α_++ α_-- β_+- β_-+ + α_+- α_-+ β_++ β_-- + (k_+-/k_++)[ α_--^2 β_+-^2 + α_-+ α_-- β_+- (α_+- - α_--) ],
u_4 = α_-+ α_-- β_+- β_--,
and
v_1 = α_++ α_+- β_++ β_-+,
v_2 = α_++ α_-- β_+- β_-+ + α_+- α_-+ β_++ β_-- + (k_-+/k_--)[ α_++^2 β_-+^2 + α_++ α_+- β_-+ (α_-+ - α_++) ],
v_3 = α_-+ α_-- β_+- β_-- + (k_-+/k_--)[ α_++ α_-+ β_-+ (β_+- + β_--) + α_+- α_-+ β_-- (α_-+ - α_++) ],
v_4 = (k_-+/k_--) α_-+^2 β_+- β_--.
Eq. [eq.3](3) is a replicator equation whose payoff matrix is given by [table 1]Table 1.
Let f_ +( x_ +) = u_1x_ +^3 + u_2x_ +^2x_ - + u_3x_ +x_ -^2 + u_4x_ -^3 which refers to the average payoff of opinion + and f_ -( x_ +) = v_1x_ +^3 + v_2x_ +^2x_ - + v_3x_ +x_ -^2 + v_4x_ -^3 which refers to the average payoff of opinion -. This implies for large population size, the voting behavior on the directed dynamical network is captured by the replicator equation of a four-player two-strategy game with payoff matrix [table 1]Table 1 in the well-mixed population. For example, the payoff of an individual with opinion + is u_1 if the focal individual interacts with three individuals with opinion +. There are eight parameters in our model, i.e., α_XY and k_XY, where XY∈ S.
§.§ Emergent two-player games: predicting bistability and coexistence of opinions
In the linking dynamics, we have two classes of parameters, i.e., the probability of choosing source nodes α_XY and the probability of breaking directed links k_XY. We analyze the fate of the opinions with the two classes of parameters, respectively.
§.§.§ The same probability of choosing source nodes
We assume that the probabilities of rewiring nodes are equal, i.e., there exists an α∈( 0,1) such that α_X Y = α, where XY∈ S. Substituting it into [table 1]Table 1, we obtain
( u_1, u_2/3, u_3/3, u_4 )^T = [ α^2 (1 - α)^2 / 3 ] ( 3 0; 2 1; 1 2; 0 3 ) · ( k_+-/k_++, 1 )^T
and
( v_1, v_2/3, v_3/3, v_4 )^T = [ α^2 (1 - α)^2 / 3 ] ( 3 0; 2 1; 1 2; 0 3 ) · ( 1, k_-+/k_-- )^T.
It implies that, for example, the payoff of one individual + who meets three other individuals with opinion + in the four-player game is equal to the sum of the payoff of one individual + who meets one individual + in a two-player game, i.e., u_1 = α^2 (1 - α)^2 k_+-/k_++. Therefore, the four-player two-strategy game degenerates to the two-player two-strategy game, whose payoff matrix is
M_opinion = ( k_+-/k_++   1 ; 1   k_-+/k_-- ),
with rows and columns ordered as (+, -).
The emergent payoff matrix is independent of α. Intuitively, the payoff of an individual + against an individual + is proportional to k_+-/k_++. If k_+- is increased solely, then the number of students with opinion + who learn opinion - decreases. A part of these students reconnect to new teachers with opinion + and adopt opinion +. Hence the proportion of opinion + increases.
In-group bias is a common phenomenon in the real world, which implies that individuals prefer to interact with those who take the same opinion <cit.>. It can lead to consensus in the population. That is to say, individuals tend to have the same opinion with in-group bias. In our model, in-group bias corresponds to k _ + - > k _ + + and k _ - + > k _ - -. Students who adopt different opinions from their teachers are more likely to break the directed links than those who adopt the same opinions. The emergent payoff matrix in this case is a coordination game. There is only one internal equilibrium of the replicator equation and it is unstable. Thus all the individuals adopt opinion + if the initial fraction of opinion + exceeds
x_opinion +^* = ( k_-+/k_-- - 1 ) / ( k_+-/k_++ + k_-+/k_-- - 2 ).
Otherwise, all the individuals reach a consensus on opinion -. It prevents the homogenization of opinions.
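As a concrete illustration (numbers chosen arbitrarily for in-group bias): if k_+-/k_++ = 2 and k_-+/k_-- = 3, then x_opinion +^* = (3 - 1)/(2 + 3 - 2) = 2/3, so opinion + takes over only if it initially holds more than two thirds of the population.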
The out-group bias implies that individuals prefer to interact with those who adopt different opinions <cit.>. In a large campaign, it is important that the chiefs focus on how to convert voters from the other camp to their own. Out-group bias in our model refers to k _ + - < k _ + + and k _ - + < k _ - -. The payoff matrix refers to a coexistence game. Standard analysis shows that there is only one internal stable equilibrium x_ opinion 1pt +^ * of the replicator equation. In other words, opinion + and opinion - coexist if they coexist in the beginning. The network has many directed links with inconsistent opinions, i.e., + - and - +. Based on stable regimes, if k _ + + is decreasing or k _ - - is increasing, then the final fraction of opinion + increases [Figs. [tu3]3(a) and [tu3]3(b)]. Other cases are listed in Supplemental Material. Therefore, if the chiefs with opinion + would like to increase the size of their camp, then it can be achieved by decreasing k _ + + or increasing k _ - -. That is to say, increasing the number of students on the opinion + or decreasing the number of students on the opposite side.
§.§.§ The same probability of breaking directed links
We assume that the probabilities of breaking directed links are equal, i.e., there exists a k ∈ (0,1) such that k_XY = k, where XY ∈ S. It implies that the type of the directed links is not taken into account when the links are broken. Substituting k_XY = k into Eq. [eq.3](3), we find
ẋ_+ = D(x_+) x_+ x_- [ (α_+- β_-+ x_+ + α_-- β_+- x_-) - (α_++ β_-+ x_+ + α_-+ β_+- x_-) ],
where D(x_+) = β_++ α_+- x_+^2 + (α_++ β_-+ + α_-- β_+-) x_+ x_- + α_-+ β_-- x_-^2 is positive. Similarly, we end up with a replicator equation, i.e.,
ẋ_+ = x_+ x_- [ (α_+- β_-+ x_+ + α_-- β_+- x_-) - (α_++ β_-+ x_+ + α_-+ β_+- x_-) ],
whose payoff matrix is the two-player two-strategy game
R_opinion = ( α_+- β_-+   α_-- β_+- ; α_++ β_-+   α_-+ β_+- ),
with rows and columns ordered as (+, -).
Noteworthily, the emergent payoff matrix is independent of k, and the payoff entry R_XY is proportional to l_YX, i.e., the number of directed links YX. For example, the payoff of an individual + meeting an individual - is proportional to l_-+, which refers to the number of students - who have teachers with opinion +. Here is an intuitive explanation: if α_-- increases solely, then a part of the students with opinion - reconnect to new teachers with opinion +. Hence l_-+ increases.
Similarly, we discuss the following two cases. We address a coordination game with α _ + - > α _ + + and α _ - + > α _ - -. In this scenario, there is an unstable internal equilibrium given by
y_opinion +^* = β_+- (α_-- - α_-+) / [ β_-+ (α_++ - α_+-) + β_+- (α_-- - α_-+) ].
The individuals reach a consensus with opinion + if the initial fraction of opinion + exceeds y_ opinion 1pt +^ *. Otherwise, it reaches a consensus with opinion -.
We study a coexistence game defined by α_+- < α_++ and α_-+ < α_--. In this case, opinion + and opinion - coexist for a long time if they coexist in the beginning. If α_++ is decreasing or α_-- is increasing, then the final fraction of opinion + increases [Figs. [tu3]3(c) and [tu3]3(d)]. Other cases are discussed in the Supplemental Material.
§.§ Emergent multi-player games: complexity analysis
In subsection A, the four-player two-strategy game degenerates to the two-player two-strategy game provided that there are α∈( 0,1) and k ∈( 0,1) such that α_X Y = α or k_X Y = k for ∀XY∈ S. But what is the complexity of our model? If u_1>v_1, u_2<v_2, u_3>v_3 and u_4<v_4 (or u_1<v_1, u_2>v_2, u_3<v_3 and u_4>v_4) are satisfied in [table 1]Table 1, f_ + ( x_ +) - f_ - ( x_ +) changes the sign three times with respect to x_ + when non-zero coefficients are arranged from highest to lowest according to the power of x_ +. Based on Descartes’ rule of signs <cit.>, there are one or three roots, i.e., one internal equilibrium or three internal equilibria. We choose one parameter at random from α_X Y and k_X Y respectively and make them equal. And we keep the other six parameters equal. We prove that it does not satisfy the condition of changing the sign three times (See Supplemental Material for details). Thus, to reveal the complexity, more parameters are needed to be unequal.
We find a set of parameters, i.e., k_ + + = ρ ,k_ + - = ρ ,k_ - + = ρ/4,k_ - - = ρ ,α _ + + = ρ/2,α _ + - = ρ ,α _ - + = 2ρ and α _ - - = ρ/4, where 0 < ρ < 0.5. These eight parameters are only up to ρ. There are three internal equilibria under the condition ( 21 - √(249))/32 ≈ 0.1631 < ρ < 0.5, where u_1>v_1, u_2<v_2, u_3>v_3 and u_4<v_4. For example, substituting ρ = 0.4 into [table 1]Table 1, we obtain
This four-player two-strategy game has three internal equilibria, i.e., x_opinion +^* = 0.29, 0.5 and 0.89, as shown in Fig. [tu4]4, and x_opinion +^* = 0.5 is the only stable internal equilibrium. In this case, depending on the initial condition, the population either maintains diverse opinions or reaches a consensus. Therefore, the complexity of the voter model on the directed evolving network is captured by the four-player two-strategy game.
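This counting can be checked numerically. The sketch below evaluates the coefficients of Table 1 from the expressions listed after Eq. (3) for the parameter set above with ρ = 0.4 and locates the sign changes of f_+ - f_-; it should recover the three equilibria quoted above (an independent check, not the authors' code):

    import numpy as np

    rho = 0.4
    k = {'++': rho, '+-': rho, '-+': rho / 4, '--': rho}
    a = {'++': rho / 2, '+-': rho, '-+': 2 * rho, '--': rho / 4}
    b = {s: 1 - a[s] for s in a}                     # beta_XY = 1 - alpha_XY

    u = [(k['+-'] / k['++']) * a['+-']**2 * b['++'] * b['-+'],
         a['++'] * a['+-'] * b['++'] * b['-+'] + (k['+-'] / k['++']) * (
             a['+-'] * a['--'] * b['+-'] * (b['++'] + b['-+'])
             + a['+-'] * a['-+'] * b['++'] * (a['+-'] - a['--'])),
         a['++'] * a['--'] * b['+-'] * b['-+'] + a['+-'] * a['-+'] * b['++'] * b['--']
             + (k['+-'] / k['++']) * (a['--']**2 * b['+-']**2
                                      + a['-+'] * a['--'] * b['+-'] * (a['+-'] - a['--'])),
         a['-+'] * a['--'] * b['+-'] * b['--']]
    v = [a['++'] * a['+-'] * b['++'] * b['-+'],
         a['++'] * a['--'] * b['+-'] * b['-+'] + a['+-'] * a['-+'] * b['++'] * b['--']
             + (k['-+'] / k['--']) * (a['++']**2 * b['-+']**2
                                      + a['++'] * a['+-'] * b['-+'] * (a['-+'] - a['++'])),
         a['-+'] * a['--'] * b['+-'] * b['--'] + (k['-+'] / k['--']) * (
             a['++'] * a['-+'] * b['-+'] * (b['+-'] + b['--'])
             + a['+-'] * a['-+'] * b['--'] * (a['-+'] - a['++'])),
         (k['-+'] / k['--']) * a['-+']**2 * b['+-'] * b['--']]

    def payoff_diff(x):
        # f_+(x) - f_-(x) with x_+ = x and x_- = 1 - x
        return sum((u[i] - v[i]) * x**(3 - i) * (1 - x)**i for i in range(4))

    xs = np.linspace(1e-3, 1 - 1e-3, 100000)
    vals = payoff_diff(xs)
    roots = xs[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
    print(np.round(roots, 2))   # expected: [0.29 0.5  0.89]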
§.§ Robustness
We exchange the direction of learning in the network. For example, if node B points to node A, it implies that A unilaterally learns from B and B does not learn from A, that is, the target node learns from the source node. Therefore, the transition probability that x_+ increases by 1/N is T_x_+^+ = x_- π_+- / (π_+- + π_--), and the transition probability that x_+ decreases by 1/N is T_x_+^- = x_+ π_-+ / (π_++ + π_-+). In this case, we obtain some dual results. Similarly, the voting behavior on the evolving directed network is captured by a four-player two-strategy game whose payoff matrix is given by [table 3]Table 3, where
u_1' = (k_-+/k_++) α_++ α_+- β_-+^2,
u_2' = α_++ α_+- β_++ β_-+ + (k_-+/k_++)[ α_+- α_-+ β_-+ β_-- + α_++ α_-- β_+- β_-+ + α_++ α_-+ β_-+ (α_+- - α_--) ],
u_3' = α_+- α_-+ β_++ β_-- + α_++ α_-- β_+- β_-+ + (k_-+/k_++)[ α_-+ α_-- β_+- β_-- + α_-+^2 β_-- (α_+- - α_--) ],
u_4' = α_-+ α_-- β_+- β_--,
and
v_1' = α_++ α_+- β_++ β_-+,
v_2' = α_++ α_-- β_+- β_-+ + α_+- α_-+ β_++ β_-- + (k_+-/k_--)[ α_++ α_+- β_++ β_-+ + α_+-^2 β_++ (α_-+ - α_++) ],
v_3' = α_-+ α_-- β_+- β_-- + (k_+-/k_--)[ α_+- α_-+ β_++ β_+- + α_++ α_-- β_+- β_-+ + α_+- α_-- β_+- (α_-+ - α_++) ],
v_4' = (k_+-/k_--) α_-+ α_-- β_+-^2.
The four-player two-strategy game degenerates to the two-player two-strategy game if α_X Y = α, where XY∈ S and 0<α<1. The payoff matrix is
M_opinion_dual = ( k_-+/k_++   1 ; 1   k_+-/k_-- ),
with rows and columns ordered as (+, -).
And if k_X Y = k, where XY∈ S and 0<k<1, the payoff matrix is
R_opinion_dual = ( α_++ α_+-   α_+- α_-+ ; α_+- α_-+   α_-+ α_-- ),
with rows and columns ordered as (+, -).
§ EMERGENT GAMES FOR THE TRANSIENT TOPOLOGY DURING THE OPINION FORMATION
In the preceding section, we focus on the fate of opinions. Here we address the other side of the coin, i.e., the transient property of the evolving networks.
What are the key topology features that pave the way for the successful invasion? In our model, the in-degree of an individual is equal to its student size, and the out-degree is equal to its teacher size. In the voter model, teachers preach their opinions and students adopt the popular opinions. The in-degree, i.e., the teacher's student size is crucial for spreading the teacher's opinions. Hence, we concentrate on the in-degree.
Suppose there is an individual, named Sally. Without loss of generality, she adopts opinion + and has in-degree d_in+, i.e., she has d_in+ students. The in-degree d_in+ of Sally ranges from 0 to N-1. For our linking dynamics, Sally's in-degree increases or decreases by at most one. If an individual who is not Sally's current student reconnects to her, Sally's in-degree d_in+ increases by one: firstly, the probability of selecting a directed link XY that does not point to Sally is (NL - d_in+)/NL, where XY ∈ S. Secondly, the stationary distribution of the directed links is π_S = (π_++, π_+-, π_-+, π_--), which has been given by Eq. [eq.A.4](A.4). Then student X is chosen with probability α_XY and breaks the directed link XY with probability k_XY. Finally, student X connects to Sally with probability 1/(N - 1). Thus the transition probability that d_in+ increases by one is
P_d_in+^+ = [ (NL - d_in+)/NL ] × π_S · ( α_++ k_++, α_+- k_+-, α_-+ k_-+, α_-- k_-- )^T × [ 1/(N - 1) ],
where the first factor selects a link that does not point to Sally, the middle factor breaks it, and the last factor rewires the freed student to Sally.
On the other hand, Sally is not reconnected provided that her student breaks the selected link. In this case, Sally has one less student. Hence, the transition probability that d_ in + decreases by one is
P_d_in+^- = ( d_in+ / NL ) × [ π_++ α_++ k_++ / (π_++ + π_-+) + π_-+ α_-+ k_-+ / (π_++ + π_-+) ] × 1,
where the first factor selects a link pointing to Sally, the bracketed factor breaks it, and the freed student then rewires to some other node.
And P_d_ in +^0 = 1 - P_d_ in +^ + - P_d_ in +^ - [Fig. [tu5]5].
The one-step transition matrix P of the Markov process is thus obtained. The Markov chain is aperiodic and irreducible, thus ergodic. Hence it has a unique stationary distribution Ξ _D = ( ξ _0,ξ _1,ξ _2, ⋯ξ _N - 1) which is determined by Ξ _DP = Ξ _D <cit.>. Based on <cit.>, the stationary distribution is given by
ξ_j = [ (P_0^+ / P_j^-) ∏_i=1^j-1 (P_i^+ / P_i^-) ] / [ 1 + ∑_k=1^N-1 (P_0^+ / P_k^-) ∏_i=1^k-1 (P_i^+ / P_i^-) ],   1 ≤ j ≤ N - 1,
where the empty product is one, that is, ∏_i=1^0 P_i^+ / P_i^- = 1. For j = 0, we have ξ_0 = ( 1 + ∑_k=1^N-1 (P_0^+ / P_k^-) ∏_i=1^k-1 P_i^+ / P_i^- )^-1. When the population size is infinitely large, i.e., N → ∞, we show that the in-degree follows the Poisson distribution (see more details in Supplemental Material). For the average in-degree of opinion +, we have E(d_in+) = L U_+, where L is the average in-degree of the network and
U_+ = [ π_S · ( α_++ k_++, α_+- k_+-, α_-+ k_-+, α_-- k_-- )^T ] / [ π_++ α_++ k_++ / (π_++ + π_-+) + π_-+ α_-+ k_-+ / (π_++ + π_-+) ].
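The stationary in-degree distribution can be evaluated directly from this birth-death structure (a sketch; P_plus and P_minus stand for the transition probabilities P^±_d_in+ of Eqs. [p_d_in+](10) and [p_d_in-](11), supplied as functions of the in-degree):

    import numpy as np

    def stationary_indegree(P_plus, P_minus, N):
        # xi_j / xi_0 = prod over i of P_plus(i) / P_minus(i+1), which matches the closed form above
        ratios = np.array([P_plus(d) / P_minus(d + 1) for d in range(N - 1)])
        weights = np.concatenate(([1.0], np.cumprod(ratios)))
        return weights / weights.sum()

For the present model, P_plus and P_minus would implement the two expressions above using the stationary link distribution π_S.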
Interestingly, U_+ = g_+ / (x_+ g_+ + x_- g_-), where g_+ (g_-) is regarded as the payoff of opinion + (-). Hence, the expectation of the in-degree for the two opinions is fully captured by an emergent three-player two-strategy game (see Supplemental Material for details), whose payoff table is given in [table 4]Table 4, where
a_1 = α_+- β_-+ / k_++,  b_1 = α_++ β_-+ / k_+-,
a_2 = α_+- β_++ / k_-+ + [ α_-+ (α_+- - α_--) + α_-- β_+- ] / k_++,
b_2 = α_-+ β_-- / k_+- + [ α_-+ (α_+- - α_++) + α_++ β_+- ] / k_--,
a_3 = α_-- β_+- / k_-+,  b_3 = α_-+ β_+- / k_--.
The Nash equilibrium of the emergent game is the transient topology, at which the two opinions have the same student size [Fig. [tu6]6]. If the payoff of opinion + is larger than the payoff of opinion - for [table 4]Table 4, then the average in-degree of opinion + is greater than that of opinion -.
We now consider the case where the four probabilities of breaking links are the same, i.e., k_XY = k with XY ∈ S and 0 < k < 1, and concentrate on in-group bias. Noteworthily, Eq. [p_d_in+](10) and Eq. [p_d_in-](11) are approximations, because Sally's out-degree, i.e., her set of teachers, is neglected and bidirectional links are not excluded. In spite of this error, the simulated in-degree distribution agrees very well with the theoretical approximation, both for the opinion in the majority and for the opinion in the minority [Figs. [tu7]7(a) and [tu7]7(b)]. Intuitively, here N ≫ L, i.e., the total number of individuals is much larger than the number of students of any individual, which is close to reality. Thus, each node almost obeys the same in-degree distribution and each update is approximately independent. Hence, these approximations are acceptable. For the completeness of our study, we show the corresponding results for the out-degree (see Supplemental Material for details).
§.§ Emergent two-player games: the student size
We focus on two classes of breaking patterns, i.e., the probability of choosing nodes α_XY and the probability of breaking directed links k_XY. If there exists 0<α<1, s.t., α _XY = α, the three-player game degenerates to the two-player game
M_in degree = ( 1/k_++   1/k_-+ ; 1/k_+-   1/k_-- ),
with rows and columns ordered as (+, -).
We obtain an internal equilibrium x_in degree +^* for in-group bias:
x_in degree +^* = ( 1/k_-- - 1/k_-+ ) / ( 1/k_++ - 1/k_+- - 1/k_-+ + 1/k_-- ).
The equilibrium is a Nash equilibrium of the emergent game Eq. [eq.14](14). It refers to a topology in which opinion + has as many students as opinion - does [Fig. [tu6]6]. For in-group bias, if x_ + > x_ in degree 1pt + ^ *, the average degree of opinion + is larger than opinion -'s. It implies that more students learn opinion +. Otherwise, the average degree of opinion - is larger.
Since the emergent games M_opinion, i.e., Eq. [eq.4](4) and M_in degree, i.e., Eq. [eq.14](14) are not equal, we cannot capture both the opinion formation and the transient topology with just one emergent game. Thus, here are some counterintuitive cases. For in-group bias, k _ + - > k _ + + and k _ - + > k _ - -, if the initial proportion of opinion + is larger than x_ opinion 1pt + ^ *, then opinion + is likely to take over. For k_ + - > k_ - +, we have x_opinion 1pt + ^ * < x_in degree 1pt + ^ *. If the initial fraction of opinion + is between x_ opinion 1pt + ^ * and x_ in degree 1pt + ^ *, then opinion + invades successfully in the end, even if more students learn the opinion - than opinion + in the beginning [Fig. [tu8]8(a)]. Similarly, if k_ + - < k_ - +, we have x_in degree 1pt + ^ * < x_opinion 1pt + ^ *. And if the initial fraction of opinion + is between x_ in degree 1pt + ^ * and x_ opinion 1pt + ^ *, then opinion + invades unsuccessfully eventually, even if more students learn the opinion + than the opinion - in the beginning [Fig. [tu8]8(b)]. It implies that the opinion with few students is likely to invade successfully. Hence, the student size is not the indicator of the successful invasion, which is counterintuitive.
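As a concrete numerical illustration of this effect (parameter values chosen arbitrarily, with in-group bias and k_+- > k_-+; a sketch, not taken from the paper):

    k = {'++': 0.2, '+-': 0.8, '-+': 0.4, '--': 0.2}   # in-group bias with k_+- > k_-+

    x_opinion = (k['-+'] / k['--'] - 1) / (k['+-'] / k['++'] + k['-+'] / k['--'] - 2)
    x_indegree = (1 / k['--'] - 1 / k['-+']) / (1 / k['++'] - 1 / k['+-'] - 1 / k['-+'] + 1 / k['--'])

    print(x_opinion, x_indegree)   # 0.25 and 0.4: an initial fraction of opinion + in (0.25, 0.4)
                                   # wins the whole population despite starting with fewer students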
If there is 0<k<1, s.t., k _XY = k, then the degenerated payoff matrix is
R_in degree = ( α_+- β_-+   α_-- β_+- ; α_++ β_-+   α_-+ β_+- ),
with rows and columns ordered as (+, -).
R_ in degree is the same as R_ opinion, i.e., Eq. [eq.6](6). It implies that the internal equilibrium y_in degree 1pt + ^ * is equal to y_ opinion 1pt +^ *. For the in-group bias, if the initial fraction of opinion + is larger than y_ opinion 1pt +^ *, then more students learn the opinion + than the opinion - and the opinion + invades successfully. It implies that the student size is the indicator of the successful invasion in this case. We draw the directed network topology [Fig. [tu7]7(c)]. If the proportion of one opinion is quite small, then the in-degree of the opinion is small.
§.§ An emergent three-player game for the student size: complexity analysis
Some of the three-player games may be expanded by the two-player games. We take the number of internal equilibria of the replicated equation as the true complexity of our model. Based on Descartes’ rule of signs <cit.>, if a_1>b_1, a_2<b_2 and a_3>b_3 (or a_1<b_1, a_2>b_2 and a_3<b_3 ) are satisfied in [table 4]Table 4, the three-player two-strategy game has at most two internal equilibria. At the equilibria, the in-degree of opinion + and that of opinion - are equal. To verify whether the same parameters simultaneously lead to three internal equilibria in a four-player two-strategy game and two internal equilibria in a three-player two-strategy game, we take the set of parameters, i.e., k_ + + = ρ ,k_ + - = ρ ,k_ - + = ρ/4,k_ - - = ρ ,α _ + + = ρ/2,α _ + - = ρ ,α _ - + = 2ρ and α _ - - = ρ/4, where 0 < ρ < 0.5 into [table 4]Table 4. However, there is only one internal equilibrium. This emergent three-player game differs in complexity from the four-player game to predict the fate of opinions. We show that the four-player two-strategy game with three internal equilibria and the three-player two-strategy game with two internal equilibria cannot occur at the same time (Supplemental Material). It indicates that the complexity of the two emergent games is different and we can not use the same emergent game to describe both the fate of opinions and the transient topology except some special cases, i.e., k _XY = k, where 0<k<1. We find a new set of parameters in which the three-player two-strategy game has two internal equilibria, as shown in Supplemental Material.
§ CONCLUSION AND DISCUSSION
Evolutionary game theory is a powerful mathematical framework to explore how individuals adjust their strategies, provided that the game interactions are given in prior <cit.>. Both opinion dynamics and evolutionary game dynamics have been benefited from the statistical physics method, yet they are treated as two distinct fields. We show that opinion dynamics is equivalent to the evolutionary games, both opinion wise and network wise. We focus on a voter model on an evolving directed network without any game interactions. We have shown that the fate of opinions is captured by a replicator equation of an emergent four-player two-strategy game. The complexity of the fate of opinions is thus the same as the classic evolutionary four-player two-strategy game. It has at most three internal equilibria. This equivalence result explicitly captures how opinions reach a consensus and how opinions coexist for a long time, which are the two main questions in opinion dynamics. On the other hand, we show that the transient topology is fully captured by an emergent three-player two-strategy game. Thus it has at most two internal equilibria. The Nash equilibrium of the emergent game is the transient topology, at which the two opinions have the same student size. We obtain the in(out)-degree distribution, which is typically challenging in previous works. This equivalence result explicitly tells who has how many neighbors during the opinion formation. Thus it demonstrates the transient topology during opinion formation.
The emergent games degenerate to two-player two-strategy games, if the type of directed links is not considered when selecting an individual or initiating breaking the link, i.e., α _XY = α or k _XY = k, where 0<α<1, 0<k<1 and XY∈ S. If we focus on the bi-directionality and set α _XY = α = 1/2, the emergent game which captures the fate of opinions, i.e., Eq. [eq.4](4) is equivalent to <cit.> where networks are undirected yet dynamical. For in-group bias, individuals can reach a consensus. For out-group bias, opinions can coexist if opinions coexist in the beginning. Furthermore, the condition α _XY = α can be relaxed to α _ + + = α _ - + = γ _1 and α _ + - = α _ - - = γ _2, where 0<γ _1, γ _2<1. For example, if the teachers have the same opinion +, then their students have the same probability of being selected, i.e., α _ + + = α _ - + = γ _1. We have
M_opinion_new = ( γ_2 k_+- / (γ_1 k_++)   1 ; 1   γ_1 k_-+ / (γ_2 k_--) )
and
M_in degree_new = ( γ_2/k_++   γ_2/k_-+ ; γ_1/k_+-   γ_1/k_-- ),
with rows and columns ordered as (+, -).
If γ _1 = γ _2, then M_ opinion = M_ opinion_new and M_in degree = M_in degree_new.
We reveal a counterintuitive phenomenon with the aid of the two different emergent games, i.e., M_opinion [Eq. [eq.4](4)] and M_in degree [Eq. [eq.14](14)]. Intuitively, if the number of disciples of opinion + is larger than the opinion -, then opinion + is learned by more students, hence the fraction of opinion + increases and opinion + can take over the whole population. However, we show that the number of disciples is not the key to the success of the invasion. An opinion with a smaller student size can succeed in the population. Noteworthily, if k_ + - = k_ - + = k, where 0<k<1, we have M_opinion = k · M_in degree. It implies that one emergent game is sufficient to capture both the fate of opinions and the transient topology. We also show M_in degree is the same as M_out degree in this case (See Supplemental Material). It implies that the average in-degree is equal to the average out-degree, i.e., one individual has the same number of students and teachers on average. It mirrors an undirected-like network. In other words, if we do not distinguish + - and - +, the network has symmetric-like properties in a statistical sense although it is still a directed network. Furthermore, the number of students with popular opinions is not higher than that with non-popular opinions, whereas opinion leaders play a decisive role in static networks <cit.>. It implies that undirected and directed networks are fundamentally different.
Clustering is believed to play a crucial role in complex systems <cit.>. However, we find that if individuals with opinion + gather together, the opinion + does not necessarily invade successfully. It implies that the clustering of individuals with the same opinions is not the key to a successful invasion in the dynamical directed network (see more details in Supplemental Material).
To sum up, our work bridges the gap between the opinion dynamics and evolutionary game theory. Via the bridge, we are able to predict both the fate of opinions and the transient topology from a game perspective.
§ ACKNOWLEDGMENTS
We gratefully acknowledge Xunlong Wang, who inspired us to find that the in-degree follows the Poisson distribution in the infinitely large population size limit. This work was supported by NSFC Grant No. 61751301.
§ LINKING DYNAMICS
Here the number of directed links NL is constant. Each directed link
i ( i = 1,2, ⋯ ,NL) is selected with probability 1/NL. In time t, we randomly select a directed link i^t = i. If the selected i^t does not break, then we have i^t + 1 = i^t. Otherwise, a new directed link is introduced, denoted as i^t + 1. We denote the type of directed edge of i^t by T( i^t), where
T( i^t) ∈ S.
The linking dynamics is captured by Markov chain with transition matrix Q_ ( AB ) ( CD ), which is the probability that link AB transforms to link CD in one time step. For instance, Q_ ( + - ) ( + + ) is the probability that i^t of type + - transforms to i^t + 1 of type + +. In this case, one of the following two cases occurs:
(1) i^t is not selected (with probability ( NL - 1)/NL).
(2) i^t is selected (with probability 1/NL). Then, either the original + - link is not broken (with probability 1 - k_ + -) or the selected student with opinion + reconnects a new teacher with opinion + when the original + - link is broken (with probability k_ + - α _ + - x_ +, where x_ + is the fraction of opinion +). Hence,
Q_(+-)(++) = (NL - 1)/NL + (1/NL) ( 1 - k_+- + k_+- α_+- x_+ ).
And x_ -=1-x_ + is the fraction of opinion -. The transition probability matrix is given by
Q = [ (NL - 1)/NL ] I_4 + (1/NL) V,
where I_4 is the identity matrix and the matrix V is given by Eq. [pingwenfenbu](A.3).
V =
( 1 - k_++ + k_++ x_+        k_++ α_++ x_-                        k_++ β_++ x_-                        0
  k_+- α_+- x_+              1 - k_+- α_+- x_+ - k_+- β_+- x_-    0                                    k_+- β_+- x_-
  k_-+ β_-+ x_+              0                                    1 - k_-+ β_-+ x_+ - k_-+ α_-+ x_-    k_-+ α_-+ x_-
  0                          k_-- β_-- x_+                        k_-- α_-- x_+                        1 - k_-- + k_-- x_- ),
with rows and columns ordered as (++, +-, -+, --).
The matrix V is an approximation because it is possible that an individual reconnects its student set or teacher set of individuals. Since the population size is much larger than the average degree of the nodes, i.e., N ≫ L, the approximation is completely acceptable.
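The linking dynamics can be checked numerically; the sketch below builds V and extracts π_S as the left eigenvector of Q (equivalently of V) with eigenvalue one (illustrative code, not the authors'):

    import numpy as np

    def V_matrix(x, a, k):
        # Transition matrix among link types ordered (++, +-, -+, --); beta = 1 - alpha.
        b = {s: 1 - a[s] for s in a}
        y = 1 - x                                   # fraction of opinion -
        return np.array([
            [1 - k['++'] + k['++'] * x, k['++'] * a['++'] * y, k['++'] * b['++'] * y, 0],
            [k['+-'] * a['+-'] * x, 1 - k['+-'] * a['+-'] * x - k['+-'] * b['+-'] * y, 0, k['+-'] * b['+-'] * y],
            [k['-+'] * b['-+'] * x, 0, 1 - k['-+'] * b['-+'] * x - k['-+'] * a['-+'] * y, k['-+'] * a['-+'] * y],
            [0, k['--'] * b['--'] * x, k['--'] * a['--'] * x, 1 - k['--'] + k['--'] * y],
        ])

    def stationary_links(x, a, k):
        # Solve pi_S V = pi_S: left eigenvector of V for eigenvalue 1, normalized to sum to one.
        w, vl = np.linalg.eig(V_matrix(x, a, k).T)
        pi = np.real(vl[:, np.argmin(np.abs(w - 1))])
        return pi / pi.sum()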
The state space of the Markov chain is S. If k_++ k_+- k_-+ k_-- x_+ x_- ≠ 0, there is a unique stationary distribution π_S = (π_++, π_+-, π_-+, π_--) determined by the equation π_S Q = π_S. We find that
π_S = N(x_+) ( (x_+^2 / k_++) [ x_+ α_+- β_-+ + x_- α_-- β_+- + x_- α_-+ (α_+- - α_--) ],
               (x_+ x_- / k_+-) [ x_+ α_++ β_-+ + x_- α_-+ β_-- ],
               (x_+ x_- / k_-+) [ x_+ α_+- β_++ + x_- α_-- β_+- ],
               (x_-^2 / k_--) [ x_+ α_++ β_-+ + x_- α_-+ β_+- + x_+ α_+- (α_-+ - α_++) ] ),
where
N(x_+) = [ x_+^2 ( α_-- β_-+ x_- + α_+- ( (α_-+ - α_--) x_- + β_-+ x_+ ) ) / k_++ + x_+ x_- ( α_++ β_-+ x_+ + α_-+ β_-- x_- ) / k_+- + x_+ x_- ( α_+- β_++ x_+ + α_-- β_+- x_- ) / k_-+ + x_-^2 ( α_++ β_+- x_+ + α_-+ ( (α_+- - α_++) x_+ + β_+- x_- ) ) / k_-- ]^-1 > 0
is a normalization factor. Here π_XY refers to the probability that a directed link i is of type XY in the stationary regime.
|
http://arxiv.org/abs/2307.03178v1
|
20230706175723
|
Engineering non-Hermitian Second Order Topological Insulator in Quasicrystals
|
[
"Chakradhar Rangi",
"Ka-Ming Tam",
"Juana Moreno"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
[email protected]
Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA
Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA
Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803, USA
Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA
Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803, USA
Non-Hermitian topological phases have gained immense attention due to their potential to unlock novel features beyond Hermitian bounds. PT-symmetric (Parity Time-reversal symmetric) non-Hermitian models have been studied extensively over the past decade. In recent years, the topological properties of general non-Hermitian models, regardless of the balance between gains and losses, have also attracted vast attention. Here we propose a non-Hermitian second-order topological (SOT) insulator that hosts gapless corner states on a two-dimensional quasi-crystalline lattice (QL).
We first construct a non-Hermitian extension of the Bernevig-Hughes-Zhang (BHZ) model on a QL generated by the Ammann-Beenker (AB) tiling. This model has a real spectrum and supports helical edge states. Corner states emerge by adding a proper Wilson mass term that gaps out the edge states. We propose two variations of the mass term that result in fascinating characteristics. In the first variation, we obtain a purely real spectrum for the second-order topological phase. In the latter, we get a complex spectrum with corner states localized at only two corners. Our findings pave a path to engineering exotic SOT phases where corner states can be localized at designated corners.
Engineering non-Hermitian Second Order Topological Insulator in Quasicrystals
Juana Moreno
August 1, 2023
=============================================================================
§ INTRODUCTION
Non-Hermitian topological phases are an exotic array of states which represent a rapidly evolving field of study within condensed matter physics, optical science, and engineering <cit.>. While conventional Hermitian systems have long been the focus of research <cit.>, the exploration of non-Hermitian phenomena has gained significant attention in recent years. This has been motivated by viable theoretical and experimental platforms for realizing these exotic phases, such as Weyl semimetals <cit.>, models of finite quasiparticle lifetimes <cit.>, optical and mechanical systems subjected to gains and losses <cit.>, electrical circuits <cit.>, and even biological systems <cit.>.
While the interplay of non-Hermiticity and topology has extended the understanding of their Hermitian counterparts, the non-Hermitian topological phases exhibit novel and richer features with no Hermitian counterparts. Some of the prominent examples include the existence of exceptional points (EPs) where more than one eigenstate coalesces <cit.>, and the bi-orthogonal bulk-boundary correspondence accompanied by non-Hermitian skin effects <cit.>. These systems also extend the general symmetry classification of topological phases <cit.>.
Building upon the concept of topological insulators (TIs), the notion of Hermitian higher-order topological insulators (HOTIs) has been proposed <cit.>. Unlike conventional TIs, HOTIs have gapless states on lower-dimensional boundaries. For example, a second-order topological insulator (SOTI) in two dimensions hosts gapless corner modes, while a TI has gapless states on the whole boundary. Over the past few years, HOTIs have been discovered in aperiodic quasi-crystalline and amorphous systems <cit.>, expanding our understanding of topological phases in unconventional systems.
Recently, Tao Liu et al. provided a framework to investigate non-Hermitian physics in HOTIs <cit.>. They showed that 2D (3D) non-Hermitian crystalline insulators could host topologically protected second-order corner modes and, in contrast to their Hermitian counterpart, the gapless states can be localized only at one corner.
Motivated by these studies, we address whether it is possible to realize non-Hermitian HOTIs (NH-HOTIs) on quasicrystalline lattices (QLs). If these NH-HOTIs can be realized on QLs, is it possible to control and engineer them? In this work, we investigate non-Hermitian HOTIs on a 2D quasicrystalline square lattice generated by the Ammann-Beenker tiling pattern. We start with a non-Hermitian extension of the Bernevig-Hughes-Zhang (BHZ) model on a 2D quasicrystal respecting pseudo-hermiticity and reciprocity. We consider two variations of the Wilson-mass term to gap out the edge states, resulting in corner states. Interestingly, we find that the NH-HOTI phase has a purely real spectrum in one case. The real spectrum of a non-Hermitian Hamiltonian is crucial in the context of dynamic stability. In contrast, we obtain a complex spectrum in the second case but observe unconventional phases where the corner modes can be localized at only one or two corners. This finding allows us to lay out a simple numerical approximation to understand and engineer the location of corner states.
The paper is organized as follows. In Sec. <ref>, we define a non-Hermitian extension of the BHZ model that supports quantum spin Hall states (QSH) on a 2D quasicrystalline lattice. We consider two different mass terms that are added to this model. We analyze the spectrum and the resulting corner states of those models in Sec. <ref>. In Sec. <ref>, we compute the topological phase diagram and comment on the reality of the spectra. Sec. <ref> provides a summary and discussion.
§ MASS TERM INDUCED CORNER MODES IN NON-HERMITIAN BHZ MODEL ON QL
§.§ Model
Inspired by the non-Hermitian extension of the BHZ model on a square lattice <cit.>, we define a non-Hermitian BHZ (NH-BHZ) Hamiltonian on a 2D quasi-crystalline lattice. We consider a tight-binding non-Hermitian Hamiltonian on a 2D QL generated by the Ammann-Beenker (AB) tiling pattern, where the plane is tiled using squares and rhombi. Each lattice site consists of two orbitals. The second quantized Hamiltonian is given by
NH-BHZ = ∑_m≠ nĉ^†_mH_mnĉ_n + ∑_n ĉ^†_nH_n ĉ_n,
where ĉ^†_n = (ĉ^†_nα↑,ĉ^†_nα↓,ĉ^†_nβ↑,ĉ^†_nβ↓) denotes the electron creation operator on site n; α and β denote the orbital degrees of freedom at a given lattice site, and ↑ and ↓ represent the spin degrees of freedom. The hopping part of the Hamiltonian is
H_mn = - f(r_mn)/2[it_1(σ_3τ_1cosϕ_mn + σ_0τ_2sinϕ_mn)
+ t_2 σ_0τ_3 - γσ_1τ_1cosϕ_mn].
Here t_1 and t_2 are hopping amplitudes. The function f(r_mn) ≡Θ(r_c - r_mn)exp(1-r_mn/ξ) denotes the spatial decay factor of the hopping amplitude, with ξ the decay length and r_mn = |𝐫_m-𝐫_n|. The factor Θ(r_c - r_mn) introduces a hard cut-off, r_c, for the hopping. σ_i and τ_i (i=1,2,3) represent the Pauli matrices acting on the spin and orbital sectors, respectively, and σ_0 and τ_0 are the 2 × 2 identity matrices. ϕ_mn represents the polar angle made by the bond between sites m and n with respect to the horizontal direction, as shown in Fig. <ref> <cit.>.
As the factor cos(ϕ_mn) picks up a negative sign under m↔ n, the last term in the above equation is the non-Hermitian part of the Hamiltonian. Consequently, γ denotes the non-Hermitian strength. Physically, this results in an asymmetric hopping in our model.
The onsite term is given by
H_n = (M+2t_2)σ_0τ_3,
where M denotes the Dirac mass. Due to the distinction between conjugation and transposition in non-Hermitian Hamiltonians, non-Hermiticity ramifies the internal symmetries, extending the ten-fold Altland-Zirnbauer (AZ) symmetry classification of Hermitian systems to 38 symmetry classes <cit.>. The Hamiltonian in Eq. (<ref>) respects a variant of time-reversal symmetry in non-Hermitian systems known as reciprocity <cit.>, 𝒯≡ iσ_2τ_0, as well as pseudo-hermiticity, η≡σ_3τ_0:
𝒯H^T𝒯^-1 = H, 𝒯𝒯^* = -1;
η H^†η^-1 = H, η = η^-1.
where H^T and H^† denote the transposed and the Hermitian-conjugated Hamiltonian, respectively. A detailed symmetry analysis is carried out in Appendix A.
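The tight-binding blocks above translate directly into a small numerical routine. The following is a minimal sketch (not the authors' code) of how H_mn and H_n could be assembled in Python/NumPy for a given list of Ammann-Beenker site coordinates; the cut-off value r_c and the Kronecker-product ordering are illustrative assumptions.

```python
import numpy as np

# Pauli matrices; the same set is reused for the spin (sigma) and orbital (tau) sectors.
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def st(i, j):
    # sigma_i tau_j as a 4x4 block; with the basis (alpha-up, alpha-dn, beta-up, beta-dn)
    # the orbital (tau) index is the slow one, so the Kronecker order is tau (x) sigma.
    return np.kron(s[j], s[i])

def hopping_block(r_m, r_n, t1=1.0, t2=1.0, gamma=0.5, xi=1.0, rc=2.0):
    """4x4 hopping block H_mn between sites m and n of the quasicrystal."""
    d = np.asarray(r_n, dtype=float) - np.asarray(r_m, dtype=float)
    r = np.linalg.norm(d)
    if r == 0.0 or r > rc:                      # hard cut-off Theta(r_c - r_mn)
        return np.zeros((4, 4), dtype=complex)
    f = np.exp(1.0 - r / xi)                    # spatial decay factor f(r_mn)
    phi = np.arctan2(d[1], d[0])                # polar angle phi_mn of the bond
    return -0.5 * f * (1j * t1 * (st(3, 1) * np.cos(phi) + st(0, 2) * np.sin(phi))
                       + t2 * st(0, 3)
                       - gamma * st(1, 1) * np.cos(phi))   # non-Hermitian part

def build_hamiltonian(sites, M=1.0, t2=1.0, **kw):
    """Assemble the full 4N x 4N NH-BHZ matrix for a list of site coordinates."""
    N = len(sites)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    for m in range(N):
        H[4*m:4*m+4, 4*m:4*m+4] = (M + 2 * t2) * st(0, 3)   # onsite term H_n
        for n in range(N):
            if n != m:
                H[4*m:4*m+4, 4*n:4*n+4] = hopping_block(sites[m], sites[n], t2=t2, **kw)
    return H
```

Diagonalizing the resulting non-Hermitian matrix with np.linalg.eigvals and inspecting np.abs(evals.imag).max() then provides a direct test of the reality of the spectrum discussed below.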
§.§ Spectrum and Corner States
To obtain the quantum spin Hall (QSH) states and the spectrum, we diagonalize the 4N × 4N Hamiltonian NH-BHZ (Eq. (<ref>)) defined on the QL under open boundary conditions (OBC), with N denoting the number of sites and the following values for the parameters: t_1 = t_2 = 1.0, M = 1.0, ξ = 1.0 and γ = 0.5. The spectrum and the probability distribution of the edge states are plotted in the top panels of Fig. <ref>. The bulk states are marked in blue as opposed to the in-gap states in red. A striking feature is that the spectrum is completely real, as evident from the imaginary part of the spectrum in Fig. 3(b).
The presence of pseudo-hermiticity symmetry, η, ensures that the bulk spectrum is real <cit.>. In addition, the combination of reciprocity and pseudo-hermiticity makes the edge-state spectrum real as well <cit.>. In non-Hermitian systems, the reciprocity symmetry also leads to Kramers degeneracy <cit.>. The inset of Fig. <ref>(a) shows a few in-gap states that are doubly degenerate as a consequence. These in-gap states live on the edges of the QL, as indicated by the normalized probability density of a typical in-gap state displayed in Fig. 3(c).
Now that we have designed a non-Hermitian QSH insulator on a QL, let us introduce a mass term in the Hamiltonian to gap out the in-gap states and obtain corner modes following the prescription given in Refs. <cit.>. We define:
M = ∑_m ≠ nĉ^†_m ( f(r_mn)/2g σ_2τ_1 cos(2ϕ_mn)) ĉ_n,
where g is the magnitude of the Wilson mass and physically represents a hopping amplitude. Thus, the total Hamiltonian of a non-Hermitian second-order TI is
NH-SOTI = NH-BHZ + M.
The mass term M breaks the reciprocity symmetry and pseudo-hermiticity but preserves the chiral symmetry 𝒮 = 𝒯𝒞, whose non-Hermitian version is defined as:
𝒮 H^†𝒮^-1 = -H.
Additional details on the symmetry analysis are provided in Appendix A.
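For completeness, a hypothetical sketch of the Wilson-mass block of Eq. (<ref>) is given below, reusing the helpers from the previous sketch; adding the resulting matrix to the NH-BHZ matrix yields NH-SOTI.

```python
def mass_block(r_m, r_n, g=1.0, xi=1.0, rc=2.0):
    """4x4 Wilson-mass block (g/2) f(r_mn) cos(2 phi_mn) sigma_2 tau_1."""
    d = np.asarray(r_n, dtype=float) - np.asarray(r_m, dtype=float)
    r = np.linalg.norm(d)
    if r == 0.0 or r > rc:
        return np.zeros((4, 4), dtype=complex)
    f = np.exp(1.0 - r / xi)
    phi = np.arctan2(d[1], d[0])
    # swapping st(2, 1) -> st(1, 1) gives the second mass term M' discussed below
    return 0.5 * f * g * np.cos(2.0 * phi) * st(2, 1)

def build_mass_matrix(sites, **kw):
    N = len(sites)
    M_mat = np.zeros((4 * N, 4 * N), dtype=complex)
    for m in range(N):
        for n in range(N):
            if n != m:
                M_mat[4*m:4*m+4, 4*n:4*n+4] = mass_block(sites[m], sites[n], **kw)
    return M_mat

# H_NH_SOTI = build_hamiltonian(sites) + build_mass_matrix(sites, g=1.0)
```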
We diagonalize the Hamiltonian NH-SOTI with g=1 and the same set of parameters we used for NH-BHZ
(Eq. (<ref>)). The spectrum is plotted in Figs. 3(d) and 3(e). In Fig. 3(d), we see that the in-gap states are gapped out, and four zero-energy modes (ZEMs) appear. An interesting feature is that the imaginary part of the spectrum is again zero. The corresponding ZEMs live on the corners of the QL, as evident from the probability distribution in Fig. 3(f).
The appearance of these corner modes can be attributed to the generalized Jackiw-Rebbi (JR) index theorem <cit.>, where the Wilson mass changes its sign.
To understand this, let us assume each edge of the QL to form a long bond <cit.>. Since the mass term depends on the polar angle ϕ_mn, we can compare the angle each bond makes with the horizontal and obtain the sign of the term cos(2ϕ_mn). This is illustrated in Fig. 4(b), where the edges of the QL are approximated by a square. Panel 4(a) shows a circular chart, where the colors represent the sign of the mass term as a function of the polar angle of the edge, θ_edge.
For example, the top right localized state in panel (b) will be formed by electrons flowing towards the right at the top horizontal edge and moving up along the right vertical edge. The top right horizontal edge forms an angle θ_edge=0, while the upper right vertical edge forms an angle θ_edge=π/2 with the horizontal axis.
The label on each color section in Fig. 4(a) represents the values of θ_edge at which the mass term changes sign. Effectively, the mass term cos(2θ_edge) distinguishes the positive region, θ_edge∈ [3π/4,5π/4] ⋃ [7π/4,π/4], from the negative region, θ_edge∈ [π/4,3π/4] ⋃ [5π/4,7π/4].
At each corner of Fig. 4(b), the adjacent sides of the square pass through orange (positive mass) and purple (negative mass) regions, indicating a mass domain wall and hence a localized state.
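The sign-change argument can be made concrete with a few lines of code. Assuming the square approximation of the boundary sketched in Fig. 4(b) (the edge-angle assignment below is our own illustration), one can tabulate the sign of cos(2θ_edge) for adjacent edges and flag every corner where a mass domain wall appears.

```python
import numpy as np

# polar angles of the four square-approximated edges (illustrative assignment)
edges = [("horizontal edge, theta=0", 0.0), ("vertical edge, theta=pi/2", np.pi / 2),
         ("horizontal edge, theta=pi", np.pi), ("vertical edge, theta=3pi/2", 3 * np.pi / 2)]

for (name_a, th_a), (name_b, th_b) in zip(edges, edges[1:] + edges[:1]):
    mass_a, mass_b = np.cos(2 * th_a), np.cos(2 * th_b)   # Wilson-mass sign on each edge
    wall = np.sign(mass_a) != np.sign(mass_b)
    print(f"{name_a} ({mass_a:+.0f}) meets {name_b} ({mass_b:+.0f}): "
          f"{'mass domain wall -> corner mode' if wall else 'no sign change'}")
```

All four adjacent-edge pairs flip sign, consistent with the four corner modes of Fig. 4(b).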
We also study the mass term suggested in <cit.>:
M' = ∑_m ≠ nĉ^†_m ( f(r_mn)/2g σ_1τ_1 cos(2ϕ_mn)) ĉ_n,
NH-SOTI' = NH-BHZ + M'.
The spectrum and the corner states of NH-SOTI' are plotted in Fig. <ref> for g=1.0, γ = 0.5.
Comparing the spectra and corner modes in Figs. <ref> and <ref>, we immediately notice the following differences: (i) the spectrum is complex in the former; (ii) the corner modes are localized at only two corners in Fig. <ref>. We shall now address (ii) and defer comments on (i) to the next section. In short, we find that the interplay between the non-Hermitian asymmetric hopping term in Eq. (<ref>) and the mass term M' in Eq. (<ref>)
plays a crucial role in dictating the number of corner modes.
The explanation for the apparent difference in the number of corner modes again employs the approximation scheme described in Fig. <ref>.
Since the mass term, M', and the non-Hermitian hopping term (Eq. (<ref>)) in NH-BHZ are both proportional to σ_1τ_1, we expect both terms to contribute to the magnitude of the effective Wilson mass of each edge state.
Note that we do not have an analytical expression for the effective Wilson mass due to the lack of translational symmetry, but we can give a crude estimate using the parameters of our model, namely the Wilson mass parameter g and the non-Hermitian strength γ. We assume that the strength of the effective Wilson mass at each edge roughly follows g̃ ≡ g cos2θ_edge + γcosθ_edge. For convenience, we call g̃ the effective Wilson mass parameter. In Fig. <ref>, we compute g̃ at each corner with the help of the circular chart displayed in panel 6(a), which represents the behavior of cosθ_edge as a function of θ_edge. In particular, cosθ_edge is positive when the edge intercepts the chart in the left half, which is colored green. This corresponds to θ_edge in quadrants I and IV. On the other hand, cosθ_edge is negative when the edge intercepts the chart at the right (θ_edge in quadrants II and III). See Fig. 6(a).
In Fig. 6(b), we observe asymmetric values of g̃ due to the non-Hermitian strength γ. Namely, g̃ for the horizontal edges takes the value g+γ at corners I and IV, and the value g-γ at corners II and III. As a result, the ratio of g̃ at the horizontal edge to its value at the vertical edge increases at corners I and IV and decreases at corners II and III. Due to the increase in the ratio of the effective Wilson mass parameters at corners I and IV, the corresponding probability density of these modes is enhanced, while the probability density of a localized state at corners II and III is suppressed. This results in only two corner modes being observed.
The suppression of amplitude can be understood from the Jackiw-Rebbi solution of the Dirac equation. The JR solution for the wavefunction probability density with a mass domain at the origin depends on the masses as m_1m_2/(m_1+m_2), where m_1,m_2>0. For a fixed mass m_1 the probability density only depends on the ratio m_1/m_2 as 1/(1+m_1/m_2) <cit.>.
This line of argument provides a remarkable approximation and guides our intuition in the numerical simulations. To demonstrate the utility of this approximation, we engineer a few scenarios for corner states by modifying the non-Hermitian hopping term of Eq. (<ref>). We consider two different variations:
M” = ∑_m ≠ nĉ^†_m H_1 ĉ_n , where
H_1 = f(r_mn)/2γsinϕ_mnσ_1τ_1 ;
M”' = ∑_m ≠ nĉ^†_m H_2 ĉ_n , where
H_2 = f(r_mn)/2γ(sinϕ_mn+cosϕ_mn)σ_1τ_1.
The term H_1 increases the ratio of g̃ between the horizontal and vertical edges at corners III and IV and decreases it at corners I and II, as described in Fig. 7(a).
This results in localization of the wavefunction probability density at corners III and IV, as opposed to corners I and IV for M'. H_2 produces an intriguing effect, yielding a localized probability density at only corner IV. The corresponding wavefunction probability densities are shown in Figs. 7(c) and 7(d), respectively. A further inspection of Fig. 7(b) reveals that the ratio of g̃ at corners II and IV is the same. This naturally leads to the question: Why do we see suppression of the probability density at corner II as opposed to IV?
We again invoke the JR solution for the wavefunction probability density, m_1m_2/(m_1+m_2), which tells us that if both masses m_1 and m_2 decrease, the probability density is suppressed, explaining our observation at corner II compared to IV. Thus, this simple approximation scheme guides our intuition in engineering unique SOT phases.
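This bookkeeping is easy to automate. The sketch below is our own illustration (the corner-to-edge-angle assignment of the square approximation is an assumption): it evaluates the crude effective-mass estimate for the three non-Hermitian terms considered above and the corresponding Jackiw-Rebbi amplitude m_1m_2/(m_1+m_2) at each corner, reproducing the enhancement at corners I and IV for M', at corners III and IV for H_1, and at corner IV alone for H_2.

```python
import numpy as np

g, gamma = 1.0, 0.5

# corner -> polar angles (theta_edge) of its horizontal and vertical edges
corners = {"I": (0.0, np.pi / 2), "II": (np.pi, np.pi / 2),
           "III": (np.pi, 3 * np.pi / 2), "IV": (0.0, 3 * np.pi / 2)}

# angular factor of the non-Hermitian term entering the effective Wilson mass estimate
variants = {"M'": np.cos, "H_1": np.sin, "H_2": lambda th: np.sin(th) + np.cos(th)}

for name, factor in variants.items():
    print(f"--- variant {name} ---")
    for label, (th_h, th_v) in corners.items():
        m1 = abs(g * np.cos(2 * th_h) + gamma * factor(th_h))   # horizontal edge
        m2 = abs(g * np.cos(2 * th_v) + gamma * factor(th_v))   # vertical edge
        amp = m1 * m2 / (m1 + m2)                               # Jackiw-Rebbi amplitude
        print(f"corner {label}: |g~_h|={m1:.2f}, |g~_v|={m2:.2f}, JR amplitude ~ {amp:.2f}")
```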
§.§ Topological Phase Diagram
Figs. 3(d) and 3(e) revealed that the spectrum of NH-SOTI at g=1.0 and γ=0.5 is real.
We now ask whether the reality of the spectrum is achieved only at one point or persists over a range of parameters. To answer this question, we tweak the non-Hermitian parameter γ over a range of values, γ∈ [-2,2], and plot the corresponding spectra as a function of γ. The values of the other parameters in NH-SOTI remain the same. The results are plotted in Fig. <ref>. We witness a topological phase transition as we sweep γ around 1.0, where the ZEMs disappear and merge with the bulk bands. Another interesting characteristic of this transition is the disappearance of the real spectrum, as seen from the evolution of the imaginary part of the eigenenergies in Figs. 8(b) and 8(d).
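Organizationally, such a sweep amounts to re-diagonalizing the open-boundary Hamiltonian for each γ. A sketch, reusing the build_hamiltonian and build_mass_matrix helpers from the earlier sketches (illustrative rather than the authors' code), is:

```python
import numpy as np

def sweep_gamma(sites, gammas, g=1.0, **kw):
    """For each gamma, record the largest imaginary part and the four eigenvalues
    closest to zero (candidate corner modes) of the NH-SOTI matrix under OBC."""
    records = []
    for gamma in gammas:
        H = build_hamiltonian(sites, gamma=gamma, **kw) + build_mass_matrix(sites, g=g)
        evals = np.linalg.eigvals(H)
        zero_modes = np.sort(np.abs(evals))[:4]
        records.append((gamma, np.abs(evals.imag).max(), zero_modes))
    return records

# records = sweep_gamma(sites, np.linspace(-2.0, 2.0, 41))
```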
To understand the persistence of a real spectrum over a finite range of γ, let us construct NH-SOTI on a 2D square lattice. This allows the use of analytical expressions for the spectrum in k-space, which we can then compare with NH-SOTI defined on a QL, where no analytical expressions are available. The motivation for such an approach stems from the observation that in our numerical simulations for NH-BHZ on the QL, we recover the phase diagram displayed in Ref. <cit.>, where NH-BHZ was defined on a square lattice.
The momentum space representation of NH-SOTI on a 2D square lattice is:
H_NH-SOTI(𝐤) = t_1[σ_3τ_1 sin k_x + σ_0τ_2 sin k_y] + [M + t_2(2 - cos k_x - cos k_y)]σ_0τ_3 + g[cos k_x - cos k_y]σ_2τ_1 + iγ sin k_x σ_1τ_1,
and the corresponding eigenvalues, E(𝐤), can be computed as:
E(𝐤) = ±[t_1^2 sin^2 k_x + t_1^2 sin^2 k_y + (M + t_2[2 - cos k_x - cos k_y])^2 + g^2(cos k_x - cos k_y)^2 - γ^2 sin^2 k_x]^1/2.
We recover the spectrum in Ref. <cit.> for g=0, t_1 = -t_2, up to a k-independent term in t_2. It is interesting to note that E(𝐤) in Eq. (<ref>) is either real or purely imaginary, depending on the relative magnitudes of the parameters and on 𝐤. This is surprising, as the addition of the mass term breaks reciprocity and pseudo-Hermiticity, which are the crucial symmetries responsible for the reality of the spectrum in the non-Hermitian BHZ model <cit.>.
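Since the bracketed quantity in Eq. (<ref>) is manifestly real, the whole spectrum is real exactly when it stays non-negative over the Brillouin zone. A quick numerical check (our own sketch, not from the paper; parameter values are illustrative) is:

```python
import numpy as np

def E_squared(kx, ky, t1=1.0, t2=1.0, M=1.0, g=1.0, gamma=0.5):
    # squared eigenvalue of Eq. (8); real by construction
    return (t1**2 * np.sin(kx)**2 + t1**2 * np.sin(ky)**2
            + (M + t2 * (2 - np.cos(kx) - np.cos(ky)))**2
            + g**2 * (np.cos(kx) - np.cos(ky))**2
            - gamma**2 * np.sin(kx)**2)

k = np.linspace(-np.pi, np.pi, 201)
KX, KY = np.meshgrid(k, k)
for gamma in (0.5, 2.0):
    e2 = E_squared(KX, KY, gamma=gamma)
    print(f"gamma={gamma}: min_k E(k)^2 = {e2.min():.3f} -> "
          f"{'entirely real spectrum' if e2.min() >= 0 else 'purely imaginary eigenvalues appear'}")
```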
For comparison, we construct the model NH-SOTI' with M' as the mass term (Eq. (<ref>)) and obtain the corresponding eigenvalues, E'(𝐤):
E'(𝐤) = ±[t_1^2 sin^2 k_x + t_1^2 sin^2 k_y + (M + t_2[2 - cos k_x - cos k_y])^2 + g^2(cos k_x - cos k_y)^2 - γ^2 sin^2 k_x + 2igγ sin k_x(cos k_x - cos k_y)]^1/2.
On comparing Eqs. (<ref>) and (<ref>) we observe that the resulting spectrum is complex when the mass term is proportional to σ_1τ_1.
§ DISCUSSION AND OUTLOOK
We propose a non-Hermitian second-order topological phase on a 2D quasicrystalline lattice by adding two variations of a Wilson-mass term to a non-Hermitian extension of the BHZ model. In the former case, we find the spectrum to be purely real, which is important in the context of the dynamical stability of non-Hermitian systems. In the latter case, we find a complex spectrum, but the non-Hermiticity allows us to engineer more exotic SOT phases where localized states appear at only one or two corners. We also explore the reality of the spectra by comparing the eigenvalues of our models on the square lattice to those on the quasicrystalline lattice. To address whether such quasicrystals can be experimentally realized, one may consider avenues such as photonic quantum walks <cit.>.
Several open questions need to be addressed: (1) Is there a symmetry ensuring the reality of the spectrum of the non-Hermitian second-order TI model we consider, NH-SOTI (Eq. (<ref>))? Even though the mass term M breaks pseudo-Hermiticity and reciprocity symmetry, which are crucial for the reality of the spectrum, we still end up with a real spectrum. We do not find any obvious symmetry responsible for this behavior, and it would be interesting to explore its origin further. (2) Another question that arises in the context of topological phases is the nature of the topological invariant describing these SOT phases. We note that for the Hermitian case, it has been proposed that a topological invariant can be defined as a projection of the Hamiltonian from a higher dimension <cit.>. It would be interesting to obtain the topological classification of non-Hermitian SOT phases in quasicrystals. Finally, it would be interesting to extend the study of non-Hermitian SOT phases to 3D quasicrystals.
We acknowledge Justin H. Wilson for helpful suggestions. This manuscript is based on work supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award Number DE-SC0017861. This work used high-performance computational resources provided by the Louisiana Optical Network Initiative and HPC@LSU computing.
§ SYMMETRY ANALYSIS
In this section, we investigate the symmetry properties of the non-Hermitian BHZ model and the mass-term-induced SOTI model NH-SOTI defined on a 2D QL with AB tiling. Table <ref> shows the symmetries of the three Hamiltonians: NH-BHZ (Eq. (<ref>)), NH-SOTI = NH-BHZ + M (Eq. (<ref>)), and NH-SOTI' = NH-BHZ + M' (Eq. (<ref>)). The Hamiltonian NH-BHZ respects a variant of time-reversal symmetry in non-Hermitian systems (TRS^†): 𝒯 = U_𝒯T, a variant of particle-hole symmetry (PHS^†): 𝒞 = U_𝒞𝒦, and thus chiral symmetry: 𝒮 = 𝒯𝒞. Here, the unitary matrices U_𝒯,𝒞 satisfy U_𝒯U_𝒯^* = -1 and U_𝒞U_𝒞^* = 1, and T and 𝒦 denote transposition and complex conjugation, respectively. m_x, m_y and m_z represent the mirror symmetries reflecting the QL about x, y and z, respectively. 𝒫 denotes the parity operator (spatial inversion). The Hamiltonian NH-SOTI breaks both 𝒯 and 𝒞 but preserves the combined symmetry 𝒮, whereas NH-SOTI' preserves 𝒞. We find that the zero-energy modes (ZEMs) of NH-SOTI are most likely protected by the combined symmetry m_z𝒞, while the ZEMs of NH-SOTI' are protected by the combined symmetry of 𝒮 and η.
Symmetry Condition on H NH-BHZ NH-SOTI NH-SOTI'
TRS^† = 𝒯 = U_𝒯T U_𝒯H^TU_𝒯^-1 = H × ×
TRS = 𝒯' = U_𝒯𝒦 U_𝒯H^*U_𝒯^-1 = H × × ×
PHS^† = 𝒞 = U_𝒞𝒦 U_𝒞H^*U_𝒞^-1 = -H ×
PHS = 𝒞' = U_𝒞T U_𝒞H^TU_𝒞^-1 = -H × × ×
𝒮 = 𝒯𝒞 U_𝒮H^†U_𝒮^-1 = -H ×
η = σ_3τ_0 η H^†η^-1 = H × ×
m_x = U_m_xℳ_x m_xHm_x^-1 = H × × ×
m_y = U_m_yℳ_y m_yHm_y^-1 = H ×
m_z = U_m_z m_zHm_z^-1 = H × × ×
m_zm_x = U_m_zU_m_xℳ_x m_zm_xH(m_zm_x)^-1 = H ×
𝒫 = U_𝒫ℐ_xy 𝒫H𝒫^-1 = H × ×
m_x𝒯' = U_m_xℳ_xU_𝒯𝒦 m_xU_𝒯H^*(m_xU_𝒯)^-1 = H ×
m_x𝒞' = U_m_xℳ_xU_𝒞T m_xU_𝒞H^T(m_xU_𝒞)^-1 = -H ×
m_x𝒞 = U_m_xℳ_xU_𝒞𝒦 m_xU_𝒞H^*(m_xU_𝒞)^-1 = -H ×
m_y𝒯 = U_m_yℳ_yU_𝒯T m_yU_𝒯H^T(m_yU_𝒯)^-1 = H ×
m_z𝒞' = U_m_zU_𝒞T m_zU_𝒞H^T(m_zU_𝒞)^-1 = -H ×
m_z𝒯' = U_m_zU_𝒯𝒦 m_zU_𝒯H^*(m_zU_𝒯)^-1 = H ×
Symmetries of NH-BHZ, NH-SOTI and NH-SOTI' on a square QL. Here the unitary matrices are U_𝒯 = iσ_2τ_0, U_𝒞 = σ_3τ_1, U_m_x = σ_1τ_0, U_m_y = σ_2τ_3, U_m_z = σ_3τ_0 and U_𝒫 = σ_0τ_3. The matrices ℳ_x,ℳ_y, and ℐ_xy are orthogonal matrices permuting the sites of the QL to flip the lattice vertically, horizontally and both combined, respectively.
§ ADDITION OF A NON-HERMITIAN ON-SITE GAIN-AND-LOSS TERM TO HERMITIAN BHZ MODEL
In the main text, we explore the reality of the spectra and the corner states in non-Hermitian SOTI models obtained through the addition of two different mass terms to the non-Hermitian BHZ model. Here, we take an alternative path. We start with a Hermitian SOTI defined on a QL <cit.> and add a non-Hermitian onsite gain-and-loss term. Our goal is twofold: (1) To check if the corner modes are robust to the inclusion of non-Hermiticity. (2) If they turn out to be robust, investigate the reality of the corresponding spectrum.
The Hamiltonian for Hermitian SOTI can be written as <cit.>
SOTI = ∑_m≠ nĉ^†_mH'_mnĉ_n + ∑_n ĉ^†_nH'_n ĉ_n,
with the hopping term H'_mn given by
H'_mn = - f(r_mn)/2[it_1(σ_3τ_1cosϕ_mn + σ_0τ_2sinϕ_mn) +t_2σ_0τ_3 - gσ_1τ_1cos(2ϕ_mn)],
and on-site term
H'_n = (M+2t_2)σ_0τ_3,
with all the parameters retaining their meaning from Sec. <ref>. The Hamiltonian in (<ref>) preserves particle-hole symmetry 𝒞, defined by 𝒞 =σ_3τ_1𝒦, with 𝒦 denoting conjugation. The model supports four corner states that are protected by the combined symmetries 𝒞 and C_4m_z, with C_4 denoting fourfold rotation symmetry, and m_z denoting mirror symmetry. We introduce an on-site non-Hermitian term representing gain-and-loss,
loss-gain = ∑_n ĉ^†_n(iγσ_3τ_3)ĉ_n.
We diagonalize the Hamiltonian matrix under open boundary conditions (OBC) with ξ = 1, t_1 = t_2 = 1, g = 1, and M = -1. We choose γ = 1, corresponding to the topologically non-trivial phase hosting corner modes. The spectrum and the probability distribution of the zero-energy modes (ZEMs) are displayed in Fig. <ref>.
We immediately observe from Figs. 9(a) and 9(b) that the spectrum is complex. Also, from Fig. 9(c), we notice that there is no asymmetry of the corresponding corner modes, unlike the models discussed in the main text. The zero-energy modes are protected by the particle-hole symmetry 𝒞'.
|
http://arxiv.org/abs/2307.00385v1
|
20230701165749
|
Sulcal Pattern Matching with the Wasserstein Distance
|
[
"Zijian Chen",
"Soumya Das",
"Moo K. Chung"
] |
q-bio.NC
|
[
"q-bio.NC",
"eess.IV"
] | |
http://arxiv.org/abs/2307.02664v1
|
20230705213842
|
Logical circuits in colloids
|
[
"Nic Roberts",
"Noushin Raeisi Kheirabadi",
"Michail-Antisthenis Tsompanas",
"Alessandro Chiolerio",
"Marco Crepaldi",
"Andrew Adamatzky"
] |
cs.ET
|
[
"cs.ET",
"cond-mat.soft"
] |
a,b]Nic Roberts
a]Noushin Raeisi Kheirabadi
a]Michail-Antisthenis Tsompanas
c,a]Alessandro Chiolerio
d]Marco Crepaldi
a]Andrew Adamatzky
[a]Unconventional Computing Laboratory, UWE, Bristol, UK
[b]Department of Engineering and Technology, University of Huddersfield, UK
[c]Center for Bioinspired Soft Robotics, Istituto Italiano di Tecnologia, Genova, Italy
[d]Electronic Design Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
Colloid-based computing devices offer remarkable fault tolerance and adaptability to varying environmental conditions due to their amorphous structure. An intriguing observation is that a colloidal suspension of ZnO nanoparticles in DMSO exhibits reconfiguration when exposed to electrical stimulation and produces spikes of electrical potential in response. This study presents a novel laboratory prototype of a ZnO colloidal computer, showcasing its capability to implement various Boolean functions featuring two, four, and eight inputs.
During our experiments, we input binary strings into the colloid mixture, where a logical “True" state is represented by an impulse of an electrical potential. In contrast, the absence of the electrical impulse denotes a logical “False" state. The electrical responses of the colloid mixture are recorded, allowing us to extract truth tables from the recordings. Through this methodological approach, we demonstrate the successful implementation of a wide range of logical functions using colloidal mixtures.
We provide detailed distributions of the logical functions discovered and offer speculation on the potential impacts of our findings on future and emerging unconventional computing technologies. This research highlights the exciting possibilities of colloid-based computing and paves the way for further advancements.
Unconventional computing, Colloids, Liquid computers, Liquid electronics, Liquid robotics
§ INTRODUCTION
A substance that cannot sustain shear stress when at rest is classified as a fluid: such stress necessarily produces a change in shape and the most remarkable dynamic phenomenon of flow. Specifically, a liquid is categorised as an incompressible fluid. Using liquids as computing devices can be traced back to documented evidence in papers discussing hydraulic algebraic machines <cit.>. In our recent comprehensive overview <cit.>, we thoroughly examine various families of liquid computing devices. These include hydraulic machine integrators, fluid mappers, fluid jets employed in fluidic logic devices to realise logical gates, liquid marble computers, and reaction-diffusion computers.
Several years ago, we developed the concept of the liquid cybernetic system <cit.>, a colloidal autonomous system, which is a soft holonomic processor realising autolographic features <cit.>. Further, these theoretical ideas of colloid computers were implemented into laboratory prototypes.
Our experiments conducted in controlled laboratory conditions also revealed the potential of ZnO colloid mixtures to function as electrical-analogue neurons, successfully implementing synaptic-like learning as described in <cit.>, as well as demonstrating the manifestation of Pavlovian reflexes <cit.>.
The experimental study presented in <cit.> showcases the classification capabilities of a Fe3O4 water-based ferrofluid for digit recognition within an 8×8 pixel dataset. Additionally, we demonstrated that this ferrofluid could be programmed using quasi-direct current signals and read in radio frequency mode.
To thoroughly assess the computational capabilities of colloid computers more formally than previously explored, we embarked on a study to determine whether Boolean functions could be straightforwardly implemented in colloid mixtures. To achieve this, we adopted a theoretical approach outlined in <cit.>. This technique involves selecting a pair of input sites and systematically applying all possible combinations of inputs to these sites, where the electrical characteristics of the input signals represent logical values. The resulting outputs, represented by the electrical responses of the substrate, are recorded on a designated set of output sites.
This approach falls within the realm of reservoir computing <cit.> and in materia computing<cit.>, which are techniques used to analyse the computational properties of physical and biological substrates. By utilising these methodologies, we aim to gain deeper insights into the computational potential of colloid mixtures in a more rigorous and structured manner.
§ METHODS
Zinc oxide nanoparticles were purchased from US Research Nanomaterials. Sodium dodecyl sulphate (SDS) and sodium hydroxide (NaOH) were purchased from Merck. DMSO (pharmaceutical grade, 99.9%) was purchased from Fisher Scientific. A Millipore de-ionized water generating unit, model Essential, with a resistivity of 15 MΩ·cm, was used to create DIW in the lab. SDS was added to DIW and stirred to obtain a homogeneous surfactant solution with a concentration of 0.22 wt%. Under stirring, 2 ml of SDS solution and 1 ml of NaOH 10 M were added to the DMSO. The mixture was then treated with 1 mg of ZnO nanoparticles under constant stirring. The resulting dispersion concentration was kept constant at 0.11 mg/ml. The resultant suspension was placed in an ultrasonic bath for 30 minutes. The stirring operation was then repeated for a few more hours to achieve a homogeneous dispersion of ZnO <cit.>.
The nanoparticle suspensions were characterized using field emission scanning electron microscopy (FEI Quanta 650 FESEM). In this study, the accelerating voltage was set to 10 kV, while the working distance was roughly 5 mm. The contrast and brightness of the images were adjusted so that particles could be differentiated from the background.
An Ultraviolet-visible spectrometer (Perkin Elmer Lambda XLS) was used to quantify sample absorbance at room temperature.
Dynamic Light Scattering (DLS) measurements were performed on a Zetasizer Nano ZS (1000 HS, Malvern Instrument Ltd., UK) to analyze the z-average hydrodynamic diameter.
The developed hardware could send sequences of 2, 4, and 8-bit strings to the colloid sample. The strings were encoded as step voltage inputs where -5 V denoted a logical '0' and 5 V a logical '1'. The hardware was based around an Arduino Mega 2560 (Elegoo, China) and a series of programmable signal generators, AD9833 (Analog, USA).
To search for two-, four- and eight-input Boolean circuits, we used two, four, and eight input electrodes, respectively. The input electrodes were 10 μm diameter platinum rods inserted into the colloid container with a separation of 5 mm between them. Data acquisition (DAQ) probes were placed in a parallel line, separated by 5 mm. Two DAQ differential outputs from the sample container were input to a Pico 24 (Pico Technology, UK) analogue-to-digital converter (ADC). A third channel was used to pass a pulse to the ADC on every input state change. See Fig. <ref> for a schematic of the apparatus. There were a total of 138 repeats.
Sequences of two-, four- and eight-bit strings, counting up from binary 00 to 11, 0000 to 1111 and 00000000 to 11111111, with a state change every 15 seconds, were passed into the colloid. All 138 repeats of the experiment were done on the same colloid.
Samples from 2 channels were taken at 1 Hz over the whole duration of a given experimental run. Peaks for each channel were located for a set of 10 thresholds, from 100 mV to 600 mV with step 50 mV, for each input state, 0000 to 1111.
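A minimal sketch of this thresholding step (with illustrative variable names and data layout; not the authors' acquisition code) is given below: for each 15-second input state, a channel is assigned a logical '1' whenever any peak leaves the threshold band, ignoring polarity.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_bits(voltage, state_edges, threshold):
    """One Boolean per input state: True if the channel peaks outside the band.
    `voltage` is a 1 Hz recording of one DAQ channel; `state_edges` holds the
    sample indices of input-state changes taken from the synchronisation channel."""
    bits = []
    for start, stop in zip(state_edges[:-1], state_edges[1:]):
        segment = np.abs(voltage[start:stop])           # polarity is not considered
        peaks, _ = find_peaks(segment, height=threshold)
        bits.append(len(peaks) > 0)
    return bits

# candidate threshold-band edges in volts, in 50 mV steps as in the experiments
thresholds = np.arange(0.10, 0.601, 0.05)
```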
§ RESULTS
§.§ Colloid Structural Characteristics
The absorption spectrum of a ZnO colloid, with a concentration of 0.11 mg/ml, was measured at room temperature using UV-visible spectroscopy. The recorded spectrum covers a wavelength range of 200-700 nm. Figure (<ref>, a) illustrates the UV-visible absorption spectrum plot. The spectrum shows a prominent peak at 372 nm, indicating hexagonal ZnO nanoparticles <cit.>. Comparing these findings with existing literature, there is a strong agreement with previous reports <cit.>.
The optical band gap was calculated using the following equation:
E_g (eV) = hc/λ = 1240/λ
In this equation, E_g represents the optical band gap, h is Planck's constant, c is the speed of light, and λ is the wavelength corresponding to the maximum absorption. The calculated value for the optical band gap is 3.35 eV, which aligns closely with previous findings from other sources <cit.>.
Dynamic Light Scattering (DLS) was utilized to characterize the ZnO nanoparticles in the colloid. Figure (<ref>, b) displays the size distribution of these nanoparticles. A particle's average gyration (hydrodynamic) diameter is determined to be 496 nm, nearly 20 times the average diameter of an individual particle.
As an amphoteric oxide, ZnO undergoes hydrolysis when exposed to water, forming a hydroxide coating on its surface. This coating contributes to an increase in the hydrodynamic diameter of the particles <cit.>.
A thin layer of ZnO colloid was prepared to analyse the particles' morphology and size using the FESEM (Field-Emission Scanning Electron Microscopy) technique. This was done by drop-casting a drop of ZnO particle suspension, with a concentration of 0.11 mg/ml, onto a Copper foil with a 100 μm thickness. The preparation was carried out at room temperature.
The FESEM results, as shown in Figure (<ref>, c), reveal the occurrence of particle agglomeration during the sample preparation process. Due to the surface tension of the solvent as it evaporates, the FESEM observations rarely display individual, separated spheres. Instead, most of the ZnO spheres appear to be multilayered. This can be attributed to the increased liquid surface tension, which draws the nanoparticles closer and leads to their re-aggregation during the drying process <cit.>.
§.§ Extracting Boolean Gates
Boolean strings were extracted from the data, where a logic ‘1’ was noted for a channel if it had a peak outside the threshold band for a particular state. Otherwise, a value of ‘0’ was recorded, and the peak's polarity was not considered.
The strings for each experimental repeat were stored in their respective Boolean table. To extract state graphs, a state/node was defined as the string of output values from each channel at each input state, and transitions/edges were defined as a change in the input state. This led to a total of (500 + 470 + 410 = 1380) state graphs. The sum-of-products (SOP) Boolean functions were calculated for the output channel. For each repetition, we collected data and applied 10 thresholds, giving 1380 individual truth tables.
SOP extraction is depicted in Fig. <ref>. If a peak is discovered during an input state, it is considered a logical 1. The DAQ measurements are shown in blue. The synchronisation signal is in orange, indicating the state change. The threshold band is green, while peaks outside of it are marked with 'x'.
The resulting truth table is then reduced to the sum of products depicted in Fig. <ref>.
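The reduction from a measured truth table to a sum-of-products expression can be reproduced with an off-the-shelf Boolean minimizer; below is a small illustration using sympy's SOPform on a hypothetical two-input truth table (the example outputs are invented for demonstration).

```python
from sympy import symbols
from sympy.logic import SOPform

A, B = symbols("A B")
# input states 00, 01, 10, 11 and the measured channel output for each state (hypothetical)
outputs = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
minterms = [list(state) for state, out in outputs.items() if out]
print(SOPform([A, B], minterms))   # e.g. (A & ~B) | (B & ~A)
```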
We have discovered a wide range of Boolean gates. Distributions of gates are shown in Fig. <ref>. The gates most frequently found in experiments with two inputs are shown in Tab. <ref> and illustrated in terms of circuits in Fig. <ref>ab. The most common gate is A̅+B̅, which is equivalent to a NAND gate, a logic gate producing an output that is false only if all its inputs are true; thus, its output is a complement to that of an AND gate. The NAND gate is followed by the OR gate and then by two NOT-AND gates.
Most frequently four-input gates are shown in Tab. <ref> and illustrated with example circuits in Fig. <ref>cd.
The size of a Boolean circuit is the number of gates in the circuit. Amongst the most frequent four-input circuits (Tab. <ref>), the smallest circuits are A · B̅·C· D and A ·B·C·D, and the largest one is (A · D ·B) + (B · D ·A) + (A ·B·C) + (B ·A·C) + (D ·A·C).
The most frequent two-input and four-input gates are shown in Tab. <ref>. With regard to eight-input gates, all discovered gates are unique, i.e. they have been measured just once, with the exception of the following function, which was found twice:
(A · B · F ·C·E)+(A · D · F ·C·E)+(A · G · H ·B·C)+(B · D · E ·A·F)+(B · E · H ·A·C)+(C · D · E ·B·F)+(D · E · H ·B·G)+(D · F · H ·A·B)+(B · C · D · F · G ·H)+(B · C · D · G · H ·F)+(B · E ·A·G·H)+(C · F ·B·G·H)+(E · H ·A·C·F)+(F · H ·B·D·E)+(A · C · E · G ·B·D)+(A · E · F · G ·C·D)+(B · D · E · G ·C·F)+(B · E · F · G ·A·D)+(C · F · G · H ·D·E)+(A · C · D · E · F · H ·G)+(B ·C·E·G·H)+(C ·B·D·F·G)+(D ·C·E·F·H)+(E ·A·B·D·H)+(E ·B·D·G·H)+(B · C · D ·A·E·G)+(B · D · F ·C·G·H)+(B · D · G ·A·C·E)+(B · E · H ·C·F·G)+(C · D · G ·B·E·H)+(C · E · F ·D·G·H)+(C · G · H ·A·E·F)+(D · E · F ·A·B·C)+(B · D ·E·F·G·H)+(B · F ·A·D·E·G)+(C · D ·A·B·E·H)+(C · H ·D·E·F·G)+(A · C · E · G ·D·F·H)+(A ·B·C·F·G·H)+(D ·A·B·C·E·G)+(G ·A·C·D·E·H)
§ DISCUSSION
Our laboratory experiments successfully demonstrated the feasibility of implementing a wide range of many-input logical gates within a colloid mixture comprising ZnO nanoparticles. The discovered two-input gates exhibit functional completeness, enabling the implementation of arbitrary Boolean functions. Notably, the four- and eight-input functions discovered showcase a remarkable level of non-linearity, suggesting that the dynamical behaviour of colloid-based logical devices could be characterised by multiple attractors and bifurcation points, beyond features such as resistive switching, which has already been observed <cit.>.
Looking ahead, our future research could focus on cascading the colloid droplets to construct multi-level logical circuits. Additionally, we aim to develop protocols for programming dynamical logical circuits within the colloid droplets. Leveraging the observed multistability, that is, the presence of multiple attractors, a possible blue-sky objective would be the implementation of a sequential computing machine which, similarly to solid-state computers, can be programmed and can execute instructions. Our laboratory experiments have already demonstrated the fundamental assumptions needed to reach this goal, showcasing the possibility of implementing in-memory computing with ferrofluids and therefore the coexistence of memorising and computing capabilities. For the particular case studied here, the presence of multiple Boolean transfer functions inherently suggests the existence of a pre-built program in the colloid, here a function of threshold but, in general, a function of other parameters, including time. A possible solution to achieve functionalities similar to a microprogrammed solid-state computer could involve modulating input signals to convey an equivalent inline, just-in-time executed program. Further studies can then focus on the meaning and timing of the stimulations and on feasible techniques for their modulation. This concept, inter alia, further overlaps with the field of neuromorphic computing because inputs can degenerate into spikes for low duty cycles.
These advancements would enhance the capabilities and expand the potential applications of colloid-based computing systems.
§ ACKNOWLEDGEMENT
This project has received funding from the European Innovation Council And SMEs Executive Agency (EISMEA) under grant agreement No. 964388.
10
adamatzky2017logical
Andrew Adamatzky.
Logical gates in actin monomer.
Scientific reports, 7(1):1–14, 2017.
adamatzky2019brief
Andrew Adamatzky.
A brief history of liquid computers.
Philosophical Transactions of the Royal Society B,
374(1774):20180372, 2019.
adamatzky2020boolean
Andrew Adamatzky, Martin Tegelaar, Han AB Wosten, Anna L Powell, Alexander E
Beasley, and Richard Mayne.
On boolean gates in fungal colony.
Biosystems, 193:104138, 2020.
anand2017role
K Anand, Sibi Varghese, and A Krishnamoorthy.
Role of surfactants on the stability of nano-zinc oxide dispersions.
Part. Sci. Technol, 35:67–70, 2017.
baskoutas2010conventional
Sotirios Baskoutas and Gabriel Bester.
Conventional optics from unconventional electronics in zno quantum
dots.
The Journal of Physical Chemistry C, 114(20):9301–9307, 2010.
baskoutas2011transition
Sotirios Baskoutas and Gabriel Bester.
Transition in the optical emission polarization of zno nanorods.
The Journal of Physical Chemistry C, 115(32):15862–15867,
2011.
chiolerio2017smart
A Chiolerio and Marco B Quadrelli.
Smart fluid systems: The advent of autonomous liquid robotics.
Advanced Science, 4(7):1700036, 2017.
chiolerio2020liquid
Alessandro Chiolerio.
Liquid cybernetic systems: The fourth-order cybernetics.
Advanced Intelligent Systems, 2(12):2000120, 2020.
RSCA2016
Alessandro Chiolerio, Ignazio Roppolo, Katarzyna Bejtka, Abil Asvarov, and
Candido Fabrizio Pirri.
Resistive hysteresis in flexible nanocomposites and colloidal
suspensions: interfacial coupling mechanism unveiled.
RSC Advances, 6:56661–56667, 2016.
crepaldi2023experimental
Marco Crepaldi, Charanraj Mohan, Erik Garofalo, Andrew Adamatzky, Konrad
Szaciłowski, and Alessandro Chiolerio.
Experimental demonstration of in-memory computing in a ferrofluid
system.
Advanced Materials, 35(23):2211406, 2023.
dale2017reservoir
Matthew Dale, Julian F Miller, and Susan Stepney.
Reservoir computing as a model for in-materio computing.
In Advances in Unconventional Computing, pages 533–571.
Springer, 2017.
dale2019substrate
Matthew Dale, Julian F Miller, Susan Stepney, and Martin A Trefzer.
A substrate-independent framework to characterize reservoir
computers.
Proceedings of the Royal Society A, 475(2226):20180723, 2019.
emch1901two
Arnold Emch.
Two hydraulic methods to extract the n th root of any number.
The American Mathematical Monthly, 8(1):10–12, 1901.
fatehah2014stability
Mohd Omar Fatehah, Hamidi Abdul Aziz, and Serge Stoll.
Stability of zno nanoparticles in solution. influence of ph,
dissolution, aggregation and disaggregation effects.
Journal of Colloid Science and Biotechnology, 3(1):75–84,
2014.
frame1945machines
JS Frame.
Machines for solving algebraic equations.
Mathematics of Computation, 1(9):337–353, 1945.
gibb1914
D. Gibb.
The instrumental solution of numerical equations.
In Ellice Martin Horsburgh, editor, Modern Instruments and
Methods of Calculation: a Handbook of the Napier Tercentenary Exhibition,
pages 259–268. The Royal Society of Edinburgh, 1914.
kheirabadi2022pavlovian
Noushin Raeisi Kheirabadi, Alessandro Chiolerio, and Andrew Adamatzky.
Pavlovian reflex in colloids.
arXiv preprint arXiv:2211.06699, 2022.
kheirabadi2022learning
Noushin Raeisi Kheirabadi, Alessandro Chioleriob, Neil Phillipsa, and Andrew
Adamatzky.
Learning in colloids: Synapse-like zno+ dmso colloid.
arXiv preprint arXiv:2211.00419, 2022.
konkoli2018reservoir
Zoran Konkoli, Stefano Nichele, Matthew Dale, and Susan Stepney.
Reservoir computing with computational matter.
In Computational Matter, pages 269–293. Springer, 2018.
lu2018methodology
Pei-Jia Lu, Wei-En Fu, Shou-Chieh Huang, Chun-Yen Lin, Mei-Lin Ho, Yu-Pen Chen,
and Hwei-Fang Cheng.
Methodology for sample preparation and size measurement of commercial
zno nanoparticles.
journal of food and drug analysis, 26(2):628–636, 2018.
lukovsevivcius2009reservoir
Mantas Lukoševičius and Herbert Jaeger.
Reservoir computing approaches to recurrent neural network training.
Computer Science Review, 3(3):127–149, 2009.
miller2002evolution
Julian F Miller and Keith Downing.
Evolution in materio: Looking beyond the silicon box.
In Proceedings 2002 NASA/DoD Conference on Evolvable Hardware,
pages 167–176. IEEE, 2002.
miller2014evolution
Julian F Miller, Simon L Harding, and Gunnar Tufte.
Evolution-in-materio: evolving computation in materials.
Evolutionary Intelligence, 7(1):49–67, 2014.
miller2018materio
Julian F Miller, Simon J Hickinbotham, and Martyn Amos.
In materio computation using carbon nanotubes.
In Computational Matter, pages 33–43. Springer, 2018.
miller2019alchemy
Julian Francis Miller.
The alchemy of computation: designing with the unknown.
Natural Computing, 18(3):515–526, 2019.
pudukudy2015facile
Manoj Pudukudy and Zahira Yaakob.
Facile synthesis of quasi spherical zno nanoparticles with excellent
photocatalytic activity.
Journal of Cluster Science, 26(4):1187–1201, 2015.
reddy2011combustion
A Jagannatha Reddy, MK Kokila, H Nagabhushana, JL Rao, C Shivakumara,
BM Nagabhushana, and RPS Chakradhar.
Combustion synthesis, characterization and raman studies of zno
nanopowders.
Spectrochimica Acta Part A: Molecular and Biomolecular
Spectroscopy, 81(1):53–58, 2011.
stepney2019co
Susan Stepney.
Co-designing the computational model and the computing substrate.
In International Conference on Unconventional Computation and
Natural Computation, pages 5–14. Springer, 2019.
sun2011enhanced
Jian-Hui Sun, Shu-Ying Dong, Jing-Lan Feng, Xiao-Jing Yin, and Xiao-Chuan Zhao.
Enhanced sunlight photocatalytic performance of sn-doped zno for
methylene blue degradation.
Journal of Molecular Catalysis A: Chemical, 335(1-2):145–150,
2011.
verstraeten2007experimental
David Verstraeten, Benjamin Schrauwen, Michiel d’Haene, and Dirk Stroobandt.
An experimental unification of reservoir computing methods.
Neural networks, 20(3):391–403, 2007.
|
http://arxiv.org/abs/2307.01099v2
|
20230703152251
|
Random Chern-Simons matter in $D=1$
|
[
"Jeff Murugan",
"Ruach Pillay Slayen",
"Hendrik J. R. Van Zyl"
] |
hep-th
|
[
"hep-th",
"cond-mat.str-el"
] |
[Figure source residue (feynmf) for the diagram "fourMassInsertions": three fermion lines i → k_1 → k_2 → k_3 → j dressed with dashed mass-insertion contractions labelled m_{i k_1} m_{k_1 k_2}, m_{k_2 k_3} m_{k_3 j}, m_{i k_1} m_{k_3 j}, m_{k_1 k_2} m_{k_2 k_3}, m_{i k_1} m_{k_2 k_3}, and m_{k_1 k_2} m_{k_3 j}.]
|
http://arxiv.org/abs/2307.01366v1
|
20230703214721
|
Minimizing Age of Information for Mobile Edge Computing Systems: A Nested Index Approach
|
[
"Shuo Chen",
"Ning Yang",
"Meng Zhang",
"Jun Wang"
] |
cs.AI
|
[
"cs.AI",
"cs.NI"
] |
Minimizing Age of Information for Mobile Edge Computing Systems: A Nested Index Approach
Shuo Chen,
Ning Yang*,
Meng Zhang*,
Jun Wang
Shuo Chen and Ning Yang are with Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China. (e-mail: [email protected], [email protected]).
Meng Zhang is with the ZJU-UIUC Institute, Zhejiang University, Zhejiang, 314499, China. (e-mail: [email protected]).
Jun Wang is with the Department of Computer Science, University College London, WC1E 6BT, UK. (e-mail: [email protected]).
(*Corresponding author: Ning Yang, Meng Zhang)
August 1, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Exploiting the computational heterogeneity of mobile devices and edge nodes, mobile edge computing (MEC) provides an efficient approach to achieving real-time applications that are sensitive to information freshness, by offloading tasks from mobile devices to edge nodes. We use the metric Age-of-Information (AoI) to evaluate information freshness. An efficient solution to minimize the AoI for the MEC system with multiple users is non-trivial to obtain due to the random computing time. In this paper, we consider multiple users offloading tasks to heterogeneous edge servers in a MEC system. We first reformulate the problem as a Restless Multi-Arm Bandit (RMAB) problem and establish a hierarchical Markov Decision Process (MDP) to characterize the updating of AoI for the MEC system. Based on the hierarchical MDP, we propose a nested index framework and design a nested index policy with provable asymptotic optimality. Finally, the closed form of the nested index is obtained, which enables performance tradeoffs between computation complexity and accuracy. Our algorithm leads to an optimality gap reduction of up to 40%, compared to benchmarks, and asymptotically approaches the lower bound as the system scale becomes large enough.
§ INTRODUCTION
§.§ Motivation
Large-scale cyber-physical applications necessitate real-time information. For example, Internet of Things (IoT) devices, constrained by limited computational resources, rely on cloud computing to boost performance, while sensor data from vehicles must be collected and processed to depict surroundings and facilitate navigation. Users demand prompt status updates. The Age-of-Information (AoI) is a recently introduced metric designed to assess the freshness of information, quantifying the time elapsed since the most recent message update (e.g., <cit.>).
In numerous real-time applications, such as autonomous driving, the updated information is computationally demanding and necessitates processing. Offloading data to the cloud for computation can lead to data staleness and is computationally expensive. The Mobile Edge Computing (MEC) paradigm shifts servers from the cloud to the edge, bringing users closer to servers and thereby reducing transmission delay (e.g., <cit.>). Consequently, MEC emerges as a promising technology capable of reducing latency and enhancing information freshness.
The majority of existing studies <cit.> primarily concentrate on optimizing AoI in MEC systems with a single user or server, or under the assumption of fixed computation time and task size. However, in practical scenarios, multiple heterogeneous users and servers are prevalent, prompting further exploration of heterogeneous MEC systems. Nevertheless, minimizing AoI in MEC systems with heterogeneous servers presents two challenges: determining the optimal location for task offloading and deciding the time for this offloading. To this end, we first answer the following question:
How should one minimize the AoI in a MEC system with multiple heterogeneous users and servers?
The task of minimizing AoI is frequently formulated as a Restless Multi-Arm Bandit (RMAB) problem, as it can be optimally solved by value iteration <cit.>. However, such strategies are prone to the curse of dimensionality, necessitating near-optimal solutions with low complexity. A promising method for addressing the RMAB problem is the index policy approach <cit.>, which is particularly suitable for scheduling systems with multiple nodes. This approach can yield near-optimal results with relatively low computational complexity. The effectiveness of the index policy is attributed to two primary factors: the ability to decompose the original problem into several sub-problems with practicable optimal solutions and the potential to express the index in closed form for a specific Markov Decision Process (MDP) structure, thereby reducing computational complexity. Regrettably, neither of these factors can be easily assured: the optimal solution for the sub-problem may not exist, and obtaining the index function is non-trivial due to the presence of multiple state variables in MEC systems with heterogeneous users and servers. This leads us to the following question:
How should we design an index-based policy for RMAB problems with multi-dimensional state variables?
§.§ Solution Approach
In response to this challenge, we suggest a framework where multiple heterogeneous users offload tasks to heterogeneous edge servers. We construct a multi-layer Markov Decision Process model aimed at minimizing the average AoI in MEC. The primary contributions of our research are as follows:
∙Problem Formulation:
We formulate the problem of minimizing average AoI for MEC by optimizing offloading policies and reformulating it as an RMAB problem. To the best of our knowledge, this represents the first formulation of an age-minimal MEC problem that takes into account multiple heterogeneous users and edge servers.
∙Nested Index Approach:
We construct a multi-layer MDP model and, based on this, introduce a nested index framework to solve our RMAB problem. We demonstrate the indexability of the multi-layer MDP for our MEC system and design the corresponding index function. We propose a nested index algorithm with provable asymptotic optimality.
∙Numerical Results:
Our nested index algorithm results in an optimality gap reduction of up to 40% compared to benchmarks. Our algorithm converges to the lower bound as the system scale increases sufficiently.
§ RELATED WORK
§.§ Age-of-Information
Kaul et al. in <cit.> first proposed AoI as a metric to evaluate information freshness. The optimal AoI scheduling policy for sending messages from a source to a monitor through a single channel was studied in <cit.>. In <cit.>, multiple sources could send updates over a single-hop network to a monitor, and an approximate expression for the average AoI was derived. In <cit.>, AoI was minimized by considering multiple sources in queuing systems. In <cit.>, a scheduling policy was proposed to minimize AoI in wireless broadcast networks with unreliable channels. In <cit.>, the structure of optimal policies for the AoI minimization problem was derived, and optimality was proved under both reliable and unreliable channel assumptions. However, there is a lack of research on minimizing AoI in more general MEC systems with multiple heterogeneous sources and edge servers.
§.§ Restless Multi-Arm Bandit
The RMAB problem arises when the state of an arm keeps changing whether or not it is pulled <cit.>. In <cit.>, the problem of minimizing AoI was formulated as an RMAB problem, and Whittle's Index was demonstrated to be optimal when the arms are stochastically identical in a single-hop network. They also mentioned that a classic MDP is always indexable and proved the indexability of certain RMAB problems. Hsu et al. <cit.> assumed that only one user could update at each time slot and obtained Whittle's Index in closed form.
Hsu et al. <cit.> further studied the online and offline versions of the index approach and showed that the index policy was optimal when the arrival rate was constant. All the above index policies can only solve RMAB problems with one-dimensional state variables. However, in general, there exist more factors that affect decisions in wireless networks. Therefore, we need to consider multiple state variables for general wireless networks.
§.§ Mobile Edge Computing
In MEC scenarios, mobile edge servers are well equipped with sufficient computational resources and are close to users, enabling them to expedite the computation process. Yang et al. <cit.> studied the resource management problem in MEC utilizing reinforcement learning approaches. In <cit.>, an MDP-based policy was proposed to determine whether to offload a task and when to transmit it. Zou and Ozel in <cit.> studied the transmission and computation processes of MEC systems as two coupled queues in tandem. The computing time is random in the MEC system. The optimal scheduling policy has non-preemptive <cit.> and preemptive <cit.> structures, respectively. In <cit.>, the optimal scheduling policy under the preemptive structure was shown to have a threshold property, and waiting before offloading was shown to be beneficial for minimizing AoI.
For AoI minimization problems with multiple sources (or users), MDP models and index-based policies were established <cit.>, which have low complexity and are relatively efficient. Such index-based policies have been proved asymptotically optimal for many single-hop wireless network scheduling problems. To summarize, random offloading times and indeterminate computation durations under preemptive techniques pose significant offloading challenges in the MEC system.
§ SYSTEM MODEL
§.§ System Overview
We consider a MEC system with N users who generate computational tasks and offload them to M heterogeneous edge servers, as shown in Fig. <ref>.
Let n ∈𝒩, 𝒩={1, 2, ..., N} be the index of users and m ∈ℳ, ℳ= {1, 2, ..., M} denote the index of edge servers. Let t∈𝒯 be the index of each time slot with 𝒯 = { 1,2,...,T}.
We consider the generate-at-will model <cit.>, i.e., the user can decide whether to generate a new task or not at each time slot t. The transmission time between users and edge servers is negligible, and once the computation of one task completes, its result is immediately sent back to the user. Each user can send a proportion of its task to any server for computing at each time slot <cit.>.
We assume that the edge servers are heterogeneous and the computing time of tasks is stochastic. Each task of user n has a specific workload. When offloaded to a server, the task needs a specific number of CPU cycles to finish computing, and the computing time is based on both the workload and CPU frequency of the chosen server. We assume there is a minimum computing time τ_n^min for the task of user n.
We use the notion AoI to measure the freshness of information. We denote Δ_n(t) as the AoI for user n at time t.
The age of user n decreases to the age of the latest offloaded task when the computing finishes or increases by 1 otherwise.
Let G_n(t) denote the generation time of the most recent task offloaded by user n at time t. Then, the age of user n at time t if the computing finishes can be expressed as
Δ_n(t) = t-G_n(t), ∀ n ∈𝒩.
§.§.§ Offloading Decision
At time t, each user can choose one edge server to offload its tasks. When a task is offloaded, the computation starts at the beginning of each time slot.
We denote y_nm(t)∈{0,1} as the offloading decision variable for user n at time t: y_nm(t)=1 if user n decides to offload a task to server m. When y_nm(t)=0 for all m∈ℳ, no task is offloaded, or the current task is dropped.
Users' offloading decisions are subject to the following constraints:
∑_n∈𝒩∑_m∈ℳy_nm(t)≤ M, ∀ t∈𝒯,
∑_m∈ℳy_nm(t)≤ 1, ∀ n∈𝒩,t∈𝒯,
y_nm(t)∈{1,0}, ∀ n∈𝒩,m∈ℳ,t∈𝒯.
Specifically, constraint (<ref>) means there are at most M servers to be chosen for offloading, and constraint (<ref>) means each user can offload its task to only one server at a time. Constraint (<ref>) states that y_nm(t) is a binary indicator of whether user n offloads a task to server m at time t.
§.§.§ Shifted Geometric Distribution
The transition of AoI during computation obeys a shifted geometric distribution <cit.> with parameter p_m = 1-e^-λ_m for tasks offloaded to server m, where λ_m is the parameter of an exponential distribution.
We consider a minimal computation time for each task of user n, denoted by τ^min_n, i.e., only after τ^min_n time slots does edge server m complete the computation of the task, with probability p_m within each subsequent time slot. Therefore, the transition probability of the AoI of each user n during the computation can be written as
ℙ{ Δ_n(t+1) = Δ_n(t) + 1 |
t-G_n(t)>τ_n^min,y_nm(t)=1} = 1-p_m,
ℙ{ Δ_n(t+1) = t-G_n(t)+1|
t-G_n(t)>τ_n^min,y_nm(t)=1} = p_m,
ℙ{ Δ_n(t+1) = Δ_n(t)+1|
t-G_n(t)≤τ_n^min,y_nm(t)=1} = 1,
ℙ{ Δ_n(t+1) = Δ_n(t)+1| y_nm(t)=0} = 1.
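As a concrete illustration, the transition probabilities above can be simulated directly. The sketch below is a minimal Python example (the function name and the simple zero-wait offloading rule are ours and not part of the model); it estimates the time-average AoI of a single user served by server m.

```python
import random

def simulate_average_aoi(p_m, tau_min, horizon=100000, seed=0):
    """Monte-Carlo estimate of the time-average AoI of one user that offloads
    a fresh task to server m whenever it is idle (an illustrative zero-wait
    policy), following the shifted-geometric transition probabilities above."""
    rng = random.Random(seed)
    age, gen = 1, None          # current AoI Delta(t) and generation time G(t)
    total = 0
    for t in range(1, horizon + 1):
        if gen is not None and t - gen > tau_min and rng.random() < p_m:
            age = t - gen + 1   # computation finished: AoI drops to t - G(t) + 1
            gen = None
        else:
            age += 1            # otherwise the AoI grows by one slot
        if gen is None:
            gen = t             # generate-at-will: offload a fresh task immediately
        total += age
    return total / horizon

print(simulate_average_aoi(p_m=0.6, tau_min=2))
```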
§.§ Problem Formulation
We aim to minimize the overall AoI of the MEC system.
In the following, we formulate the AoI minimization problem.
Let π∈Π denote the scheduling policy, which maps from the system state to the actions of all users.
We define the long-term average AoI <cit.> under policy π as:
limsup_T→∞1/TN∑_t=1^T∑_n∈𝒩𝔼_y∼π[Δ_n(t)],
where we consider the policy π is deterministic stationary <cit.>.
We reformulate the minimization problem of the long-term average AoI into the following form:
min_π limsup_T→∞1/TN∑_t=1^T∑_n∈𝒩𝔼_y∼π[Δ_n(t)]
s.t. (<ref>)
Based on the Lagrangian relaxation <cit.>, we relax the instantaneous constraint (<ref>) to an average constraint, then remove constraint (<ref>) by introducing dual variables ν_m, ∀ m∈ℳ, which decouples the problem into N sub-problems. Define π_n∈Π_n as the policy that maps the state of user n to the action of user n. Given the dual variables, each sub-problem n is formulated as:
min_π_n limsup_T→∞1/TN ∑_t=1^T𝔼_y_n∼π_n[Δ_n(t)+∑_m∈ℳν_m y_nm(t)]
s.t. ∑_m∈ℳy_nm(t)≤ 1, ∀ n∈𝒩, t∈𝒯,
y_nm(t)∈{1,0}, ∀ n∈𝒩,m∈ℳ, ∀ t∈𝒯.
When the dual variables converge, the sum of solutions to each sub-problem (<ref>) reaches the lower bound of the solution to problem (<ref>), i.e., limsup_T→∞1/TN∑_t=1^T∑_n∈𝒩𝔼_y∼π[Δ_n(t)]≥∑_n∈𝒩limsup_T→∞1/TN∑_t=1^T𝔼_y_n∼π_n[Δ_n(t)+∑_m∈ℳν_m y_nm(t)].
We will further study the optimal policy π_n^* for the decomposed problem (<ref>) given {ν_1,ν_2,…,ν_M} by considering a multi-layer MDP.
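Before turning to the multi-layer MDP, the coupling through the dual variables can be made concrete with a small sketch of projected dual (sub)gradient ascent. The routine below is only schematic: `solve_subproblem` is a hypothetical callable standing for an oracle that solves sub-problem (<ref>) for given server costs, and the step size is illustrative.

```python
import numpy as np

def dual_ascent(solve_subproblem, N, M, num_iters=200, step=0.05):
    """Schematic projected dual (sub)gradient ascent for the relaxed problem.

    solve_subproblem(n, nu) is assumed to return an M-vector: the long-run
    fraction of time slots in which user n occupies each server under the
    policy that is optimal for sub-problem n at server costs nu.
    """
    nu = np.zeros(M)                                  # one dual variable per server
    for _ in range(num_iters):
        occupancy = np.zeros(M)
        for n in range(N):
            occupancy += solve_subproblem(n, nu)
        # Subgradient of the dual function: total usage minus capacity (one user per server).
        nu = np.maximum(0.0, nu + step * (occupancy - 1.0))
    return nu
```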
§.§ Multi-Layer MDP
An L-layer MDP is a tuple ⟨𝒮,𝒜, 𝒫, 𝒞,L⟩, where 𝒮 denotes the state space, 𝒜 is the action space, the transition function is 𝒫:𝒮×𝒜→ PD(𝒮), the cost function is 𝒞:𝒮×𝒜→ℝ, and L∈ℤ_+ is the number of layers. Denote 𝒮_l as the state space at layer l, with 𝒮_l⊂ℕ^l and 𝒮=𝒮_1∪𝒮_2∪⋯∪𝒮_L. An L-layer MDP fulfills the following conditions:
* ∀ 0<l<L and ∀ s∈𝒮_l, there exist some a∈𝒜 and s'∈𝒮_l+1 that satisfies ℙ{s' | s, a}>0,
* ∀ 0<l≤ L and ∀ s∈𝒮_l, there exist some a∈𝒜 and s”∈𝒮_l that satisfies ℙ{s”| s, a}>0,
* ∀ L≥ l> 1, there exists some s∈𝒮_l, s”'∈𝒮_1, and a∈𝒜 that satisfies ℙ{s”'| s, a}>0,
and we term the sub-space 𝒮_l as layer l.
The multi-layer MDP defines the transition probability among states at different layers. The state at layer l should be able to transit to states at layer l, l+1, and layer 1. In a multi-layer MDP, states only transit among neighbor layers, which gives insights into the analysis of multi-dimensional state variables.
Now, we specify the multi-layer MDP for the MEC system. Each sub-problem (<ref>) can be formulated as a 2-layer MDP:
* Action space: Let 𝒜={0,1}^M be the action space for each user, and the action of user n, denoted y_n(t)={y_n1(t),y_n2(t),…,y_nM(t)}∈𝒜, contains the offloading decisions. The action vector y_n(t) has at most one element equal to one; all other elements are zero.
* State space: Let 𝒮_l denote the state space for each user at layer l. Recall that Δ_n(t) denotes the age of user n and G_n(t) denotes the generation time of the latest task of user n. An idle user n is at layer 1, with state s_n(t)=Δ_n(t)∈𝒮_1. A user waiting for the result of a computation is at layer 2. Let D_n(t)=Δ_n(G_n(t)) denote the age of user n when the latest task was generated. Thus we have state s_n(t)=(Δ_n(t),D_n(t))∈𝒮_2.
* Transition function:
In the previous section, we derived the transition probability in terms of Δ_n(t); however, Δ_n(t) alone cannot fully characterize the transition of the states in the multi-layer MDP. We use q^ss'_nm to denote the transition probability of user n from state s to next state s' when choosing server m at time t, and we have
q_nm^ss'=ℙ{s' | s, y_nm(t)=1},
where q_nm^ss' can be derived from the transition probability in Section <ref>.
* Cost function: We define the immediate cost as
C_n(s_n(t),m)≜Δ_n(t) + ν_m ,
which includes the current AoI and the server cost.
For example, we have q_nm^ss'=p_m when s=s_n(t), s'=s_n(t+1) and (<ref>) holds.
According to <cit.>, it is straightforward to show that there exists a deterministic stationary policy π_n^* that achieves the optimal average AoI. However, value iteration for deriving the optimal policy suffers from the curse of dimensionality <cit.>; we therefore seek an approach with lower complexity that is near-optimal.
For a user at layer 1, we have ∀ m≠ 0:
ℙ{ s_n(t+1)=Δ_n(t+1)=A+1| s_n(t)=Δ_n(t)=A,
y_n(t)=0}=1.
The user will transit from layer 1 to layer 2 by offloading a task to server m and will stay at layer 1 if no task is offloaded. For a user at layer 2, we have ∀ m≠ 0 and A-D>τ_n^min:
ℙ{ s_n(t+1)=Δ_n(t+1)=A-D+1|
s_n(t)=(Δ_n(t)=A,D_n(t)=D),y_n(t)=m}=p_m,
ℙ{ s_n(t+1)=(Δ_n(t+1)=A+1,D_n(t+1)=D)|
s_n(t)=(Δ_n(t)=A,D_n(t)=D),y_n(t)=m}=1-p_m.
The user transits from layer 2 to layer 1 if the computation finishes or the user decides to drop the current task, and stays at layer 2 if the computation is not finished. Given these state transitions among layers, each sub-problem (<ref>) is a 2-layer MDP.
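The layer transitions listed above can be encoded as a one-slot simulator. The following sketch is ours: it carries the chosen server m inside the layer-2 state purely for bookkeeping, which the analytical model keeps implicit, so it should be read as an illustration rather than as part of the formal MDP.

```python
import random

def step(state, action, p, tau_min, rng=random):
    """One-slot transition of the 2-layer MDP for a single user.

    state : A (int)           -- layer 1, the user is idle with age A
            (A, D, m) (tuple) -- layer 2, a task generated at age D is being
                                 computed at server m (m kept only for simulation)
    action: 0    -> stay idle / keep waiting
            m>=1 -> offload a fresh task to server m (only valid at layer 1)
    p     : dict mapping server index m to its per-slot success probability p_m
    """
    if isinstance(state, tuple):                 # layer 2
        A, D, m = state
        if A - D > tau_min and rng.random() < p[m]:
            return A - D + 1                     # result returned: back to layer 1
        return (A + 1, D, m)                     # computation still running
    A = state                                    # layer 1
    if action == 0:
        return A + 1                             # no offloading, age keeps growing
    return (A + 1, A, action)                    # offload: next slot at layer 2 with D = A
```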
§ INDEX-BASED POLICY
In this section, we introduce a nested index approach to our RMAB problem, which is proven to be an asymptotically optimal offloading policy. First, we define the nested index and prove that the 2-layer MDP for MEC systems fulfills the indexability condition. Next, we propose the nested index policy to schedule tasks. In addition, we also verify the asymptotic optimality of the proposed approach and obtain the nested index function in a closed form.
§.§ Nested Index
We first introduce the following definition of a passive set based on <cit.>. Define ν≜(ν_1,⋯,ν_M) as the vector of activating costs, where each ν_m is the server cost of choosing server m for computation. We focus on sub-problem (<ref>) with given cost ν. We introduce the cost-to-go function, a prediction of future cost used to evaluate the value of state s. Denote the optimal average cost of sub-problem n as γ_n^*, which is the minimum cost per stage. We can write the Bellman equation of each sub-problem (<ref>) as:
γ_n^*+V_n(s, ν)=
min_m∈ℳ,l∈ℒ[ C_n(s,m)+∑_s'∈𝒮_lq^ss'_nmV_n(s',ν)],
where function V_n(s, ν) is the differential cost-to-go <cit.>.
Let
μ_nm(s,ν)=C_n(s,m)+∑_s'∈𝒮q^ss'_nmV_n(s',ν)-γ^*_n
denote the expected cost of choosing server m given state s.
The decision process of our multi-layer MDP differs from that of Whittle's index: it involves multi-dimensional state variables and multiple feasible actions, which motivates us to consider a multi-layer index structure.
The passive set for user n to transit to layer l at server m given activating cost ν is denoted as:
𝒫_nm^l(ν)≜
{s∈𝒮_l |min_m'∈ℳ, m'≠ m μ_nm'(s,ν)≤μ_nm(s,ν)}.
We denote 𝒫_nm(ν)≜∪_l=1^L𝒫_nm^l(ν) as the overall passive set.
The passive set refers to the set of states at layer l that are sub-optimal for selecting server m for computing with activating costs ν.
In classic RMAB problems <cit.>, the activating cost ν is a scalar. Whittle <cit.> stated that if the cardinality of the passive set increases monotonically from 0 to +∞ as the activating cost ν increases from 0 to +∞, the problem is indexable. Each state is then assigned a maximal activating cost at which taking the action and remaining passive yield the same cost-to-go. As Whittle stated <cit.>, this activating cost measures the urgency of the state: a state with a higher activating cost has a higher priority for selection.
In MEC systems, we have multiple actions to choose from for each user, and we have heterogeneous servers with varying activating costs, which complicates the definition of indexability. Therefore, we need to design a more sophisticated index technique.
In a Multi-Layer MDP ⟨𝒮,𝒜, 𝒫, 𝒞,L⟩, given servers cost ν, if for any layer l, the cardinality of passive set |𝒫_nm^l(ν)| increases monotonically to the cardinality |𝒮_l| of layer l as cost ν_m for server m increases from 0 to +∞, then this multi-layer MDP is intra-indexable.
The intra-indexability describes the relation between the server costs and the optimal states for choosing server m at each layer. Given layer l, for each state s_n(t) there exists a largest server cost ν_m' below which s_n(t) is not included in the passive set 𝒫^l_nm(ν) at layer l, and the monotonicity guarantees the uniqueness of such an activating cost. The cardinality of the passive set over all layers, |𝒫_nm(ν)|, is non-decreasing in ν if |𝒫_nm^l(ν)| is non-decreasing in ν, ∀ 1≤ l≤ L.
It is non-trivial to derive the optimal states at layer l for server m directly from the Bellman equation (<ref>). We can, however, establish a structural property of the optimal solution of each sub-problem (<ref>). Since there are multiple layers in the multi-layer MDP, the optimal solution has a Multi-Layer-Threshold Type (MLTT) structure:
Denote m=max_m∈ℳ p_m as the index of the server that owns the best computational performance. If the following two conditions hold:
* If user n is at layer 1 with state s_n(t)=(Δ_n(t)=A):
* for any server m'≠m, m'∈ℳ, there exists H_n(m',m,0) such that ∀ A≥max_m'{H_n(m',m,0)}, π_n(s_n(t))= m;
* for any two states s_n(t), s_n(t') which fulfill Δ_n(t)<Δ_n(t'), then p_π_n(s(t))≤ p_π_n(s(t'));
* If user n is at layer 2 with state s_n(t)=(Δ_n(t)=A,D_n(t)=D):
* given D and any server m'≠m, there exists H_n(m',m,D) such that ∀ A≥ H_n(m',m,D), π_n(s_n(t))= m;
* if D_n(t)=D_n(t'), p_π_n(s(t))≤ p_π_n(s(t'));
then the offloading policy π_n for sub-problem (<ref>) has a Multi-Layer-Threshold Type (MLTT) structure.
The MLTT structure shows some common properties for optimal thresholds at both layer 1 and layer 2.
The proposition proposes that a threshold exists for a user when choosing between two servers, and the threshold depends only on the current age of the user given the same age at generation. Due to the limited space, all the
proofs are provided in the online appendix <cit.>.
The optimal solution π_n^* to the sub-problem (<ref>) is MLTT.
The proof is shown in Appendix <ref>.
The MLTT structure is a stricter property than a plain threshold policy, as different actions have their own optimal thresholds on the age of a user, and each threshold is determined by the layer l and the generating age D_n(t). By utilizing the MLTT property of the solution, we can show the intra-indexability of sub-problem (<ref>).
The MDP sub-problem (<ref>) is intra-indexable given cost ν.
The proof is shown in Appendix <ref>.
Since the AoI minimizing problem in MEC systems is a 2-layer MDP, we design thresholds for actions at both layers. The intra-indexability property guarantees that there is one unique threshold for each server at each layer, and the size of the passive set for one server at each layer also increases as the server cost increases. Therefore, we can still use the index to represent the urgency of a state, and the comparison of states among layers is possible. We can then define the index for our 2-layer MDP.
Let ν_-m denote the activating cost of edge servers except for server m. The nested index for taking y_n(t)=m at state s_n(t) is defined as
I_nm(s_n(t), ν) ≜
max[0, inf{ν_m | min_m'∈ℳμ_nm'(s_n(t),[ν_-m,ν_m])
<μ_nm(s_n(t),[ν_-m,ν_m])}].
The nested index allows us to characterize the urgency of each state. Compared with the partial index <cit.>, the nested index requires the costs of the servers other than server m, and it captures the urgency of transiting to different layers.
Fig. <ref> illustrates the relationship between the nested index and the MDP. Fig. <ref> compares the decision of offloading a task to server m at neighboring time slots for user n at layer 1. The optimal threshold H_n(m,m',0) decreases as the server cost ν_m increases from 0 to ∞. The infimum of ν_m that makes s_n(t), i.e., Δ_n(t), the optimal state to offload a task to server m is the corresponding nested index. Fig. <ref> illustrates the nested index at layer 2. For simplicity, we consider user n with the same generating age at time t and t+1, i.e., D_n(t)=D_n(t+1). The nested index can also be derived by adjusting ν_m when it is optimal to offload a task to server m at time t. The nested index gives the urgency of offloading a task for a user in state s_n(t).
§.§ Hierarchical MDP Formulation for the Non-preemptive Condition
In Section III, we established the hierarchical MDP for a preemption-enabled MEC system. However, the model is quite different when preemption is not allowed. We now define a new 2-layer hierarchical MDP for this situation.
State space: In this new hierarchical MDP, we still use A_n(t) to denote the current age of user n at time slot t. We then introduce another state variable B_n(t), the accumulated age since the latest update. As preemption is not allowed, the user cannot take any other action during the computation. Therefore, we use k to index the time slots at which the user is idle. The state of user n at time k can be written as s_n(k)=(A_n(k),B_n(k)).
Cost function: We can define the immediate cost in the following form:
C(s_n(k),y_n(k))= B_n(k)+∑_m∈ℳν_my_nm(k),
where B_n(k) is the accumulated age since the latest update.
The optimal policy π_n^* for the sub-problem (<ref>) under non-preemptive condition is MLTT, and the corresponding MDP is intra-indexable.
The proof is shown in Appendix F.
§.§ Nested Index Policy
Based on the nested index derived at each time slot, the central actuator can schedule the tasks of all users following the nested index policy. Define u_nm as the decision variable for user n on server m, and w_nm=I_nm(s_n(t),ν(t)) is the decision weight given by the nested index at time t.
We will solve the following binary decision scheduling problem at each time slot t∈𝒯:
max_u ∑_n∈𝒩∑_m∈ℳ I_nm(s_n(t),ν_t-1)u_nm
s.t. ∑_n∈𝒩u_nm≤ 1, ∀ m∈ℳ,
∑_m∈ℳu_nm≤ 1, ∀ n∈𝒩,
u_nm∈{0,1} ,∀ n∈𝒩, ∀ m∈ℳ.
We then make offloading decisions according to y_nm(t)=u_nm at the solution. The decision variable u_nm represents the policy that maps from the current state s_n(t). The mapping process from the current state s_n(t) to the action variable y_nm(t) is named the nested index policy.
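Problem (<ref>) is a one-to-one assignment between users and servers with the nested indices as weights, so any standard assignment solver can be used at each slot. A minimal sketch (using SciPy's Hungarian-method routine and padding with zero-weight dummy columns so that a user may also remain idle; the helper name is ours) is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule(index_matrix):
    """index_matrix: N x M array, entry (n, m) = I_nm(s_n(t), nu_{t-1}).
    Returns an N x M 0/1 decision matrix u with at most one user per server
    and at most one server per user, maximizing the total index."""
    N, M = index_matrix.shape
    # Pad with N zero-weight dummy "idle" columns so every user can stay unscheduled.
    padded = np.hstack([index_matrix, np.zeros((N, N))])
    rows, cols = linear_sum_assignment(padded, maximize=True)
    u = np.zeros((N, M), dtype=int)
    for n, m in zip(rows, cols):
        if m < M and padded[n, m] > 0:   # only schedule genuinely beneficial matches
            u[n, m] = 1
    return u
```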
In Algorithm <ref>, we compute the nested index I_nm(s_n(t),ν_t-1) for each user via Eq. (<ref>) and obtain the optimal solution y_nm(t) of problem (<ref>), which is a simple linear programming problem. Next, the solution of problem (<ref>) is mapped to offloading and computing decisions to schedule tasks. We also collect state updates from the edge servers, the offloading decisions, and the computing decisions. Finally, we update the activating cost ν_t. We execute lines 4-8 at each time slot until the nested index policy converges.
The computation of the index value can be very complex, so we give an approximation of the nested index given s_n and ν_-m, ∀ 0<m<M.
Given s_n(t)=(Δ_n(t),D_n(t)), the index function satisfies
I_nm(s_n(t),ν)=ν_m-1+Δ_n(t)-γ_n^*.
The index for server m can be derived by solving I_nm(s_n(t),ν)=ν_m.
The proof is given in Appendix <ref>. The index for other layers can be similarly derived within finite steps of computation. We derive the optimal average cost γ_n^* by the technique similar to that used in <cit.>, which involves solving a set of a finite number of equations. This reduces the complexity when computing the index function and makes our algorithm more feasible.
§.§ Fluid Limit Model
We use a fluid limit argument to show the optimality for the index policy in Algorithm 1 as in <cit.>.
The fixed point solution is the solution for the fluid limit model of the original problem.
We will show that the fixed point of the fluid limit model for problem (<ref>) is equivalent to that of problem (<ref>).
We define the fluid fixed point and the fluid limit model as follows. Let z_ns∈ [0,1] denote the fraction of user n in state s, where ∑_s∈𝒮z_ns=1. Let x_nm^s∈[0,1] denote the fraction of user n in state s assigned to server m, given by the optimal solution of the relaxed problem (<ref>). Let ν^* be the associated dual variable upon convergence, and let (x^*,z^*,ν^*) represent the fluid fixed point of the following fluid limit reformulation of problem (<ref>):
min_x,z ∑_n∈𝒩∑_s∈𝒮∑_m∈ℳz_nsC_nsx_nm^s
s.t. ∑_n∈𝒩∑_s∈𝒮z_nsx_nm^s≤ 1, ∀ m∈ℳ,
∑_m∈ℳx_nm^s≤ 1, ∀ n∈𝒩, ∀ s∈𝒮,
∑_s∈𝒮z_ns= 1, ∀ n∈𝒩,
z_ns,x_nm^s∈[0,1],∀ n∈𝒩, ∀ m∈ℳ,∀ s∈𝒮,
∑_s'∈𝒮z_ns∑_m∈ℳx_nm^s q^ss'_nm= ∑_s'∈𝒮z_ns'∑_m∈ℳx_nm^s'q^s's_nm,
∀ n∈𝒩,∀ s∈𝒮.
where (<ref>) is a fluid balance constraint <cit.>.
We can similarly derive the fluid limit reformulation (<ref>) of the scheduling problem (<ref>). Denote z'_ns as the fraction of user n in state s, v_nm^s as the fraction of user n assigned to server m under the index policy, and ν'^* as the dual variable of the relaxed version of the scheduling problem (<ref>) upon convergence. Let (v^*,z'^*,ν'^*) be the fixed point solution; then we have:
max_v,z' ∑_n∈𝒩∑_s∈𝒮∑_m∈ℳz_ns'w_nm^sv_nm^s
s.t. ∑_n∈𝒩∑_s∈𝒮z_ns'v_nm^s≤ 1, ∀ m∈ℳ,
∑_m∈ℳv_nm^s≤ 1, ∀ n∈𝒩,∀ s∈𝒮.
Then, we can evaluate the performance of our nested index policy based on the fluid limit model for both problems.
The fixed point solution to problem (<ref>) is equivalent to the solution (the fluid fixed point) to problem (<ref>), i.e., we have
(v^*,z'^*,ν'^*)=(x^*,z^*,ν^*).
The proof is shown in Appendix <ref>. The equivalence of fixed point solution builds the connection between problem (<ref>) and problem (<ref>). Though problem (<ref>) follows an instantaneous constraint (<ref>), its fixed point solution still reaches the optimality at the fluid limit, which contributes to the asymptotic optimality of our policy in Algorithm <ref>.
By scaling a system by r, we scale the number of users N^r and servers M^r by r proportionally, i.e., let N^r=r· N, M^r=r· M while keeping N^r/M^r a constant.[The system parameters τ and p are also scaled proportionally, i.e., τ^r=[τ,τ,…,τ]∈ℤ_+^1× N^r and p^r=[p',p',…,p']∈[0,1]^M^r× N^r, where p'=[p^T,p^T,…,p^T]^T∈[0,1]^M^r× N.]
Under a mild global attractor assumption,
the expected objective V^π_r for problem (<ref>) under the nested index policy π achieves the optimal objective V^* for the fluid limit model of problem (<ref>) asymptotically, i.e.,
lim_r→+∞ V^π_r=V^*.
We refer to <cit.> for the details of the global attractor assumption. The equivalence of the fixed point solution shows the accordance of the fluid limit model for both problems. Under a mild global attractor assumption, the objective of problem (<ref>) under our nested index converges to the optimal cost of problem (<ref>).
§ NUMERICAL RESULTS
In this section, we perform numerical studies to evaluate the performance of our nested index algorithm and verify its convergence property. We simulate an MEC system with N=50 users divided into 6 groups, with minimal computing times
τ^min=[2, 4, 8, 16, 32, 64], successful updating probabilities p=[0.8, 0.7, 0.6, 0.5, 0.3, 0.1], and group sizes [5,10,5,5,10,15], respectively. We set β=50 and simulate T=10000 slots. In the simulation, we mainly test the performance of the nested index when solving a multi-layer MDP.
§.§ Convergence of the Cost Update
We compare the dynamics of the cost updates of the proposed nested index policy with that of the optimal solution to the problem (<ref>). The optimal solution in Fig. <ref> represents the dynamics of the server cost ν of the relaxed problem. We obtained a new cost ν(t) at each time slot by dual gradient ascent. In Fig. <ref>, the cost of the server smoothly converges to a small neighborhood of the optimal cost.
Next, we verify the server cost dynamics of the proposed index-based policy as the system scale increases. Fig. <ref> shows the dynamics of the cost update at scales r=2 and r=20. As the scale r increases, the cost under the proposed index-based policy approaches the optimal value of the dual cost.
§.§ Average AoI
We evaluate the average AoI performance of our proposed policy. We use the optimal solution to problem (<ref>) as the lower bound of the index policy <cit.> for comparison.
We consider three benchmark policies in the experiments: Max-Age Matching Policy (MAMP), Max-Age Reducing Policy (MARP), which are both greedy policies, and Rounded Relax Policy (RRP).
* The MAMP chooses users with the highest current AoI.
* The MARP takes the transition probability of the Markov chain into consideration. Define the weight in this policy as w_n=Δ_n(t)+1/p_n(Δ_n(t)-D_n(t)), which approximates the expected reduction of the optimality gap (see the sketch after this list).
* The RRP is derived from the solution of the relaxed problem.
The RRP chooses users uniformly at random to satisfy the feasibility when violating the constraint (<ref>).
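For reproducibility, the two greedy baselines reduce to simple per-user weights. The sketch below is our reading of them, where the MARP weight is interpreted as Δ_n(t)+(1/p_n)(Δ_n(t)-D_n(t)); this reading of the flattened formula is an assumption.

```python
import numpy as np

def greedy_weights(age, gen_age, p, policy="MAMP"):
    """Priority weights of the greedy baselines for all users.

    age, gen_age : arrays holding Delta_n(t) and D_n(t)
    p            : array of per-user success probabilities p_n
    MAMP ranks users by their current age; MARP uses
    Delta_n(t) + (1/p_n) * (Delta_n(t) - D_n(t)).
    """
    age = np.asarray(age, dtype=float)
    if policy == "MAMP":
        return age
    return age + (age - np.asarray(gen_age, dtype=float)) / np.asarray(p, dtype=float)

print(greedy_weights([10, 4], [6, 1], [0.5, 0.8], policy="MARP"))  # [18.  7.75]
```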
Fig. <ref> evaluates the average AoI under the nested index policy, MAMP, MARP, RRP, and the lower bound of problem (<ref>) with r=20. The greedy policies MAMP and MARP are about 40% worse than the nested index policy, and RRP is 21% worse than our approach when r=20. The normalized system AoI gets closer to the optimal AoI of the relaxed problem as the system scale r increases. Fig. <ref> shows the normalized AoI of the system, i.e., the average age per user. The normalized AoI decreases almost monotonically as r increases, which supports the asymptotic optimality of our proposed policy.
§ CONCLUSION
In this study, we explored the minimization of AoI in a MEC system with heterogeneous servers and users. We formulated the problem as a two-layer MDP and introduced a novel nested index. We devised a scheduling policy that employs the nested index, ensuring the asymptotic optimality of the average expected AoI of the MEC system as the system scale expands. We also derived the computation of the nested index, which exhibits lower computational complexity. Through simulation, we demonstrated that our algorithm converges and delivers near-optimal performance.
§ ACKNOWLEDGMENTS
The research leading to these results received funding from Beijing Municipal Natural Science Foundation under Grant Agreement Grant No. 4224092. In addition, it received funding from National Key R&D Program of China (2022ZD0116402). It was also supported by National Natural Science Foundation of China grant 62202427.
IEEEtran
Minimizing Age of Information for Mobile Edge Computing Systems: A Nested Index Approach – Online Appendix
§ PROOF OF PROPOSITION <REF>
We drop subscript n to simplify the notation, as the proposition holds for each sub-problem. We suppose π^* is the optimal policy under the optimal server cost ν^*. We first show the monotonicity of the optimal cost-to-go function under π^*.
Denote the state of a user at layer 2 as s=(A,D). The cost-to-go function under π^* is non-decreasing in A, i.e.,
A≤ A',D=D' ⇒ V((A,D),ν)≤ V((A',D),ν),
V((A,D),ν)-V((A-D),ν)
≤ V((A',D),ν)-V((A'-D),ν).
The proof exploits Proposition 3.1 in <cit.>, which extends the average cost within finite steps to the infinite horizon. We show the first inequality fulfills the condition of Proposition 3.1 in <cit.>, and use backward induction to derive the second inequality.
We consider the following state transition function f:𝒮×𝒜×𝒲→𝒮, i.e., s_n(t+1)= f(s_n(t),y_n(t),w_t), where w_t∈𝒲 is the information process at time t <cit.>. Given s'=(A',D') and s≼ s', we have:
* For every a∈𝒜 and w∈𝒲, the state transition function satisfies f(s,a,w)≼ f(s',a,w): at state s the next state is either (A+1,D) or (A+1-D), and given D=D' we have (A+1,D)≼ (A'+1,D') and A-D≤ A'-D;
* Denote g_m(s)=C(s,a)=ν_m+(1-p_m)A+p_m(A-D)=ν_m+A-p_m· D as the per stage cost. We also have g_m(s)≤ g_m(s');
* w_t+1∈𝒲 is independent of state s∈𝒮.
By Proposition 3.1 in <cit.>, we conclude the inequality V_T((A,D),ν)≤ V_T((A',D),ν) of the T-stage cost minimization problem, where V_T(·) is the T-stage value function. By utilizing the convergence of the value iteration <cit.>, we can derive the first inequality in (<ref>).
For the second inequality in (<ref>), we also consider the T-stage value function. Given V_t+1((A,D),ν)-V_t+1((A-D),ν)≤ V_t+1((A',D),ν)-V_t+1((A'-D),ν). We have
𝔼[V_t+1(f((A,D),a,w_t+1),ν)| S_t=(A,D),a_t=a]
-𝔼[V_t+1(f((A-D),b,w_t+1),ν)| S_t=(A-D),a_t=b]
≤𝔼[V_t+1(f((A',D),a,w_t+1),ν)| S_t=(A',D),a_t=a]
-𝔼[V_t+1(f((A'-D),b,w_t+1),ν)| S_t=(A'-D),a_t=b].
We also have
V_t(s,ν)=min_a∈𝒜[ C(s,a)
+𝔼[V_t+1(f(s,a,w_t+1),ν)| S_t=s,a_t=a]].
Since
C((A,D),a)-C((A-D),b)
= ν_m-ν_m'+(1-p_m)D
= C((A',D),a)-C((A'-D),b),∀a,b∈𝒜,
we can derive V_t((A,D),ν)-V_t((A-D),ν)≤ V_t((A',D),ν)-V_t((A'-D),ν) by adding (<ref>) to (<ref>). By using backward induction, it holds for all t.
Now, we will verify the MLTT property for sub-problem (<ref>). For any user with state s=A at layer 1, we compare the expected cost-to-go of choosing server m and m' when p_m> p_m':
μ_m'(A,ν)- μ_m(A,ν)=
τ^min·ν_m'
+(1-p_m')[A+τ^min+V((A+τ^min+1,A),ν)]
+p_m' V(τ^min,ν)
-τ^min·ν_m
-(1-p_m)[A+τ_n^min+ V((A+τ^min+1,A),ν)]
-p_m V(τ^min,ν).
Eq. (<ref>) is a function of variable A, and can be rewritten as:
μ_m'(A,ν)- μ_m(A,ν)=
⋯+(p_m-p_m')[A+ V((A+τ^min+1,A),ν)
- V(τ^min+1,ν)],
where we omit terms irrelevant to A. Since p_m>p_m', Eq. (<ref>) is strictly monotonically increasing in A. Therefore, there must exist a threshold H(m,m',0)=A^m,m', where we have μ_m(·)=μ_m'(·).
For any user with state s=(A,D) at layer 2, we have
μ_m'((A,D),ν)- μ_m((A,D),ν)=
⋯+(p_m-p_m')[A+ V((A+1,D),ν)
- V(A-D+τ^min+1,ν)].
The difference between the two cost-to-go is also monotonically decreasing in A, hence the sub-problem (<ref>) is MLTT.
For Condition II, we prove the claim by contradiction. Suppose A_1<A_2 while p_m^*_1>p_m^*_2. Then
μ_m^*_1((A_2,G),ν)- μ_m^*_2((A_2,G),ν)≤
τ_m^*_1^min·ν_m^*_1-τ_m^*_2^min·ν_m^*_2+(p_m^*_2-p_m^*_1)[A_2+τ_m^*_2+ V((A_2,G),ν)]≤
⋯+(p_m^*_2-p_m^*_1)[A_1+ V((A_1,G),ν)]≤
0.
The expression above is decreasing in the age, therefore μ_m^*_1((A_2,G),ν)≤μ_m^*_2((A_2,G),ν) holds, which contradicts Condition II.
For Condition III, we have such inequity:
μ_m/p_m=[μ_m(p_m-p_m')]/[p_m(p_m-p_m')]>[μ_m p_m-μ_m'p_m]/[p_m(p_m-p_m')]=[μ_m-μ_m']/[p_m-p_m'].
Combining Eq. (18) with A=G, we have μ_m(A,G,ν)<μ_m'(A,G,ν). Therefore, server m' is never optimal.
§ PROOF OF THEOREM <REF>
For simplicity, we specify the order of p_m, i.e., p_m-1≤ p_m,∀ 1<m≤ M. To prove the intra-indexability, we have to introduce the following lemma:
Given server cost ν and state s=(A,D) at layer 2, denote ν'=[ν_1,⋯,ν_m+Δ,⋯,ν_M], ∀Δ≥ 0. The difference between the two cost-to-go functions given ν and ν' can be upper-bounded by
V_n(s,ν')-V_n(s,ν)<Δ/p_m^2,∀ 1≤ m≤ M.
Denote p_M+1≜ 1. Under the condition in Lemma <ref>, the difference between the two cost-to-go functions can be lower-bounded by
V_n(s,ν')-V_n(s,ν)>-Δ/p_m+1^2,
∀ 1≤ m≤ M, A≥ H_n(m,m+1,D).
The proof is shown as follows. The minimizing AoI problem can be seen as a stochastic shortest path (SSP) problem <cit.>. Recall the optimal cost for user n is denoted as γ_n^*. The cost-to-go function can be rewritten as:
V_n(s,ν)= min_π𝔼_π[the cost from state s to the recurrent state
for the first time]-
𝔼_π [the cost from state s to the recurrent state
with stage cost γ_n^*].
Unlike appendix B in <cit.>, the recurrent state for reference is not defined as the state with minimum AoI, but we can set any state at layer 1 as a reference recurrent state with zero cost-to-go. The expected cost from s=(A,D) to recurrent state ζ given policy π_n and server cost ν can be denoted as Cost^π_n_sζ(ν), and the expected step can be denoted as N_sζ^π_n. Considering recurrent states ζ=(j),∀ j≥ A-D, we have <cit.>
V_n(s,ν')-V_n(s,ν) ≤Cost^π_n_sζ(ν')-Cost^π_n_sζ(ν).
≤∑_a=A^∞[g_π_n((a,D),ν')-g_π_n((a,D),ν)]∏_j=a^A-1(1-q^π_n_(j,D)ζ)
=∑_a:π_n((a,D))=mΔ·∏_j=a^A-1(1-q^π_n_(j,D)ζ).
where
Cost^π_n_sζ(ν')-Cost^π_n_sζ(ν)
= Δ· [the expected time of hitting state s, ∀π_n(s)=m
when transiting from s to ζ under policy π_n].
According to Proposition <ref>, there exists H_n(m-1,m,D), ∀ D. We can derive the upper bound by considering the following policy π_n': for all s=(A,D) with D < H_n(m-1,m,0) and A≥max_d H_n(m-1,m,d), set π_n'(s)=m, and for any other s', set π_n'(s')=1. Under such a policy π_n', we have p_π_n'(s)≤ p_π_n(s) for all s, i.e., the probability of finishing the computation at each state becomes lower, and |{s|π_n'(s)=m}|>|{s|π_n(s)=m}|. Therefore, we have
Cost^π_n_sζ(ν')-Cost^π_n_sζ(ν)≤Δ· N_sζ^π_n'.
We have
N_sζ^π_n' =p_m· 1+∑_i=2^∞ p_m(1-p_m)^i-1(i+N_sζ^π_n')
=p_m∑_i=1^∞ i(1-p_m)^i-1+(1-p_m)N_sζ^π_n'
=1/p_m+(1-p_m)N_sζ^π_n',
so that N_sζ^π_n' =1/p_m^2.
Therefore,
V_n(s,ν')-V_n(s,ν)≤Δ/p_m^2.
Similar to Lemma 4.4 in <cit.>, we also have
V_n(s,ν')-V_n(s,ν)≥(γ^*_n-γ^*'_n)· N_sζ^π_n≥ -Δ/p_m+1^2,
∀ A≥ H_n(m,m+1,D),
where γ^*' is the optimal stage cost under server cost ν'.
To prove the sub-problem (<ref>) is intra-indexable, we have to show the following two claims:
* If s=(A,D)∈𝒫_nm^l(ν), then s ∈𝒫_nm^l(ν') must hold for ν'=[ν_1,⋯,ν_m+Δ,⋯,ν_M], ∀Δ≥ 0 ,m∈ℳ.
* If ν_m→+∞, then lim_ν_m→+∞𝒫_nm^l(ν')=𝒮_l.
We have shown that the optimal policy for the MDP is MLTT. Recall that the threshold splits server m-1 and m given generate age d is denoted as H_n(m-1,m,d). Then, the passive set for layer 2 can be expressed as <cit.>:
𝒫_nm^l(ν)=
∪_d=1^∞{(a,d)|
a∈{1,⋯,H_n(m-1,m,d)-1}∪{H_n(m,m+1,d),⋯}},
∀ 1<m<M,
𝒫_nM^l(ν)=
∪_d=1^∞{(a,d) | a∈{1,⋯,H_n(M-1,M,d)-1}}.
To show statement (i), we want to show 𝒫_nm^l(ν)⊆𝒫_nm^l(ν'), we can instead show that for all d≥ 1:
(a) H_n(m-1,m,d) ≤ H_n'(m-1,m,d) , ∀ m ≤ M;
(b) H_n(m,m+1,d)≥ H_n'(m,m+1,d), ∀ m< M;
We first show (a). For contradiction, suppose H_n(m-1,m,d) > H_n'(m-1,m,d). At state (H_n'(m-1,m,d), d), taking server m-1 has a smaller expected cost, and we have:
(p_m-p_m-1)[H_n'+ V_n(H_n',ν)
-H_n'+d
-V_n((H_n'+1-d,H_n'+1-d),ν)]≤ν_m-ν_m-1,
where we denote H_n= H_n(m-1,m,d) and H_n'= H_n'(m-1,m,d) for simplicity.
If given ν', taking server m-1 has a smaller expected cost, i.e.,
(p_m-p_m-1)[H_n' + V_n((H_n'+1,d),ν')-H_n'+d
-V_n((H_n'+1-d,H_n'+1-d),ν')]≤ν_m'-ν_m-1.
Taking the difference of Eq. (<ref>) and Eq. (<ref>), we have
V_n((H_n'+1,d),ν')
- V_n((H_n'+1,d),ν)
-[V_n((H_n'+1-d,H_n'+1-d),ν')
-V_n((H_n'+1-d,H_n'+1-d),ν)]
≥Δ/p_m-p_m-1,
when p_m-p_m-1≤ p_m^2, it contradicts the upper bound in Lemma 3.
Then, we show the sufficiency of condition (b). For contradiction, suppose H_n(m,m+1,d)< H_n'(m,m+1,d). At state s=(H_n'(m,m+1,d)-1,d) the age satisfies H_n'(m,m+1,d)-1≥ H_n(m,m+1,d), so policy π_n prefers server m+1 over server m, and, writing now H_n'= H_n'(m,m+1,d), we have
(p_m+1-p_m)[H_n'+ V_n((H_n' +1,d),ν)]
≤ν_m+1-ν_m.
Similarly, policy π_n' prefers server m at state s=(H_n'(m,m+1,d)-1,d), i.e.,
(p_m+1-p_m)[H_n' + V_n((H_n'+1,d),ν')]
≤ν_m+1'-ν_m'=-Δ.
For p_m+1-p_m>0, we have
V_n((H_n'+1,d),ν')
- V_n((H_n'+1,d),ν)
≤ -Δ/p_m+1-p_m,
when p_m+1-p_m≤ p_m+1^2, contradicts the lower bound derived in lemma 4.
Therefore, the cardinality of passive set 𝒫_nm(ν) grow monotonically to |𝒮| as server cost ν_m increases from 0 to +∞.
Here we consider a strict condition that p_m-p_m-1≤ p_m^2,∀ 1<m≤ M due to that we obtain a rough upper bound on N_sζ^π_n'. In future studies, we will seek a tighter bound on N_sζ^π_n'.
§ PROOF OF PROPOSITION <REF>
Appendix <ref> shows that the problem (<ref>) satisfies the MLTT property. For simplicity, we denote the minimal age to offload tasks to server m for a user at layer 1 as H^*_m and the optimal age to offload tasks to server m for a user at layer 2 given generated age d as H^*_m(d), and the minimum computational time is 1.
Let the cost-to-go of the recurrent state s=(1) be 0, i.e., V(1,ν)=0. We first focus on the cost-to-go of a state at layer 1 with age H^*_m-1≤ A≤ H^*_m:
V(A,ν)= A-γ^*+ν_m-1+p_m-1· 0
+(1-p_m-1)V((A+1,A),ν).
For A^*_M≤ A, we assert that the optimal server for state s=(A+1,A) is also M, and we have
V(A,ν)= A-γ^*+ν_M
+(1-p_M)V((A+1,A),ν)
= A-γ^*+ν_M/p_M+1-p_M/p^2_M.
For H^*_m-1≤ A≤ H^*_m, we have
V (A,ν)=∑_i=1^A^*_m(A)-A*_m-i(1-p_m-1)^i-1
· (A*_m-1+i-1+p_m-1V(i,ν)+ν_m-1-γ^*)
+(1-p_m-1)^A^*_m(A)-1V((A^*_m(A),A),ν),
and we have
V ((A^*_m(A),A),ν)=∑_i=1^A^*_m+1(A)-A^*_m(A)(1-p_m)^i-1
·(A^*_m(A)+i-1+p_mV(i,ν)+ν_m-γ^*)
+ (1-p_m)^A^*_m+1(A)-A^*_m(A)-1V((A^*_m+1(A),A),ν),
V((A^*_M(A),A),ν)=A^*_M(A)-γ^*+ν_M/p_M+1-p_M/p^2_M.
For A< H^*_1, we have
V(A,ν)=A+V(A+1,ν)-γ^*.
V(1,ν)=0.
Since p_1,p_2,…,p_M and ν is given, we can derive the closed-form relationship of γ^* with H^*_m,H^*_m(d),∀ m∈ℳ,d≤ A^*_M from Eq. (<ref>) to Eq. (<ref>).
The optimal threshold follows:
ν_m-1+V((A,D),ν) ≤ν_m+ V((A,D-1),ν)
ν_m+ V((A,D-1),ν) ≤ν_m+ V((A+1,D),ν)
ν_m+ V((A+1,D),ν) ≤ν_m-1+ V((A+1,D+1),ν).
Subtracting ν_m+V((A,D-1),ν)-γ^* from every term above:
ν_m-1-ν_m+A≤γ^*≤ν_m-1-ν_m+A+1.
Setting ν_m-1-ν_m+A=γ^*, we derive the expression of ν_m, which is the index function <cit.>.
§ PROOF OF PROPOSITION <REF>
In Appendix <ref>, we have shown that the sub-problem (<ref>) satisfies intra-indexability. The asymptotic optimality of the nested index policy for the original problem holds if the precise division property is satisfied <cit.>. The precise division property is defined as follows:
Given state s and the server cost ν, suppose the sub-problem is intra-indexable. We say the preference for server m is precisely divisible at layer l by the nested-index I_nm(s,ν,l) if the following holds:
(i) If I_nm(s,ν)=ν_m, then μ_nm(s,ν)≤μ_nm'(s,ν).
(ii) If I_nm(s,ν)>ν_m, then μ_nm(s,ν)<μ_nm'(s,ν).
(iii) Otherwise, there exists m'≠ m s.t. μ_nm(s,ν)>μ_nm'(s,ν).
The precise division property establishes the connection between the index value and the optimal policy.
Then, we will show that the sub-problem (<ref>) satisfies the precise division property.
Given state 𝒮 and server cost ν, the sub-problem (<ref>) satisfies the precise division property defined in Definition 7.
The precise division property is stricter than intra-indexability, because the transition of the optimal action happens exactly when the index value coincides with the server cost. The author of <cit.> proves this proposition by showing that the difference V_n(s,ν')-V_n(s,ν) is uniformly bounded:
Denote p_M+1≜ 1.Under the condition in lemma <ref>, the difference between two cost-to-go function can be lower-bounded by
V_n(s,ν')-V_n(s,ν)>-Δ/p_m+1, ∀ 1≤ m≤ M.
According to Case 3, Lemma 4.6 in <cit.>, we have to show the monotonicity of h(π_n',s=(Δ,D),ν') in Δ, where
h(π_n',s=(Δ,D),ν')=μ_π_n'(s)(s,ν')-μ_m(s,ν')+Δ-γ_n^*'+γ_n^*.
We have
μ_π_n'(s)(s,ν')-μ_m(s,ν') =ν_π_n'(s)-ν_m
+(p_m-p_π_n'(s))(D+V((A+1,D),ν')-V((A+1-D),ν')),
where we have shown V((A+1,D),ν')-V((A+1-D),ν') is monotonically increasing with A in Appendix <ref>.
A possible way to prove the proposition is to study the range of change of the value function under a certain perturbation of the server cost. Therefore, we have the following lemma:
In the precise division property, the connection between cost-to-go functions is established through the nested index. Therefore, we want to quantify the effect on the cost function of a small disturbance of the index value.
First, we show that with the disturbed server cost ν_m' = ν_m+Δ, the difference between V_n(s_n,ν) and V_n(s_n,ν') is bounded by a function of Δ and the model parameters. Note that adding Δ to the server cost only affects the stage cost. Thus, we have:
Given ν, suppose ν'=[ν_1,ν_2,⋯,ν_m+Δ,⋯,ν_M].
V_n(s,ν')-V_n(s,ν)≤Δ· N_sζ_n^π_n
≤Δ·(1/p_mτ_n^min+1/p_m^2), ∀ m∈ℳ,
where the N_sζ_n^π_n represents the expected time to transit from state s to a recurrent state ζ_n under policy π_n.
According to <cit.>, the problem (<ref>) can be reformulated as a stochastic shortest path problem, and N_sζ_n^π_n represents the expected length of the path from the initial point s to a predetermined terminal ζ_n. Without loss of generality, we set ζ_n=(τ_n^min+1,τ_n^min+1).
Here we find the upper bound. Similarly, we can derive the lower bound:
The difference between the two value function can be lower bounded by:
V_n(s,ν')-V_n(s,ν)
≥ (-γ^*_nν'+γ^*_nν) N_sζ_n^π_n
≥-Δ·(1/p_mτ_n^min+1/p_m^2), ∀ m∈ℳ
With these two Lemma, we can complete this proof, and the detail is shown as follows.
We first show Lemma 4. We assume Δ >0, then γ^*(ν')>γ^*(ν). Therefore, we have
V(s,ν')-V(s,ν)
≤[Cost^π_sζ(ν')-N^π_sζ·γ^*(ν)]-[Cost^π_sζ(ν)-N^π_sζ·γ^*(ν)]
≤Δ·𝔼[N^π_sζ | s': π(s')=m].
(i) We want to show the sufficiency:
μ_m(s,ν)-μ_k(s,ν)
=ν_m-ν_k-Δ+(p_k-p_m)[A+ν(s',ν_Δ)]
≤Δ+(p_k-p_m)[ν(s',ν)-ν(s',ν_Δ)]
According to the upper bound (18), we can always find a Δ(ϵ) that satisfies
μ_m(s,ν)-μ_k(s,ν)
=ν_m-ν_k-Δ+(p_k-p_m)[A+ν(s',ν_Δ)]
≤-(Δ+(p_k-p_m)[ν(s',ν_I_m,Δ)-ν(s',ν_I_m)]
(ii) Let Δ = (I_m(s)-ν_m)/2; the claim follows similarly.
We prove that even a more complicated MC with more than one state variable can be stationary. We present the complete proof in Appendix B.
To study the action among layers, we assume that all tasks will be offloaded to a certain server m, and we omit subscript m. We aim to find the optimal threshold for the decoupled problem. We reformulate the Bellman equation as
V((A,D),ν') = A -γ^*+
min{ν'+ V((A+1,A),ν'), V((A+1,D),ν')}.
§ PROOF OF PROPOSITION <REF>
The idea of our proof is similar to <cit.>. We build the connection between Problem (<ref>) and Problem (<ref>).
We show the KKT conditions for both problems are identical <cit.>.
We consider optimizing the Lagrangian for the scheduling problem (<ref>). Suppose the stationary distribution is reached. We denote the state distribution, dual cost, and decision variable as z', ν' and y'. We denote w_ns^l as the decision weight computed by the nested index. At the steady state (i.e., a process whose initial state satisfies the stationary distribution), (y',z',ν') must solve the following problem (<ref>).
The fluid balance holds when the stationary distribution is reached. Therefore, we omit it.
We relax the constraint (<ref>) to the objective function and derive the decoupled problem
max_y ∑_m∈ℳ∑_l∈ℒy_nm^sl(w_nm^sl-ν_m^*)
s.t. ∑_m∈ℳ∑_l∈ℒy_nm^sl≤ 1, ∀ n∈𝒩
In <cit.>, each channel has only one optimal cost, and there must exist a channel cost ν_m which satisfies w_nm^s-ν_m>0. However, given state s, there are at least two actions to choose from (i.e., staying in the same layer or transiting to the next layer), and w_nm^sl-ν_m>0 is not guaranteed in our model. In fact, adding layer l does not violate the conditions of the index in <cit.>. In our multi-layer MDP, the transition among layers happens at the given edge server m. A user cannot transit to layer l (l>1) at a server m' different from m, i.e., it is invalid for a user to offload the remaining part of a task to a different server, which violates our system setting. Therefore, the activating cost for an invalid layer transition should be -∞, since no matter how great a subsidy we assign to the action, the user can never make the corresponding transition. Then, our proof becomes much simpler.
To show the equivalence of the fixed point solution, we denote the new cost vector ν_m'=[ν_m, ν_m^l], where ν_m^l is the cost of transit to layer l at server m. We perceive Problem (<ref>) and Problem (<ref>) as new problems with M+1 cost. If Problem (<ref>) is intra-indexable in each layer l∈ℒ, we can also utilize Lemma 3.8. in <cit.>.
arXiv:2307.02727v1 [math.NA, cs.NA], 6 July 2023
On efficient linear and fully decoupled finite difference method for wormhole propagation with heat transmission process on staggered grids
This work is supported by the National Natural Science Foundation of China grants 12271302, 12131014.
Xiaoli Li
School of Mathematics, Shandong University, Jinan, Shandong, 250100, P.R. China. Email: [email protected].
Ziyan Li
Department of Mathematics, City University of Hong Kong, Hong Kong SAR, China. Email: [email protected].
Hongxing Rui
Corresponding author. School of Mathematics, Shandong University, Jinan, Shandong, 250100, P.R. China. Email: [email protected].
In this paper, we construct an efficient linear and fully decoupled finite difference scheme for wormhole propagation with heat transmission process on staggered grids, which only requires solving a sequence of linear elliptic equations at each time step. We first derive the positivity preserving properties for the discrete porosity and its difference quotient in time, and then obtain optimal error estimates for the velocity, pressure, concentration, porosity and temperature in different norms rigorously and carefully by establishing several auxiliary lemmas for the highly coupled nonlinear system. Numerical experiments in two- and three-dimensional cases are provided to verify our theoretical results and illustrate the capabilities of the constructed method.
wormhole propagation with heat transmission; finite difference scheme on staggered grids; positivity preserving property; optimal error estimates
35K05, 65M06, 65M12.
§ INTRODUCTION
As an efficient technique of enhanced oil recovery, the acid treatment of carbonate reservoirs has been widely used in improving oil production rate. In this technique, acid is injected into matrix to dissolve the rocks and deposits around the well bore, which facilitates oil flow into production well, and thus forms a channel with high porosity. However, the efficiency of this technique strongly depends the dissolution patterns.
Specifically, three dissolution patterns can be observed as the injection rate increases: the face dissolution pattern, the wormhole pattern and the uniform dissolution pattern. The wormhole pattern, with its narrow channels, is the most efficient one for successful stimulation <cit.>.
Due to the important role that wormhole plays in enhancing productivity, several research works have been conducted to study the formation and propagation of wormholes <cit.>.
McDuff et al. <cit.> developed a new methodology to provide high-resolution nondestructive imaging and analysis in experimental studies of the wormhole model. Panga et al. <cit.> proposed the well-known two-scale continuum model for reactive dissolution of a porous medium. There are also many numerical studies of the wormhole model. Kou et al. <cit.> developed a mixed finite element method and established stability analysis and a priori error estimates for velocity, pressure, concentration and porosity in different norms. They <cit.> also proposed a semi-analytic time scheme for wormhole propagation with the Darcy-Brinkman-Forchheimer model. Li et al. <cit.> extended finite difference methods on staggered grids to wormhole models with different frameworks. Later, the discontinuous Galerkin method was applied to the wormhole model in <cit.>.
Recently, Xu <cit.> constructed a high-order bound-preserving discontinuous Galerkin method for this problem to preserve the boundedness of the porosity and the acid concentration. However, the above works do not consider the effect of temperature, which has an important influence on thermodynamic parameters including the surface reaction rate and the molecular diffusion coefficient <cit.>. To the best of our knowledge, most previous numerical works have focused only on the chemical reaction and mass transport processes in wormhole propagation, and have completely ignored the significant influence of temperature. Only a few works consider the wormhole model with a heat transmission process. Kalia et al. <cit.> applied a mathematical model to investigate the effect of temperature on carbonate matrix acidizing and presented numerical simulations using the finite volume method. A radial heat transfer model was introduced in <cit.> to capture heat transfer and reaction heat. Recently, Wu et al. <cit.> proposed a modified momentum conservation equation and established a thermal DBF framework by introducing the energy balance equation. However, to the best of our knowledge, there is no related work on the theoretical analysis of wormhole propagation with a heat transmission process. It is much more challenging to develop efficient numerical schemes and to carry out the corresponding error analysis for this highly coupled nonlinear system.
The main purposes of this work are to construct an efficient linear and fully decoupled finite difference scheme for wormhole propagation with heat transmission process on staggered grids, and carry out error analysis rigorously. We also give several
numerical experiments in two- and three-dimensional cases to verify our theoretical results and illustrate the capabilities of the constructed method. More precisely, the work presented in this paper is unique in the following aspects:
(i) Efficient linear and fully decoupled scheme for this highly coupled nonlinear system is proposed by introducing auxiliary variables w and v, and using the implicit-explicit discretization. The constructed scheme only requires solving a sequence of linear elliptic equations at each time step;
(ii) We first derive the positivity-preserving properties for the discrete porosity and its difference quotient in time, and then deal with the complications resulting from the full coupling of multiple variables, including porosity, pressure, velocity, solute concentration and temperature, by establishing several auxiliary lemmas.
(iii) Optimal error estimates for the velocity, pressure, concentration, porosity and temperature in different norms are established. We believe that this is the first error analysis for a fully decoupled and linear scheme for this problem.
The paper is organized as follows. In Section 2 we describe mathematical model. In Section 3 we construct finite difference method on staggered grids. In Section 4 we carry out error estimates for the discrete scheme. In Section 5, we present numerical experiments in two- and three-dimensional cases to verify our theoretical results and illustrate the capabilities of the constructed method. In Section 6 we give some concluding remarks.
§ MATHEMATICAL MODEL
In this paper, we consider a heat-transfer model to describe the temperature behavior for wormhole propagation by using the two-scale continuum model <cit.>, which is established by coupling local pore-scale phenomena to macroscopic variables (Darcy velocity, pressure and concentration) through structure-property relationships (permeability-porosity, interfacial area-porosity, and so on).
§.§ Darcy scale model
The Darcy scale model equations are given by
γ∂p/∂t+∂ϕ/∂t+∇·u=
f, x∈Ω, t∈J,
u=-K(ϕ)μ∇p, x∈Ω, t∈J,
∂(ϕc_f)∂t+∇·(uc_f)
-∇·(ϕD ∇c_f)=k_ca_v(c_s-c_f)+f_Pc_f+f_Ic_I,
x∈Ω, t∈J,
∂ϕ/∂t= R(c_f,T) a_v α/ρ_s, x∈Ω, t∈J,
R(c_f,T) = k_c (c_f-c_s), x∈Ω, t∈J,
where Ω is an open bounded domain. J=(0,Q], and Q denotes the final time. p is the pressure, μ is the fluid viscosity, u is the Darcy velocity of the fluid, f=f_I+f_P,
f_P and f_I are production and injection rates respectively. γ is a pseudo-compressibility parameter
that results in slight change of the density of the fluid phase in the dissolution process. ϕ and K are the porosity and permeability of the rock respectively,
c_f is the cup-mixing concentration of the acid in the fluid phase. c_I is the injected concentration.
For simplicity, we assume in the following that the diffusion coefficient D(x)=d_molI
=diag(D_ll), l=1,2, is a diagonal matrix.
k_c is the local mass-transfer coefficient, a_v is the interfacial area available for reaction per unit volume of the medium.
The variable c_s is the concentration of the acid at the fluid-solid interface, and the relationship between c_f and c_s can be described as follows.
c_s=c_f/1+k_s(T)/k_c,
where the surface reaction rate k_s is a function of the temperature <cit.>, no longer deemed as a constant in <cit.>. Here we assume that k_s1< k_s(T) < k_s2 and k_s(T) is Lipschitz continuous for simplicity.
α is the dissolving power of the acid and ρ_s is the density of the solid phase.
§.§ Pore scale model
The pore-scale model uses structure property relations to describe the changes in permeability and interfacial surface area as dissolution occurs. The relationship between the permeability and the porosity is established by the Carman-Kozeny correlation:
K/K_0=ϕ/ϕ_0(ϕ(1-ϕ_0)/ϕ_0(1-ϕ))^2,
where ϕ_0 and K_0 are the initial porosity and permeability of the rock respectively. Using porosity and permeability, a_v is shown as
a_v/a_0=ϕ/ϕ_0√(K_0ϕ/Kϕ_0),
where a_0 is the initial interfacial area.
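Both structure-property relations are simple algebraic maps from the porosity to the permeability and the interfacial area, and can be evaluated pointwise; the following sketch (helper names are ours) also includes the interface concentration relation (<ref>).

```python
import numpy as np

def permeability(phi, phi0, K0):
    """Carman-Kozeny relation: K/K0 = (phi/phi0) * (phi*(1-phi0) / (phi0*(1-phi)))**2."""
    return K0 * (phi / phi0) * (phi * (1.0 - phi0) / (phi0 * (1.0 - phi)))**2

def interfacial_area(phi, phi0, a0, K, K0):
    """a_v/a0 = (phi/phi0) * sqrt(K0*phi / (K*phi0))."""
    return a0 * (phi / phi0) * np.sqrt(K0 * phi / (K * phi0))

def interface_concentration(c_f, k_s, k_c):
    """c_s = c_f / (1 + k_s(T)/k_c)."""
    return c_f / (1.0 + k_s / k_c)

phi0, K0, a0 = 0.2, 1.0, 50.0
phi = np.array([0.2, 0.3, 0.5])
K = permeability(phi, phi0, K0)
print(K, interfacial_area(phi, phi0, a0, K, K0))
```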
§.§ Heat-transfer model
In this paper, a heat-transfer model is introduced to determine
the temperature behavior during wormhole propagation, which considers heat conduction, heat convection and reaction heat <cit.>.
∂ [ ( ρ_s(1-ϕ) θ_s + ρ_f ϕθ_f) T ]/∂ t + ∇· (ρ_f θ_fu T) = ∇· ( λ(ϕ) ∇ T ) + a_v(ϕ) H_r(T)R(c_f, T) ,
where T is the temperature, ρ_s and ρ_f are the density of rock and acid respectively. θ_s and θ_f are the heat capacities of rock and acid respectively. The average thermal conductivity between acid solution and rock λ(ϕ)= (1-ϕ) λ_s + ϕλ_f, where λ_s and λ_f are the thermal conductivities of rock and acid respectively. Here we assume that the reaction heat H_r(T) is Lipschitz continuous for simplicity.
(<ref>) establishes the heat transmission process during acid
injection. The first term in this equation describes the variation of temperature over time, the second term represents thermal convection due to the acid flow during wormhole propagation. The first term on the right hand side describes thermal conduction, and the last term represents the reaction heat generation rate.
In this paper, we assume that the temperature of the acid and the matrix can be represented by a single variable T, since heat transfer is much faster than the fluid flow, i.e., the acid and the matrix have the same temperature when acid is injected into the matrix. Distinguishing between the acid temperature and the matrix temperature would cause great difficulty for both theory and applications, and the details of the heat transfer between the acid and the matrix would have to be resolved carefully. In fact, due to the geothermal gradient, the initial matrix temperature may be higher than the injected acid temperature. Relevant work is left to the future.
§.§ Boundary and initial conditions
In this paper, the boundary and initial conditions are as follows.
{[ u·n=0, ϕD∇ c_f ·n=0, λ∇ T ·n=0, x∈∂Ω, t ∈ J,; p(x,0)=p_0(x), x∈Ω,; c_f(x,0)=c_f0(x), x∈Ω,; ϕ(x,0)=ϕ_0(x), x∈Ω,; T (x,0)= T_0(x), x∈Ω, ].
where n is the unit outward normal vector of the domain Ω.
§ FINITE DIFFERENCE METHOD ON STAGGERED GRIDS
In this section, we consider the finite difference method for the coupled system on staggered grids.
To fix the idea, we consider Ω=(L_lx,L_rx)× (L_ly,L_ry). Three dimensional rectangular domains can be dealt with similarly. The grid points are denoted by
(x_i+1/2,y_j+1/2), i=0,...,N_x, j=0,...,N_y,
and the notations similar to those in <cit.> are used.
x_i=(x_i-1/2+x_i+1/2)/2, i=1,...,N_x,
h_i^x=x_i+1/2-x_i-1/2, i=1,...,N_x,
h_i+1/2^x=x_i+1-x_i=(h_i^x+h_i+1^x)/2, i=1,...,N_x-1,
y_j=(y_j-1/2+y_j+1/2)/2, j=1,...,N_y,
h_j^y=y_j+1/2-y_j-1/2, j=1,...,N_y,
h_j+1/2^y=y_j+1-y_j=(h_j^y+h_j+1^y)/2, j=1,...,N_y-1,
h=max_i,j{h_i^x,h_j^y}.
Let g_i,j, g_i+1/2,j, g_i,j+1/2 denote g(x_i,y_j), g(x_i+1/2,y_j), g(x_i,y_j+1/2). Define the discrete inner products and norms as follows,
(f,g)_M=∑_i=1^N_x∑_j=1^N_yh_i^xh_j^yf_i,jg_i,j,
(f,g)_x=∑_i=1^N_x-1∑_j=1^N_yh_i+1/2^xh_j^yf_i+1/2,jg_i+1/2,j,
(f,g)_y=∑_i=1^N_x∑_j=1^N_y-1h_i^xh_j+1/2^yf_i,j+1/2g_i,j+1/2,
(v,r)_TM=(v^x,r^x)_x+(v^y,r^y)_y.
For simplicity from now on we always
omit the superscript n if the omission does not cause conflicts.
Define
[d_xg]_i+1/2,j=(g_i+1,j-g_i,j)/h_i+1/2^x,
[d_yg]_i,j+1/2=(g_i,j+1-g_i,j)/h_j+1/2^y,
[D_xg]_i,j=(g_i+1/2,j-g_i-1/2,j)/h_i^x,
[D_yg]_i,j=(g_i,j+1/2-g_i,j-1/2)/h_j^y,
[d_tg]^n_i,j=(g_i,j^n-g_i,j^n-1)/Δ t.
For simplicity we only consider the case h_i+1/2^x=h, h_j+1/2^y=k, i.e., uniform meshes are used in both the x- and y-directions.
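On such a uniform staggered grid the discrete operators reduce to shifted differences of arrays. The sketch below (our own helper names, written with NumPy) implements d_x and D_x and numerically checks the summation-by-parts identity (q,D_xw^x)_M=-(d_xq,w^x)_x used later.

```python
import numpy as np

def d_x(q, h):
    """[d_x q]_{i+1/2,j} = (q_{i+1,j} - q_{i,j}) / h, on interior x-edges."""
    return (q[1:, :] - q[:-1, :]) / h

def D_x(wx, h):
    """[D_x w]_{i,j} = (w_{i+1/2,j} - w_{i-1/2,j}) / h, at cell centers;
    wx has shape (Nx+1, Ny) and carries the homogeneous boundary edge values."""
    return (wx[1:, :] - wx[:-1, :]) / h

# Quick check of the discrete integration-by-parts identity
# (q, D_x w)_M = -(d_x q, w)_x when the boundary edge values of w vanish.
Nx, Ny, h, k = 8, 6, 0.1, 0.1
rng = np.random.default_rng(0)
q = rng.standard_normal((Nx, Ny))
wx = np.zeros((Nx + 1, Ny))
wx[1:-1, :] = rng.standard_normal((Nx - 1, Ny))
lhs = h * k * np.sum(q * D_x(wx, h))
rhs = -h * k * np.sum(d_x(q, h) * wx[1:-1, :])
print(abs(lhs - rhs) < 1e-12)
```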
Define w=(w^x,w^y)=uc_f - ϕD∇ c_f and v=(v^x,v^y)= ρ_f θ_fu T - λ(ϕ) ∇ T, then (<ref>) and (<ref>) can be transformed into
∂ (ϕ c_f)∂ t+∇·w =k_c a_v(ϕ)
( 1/ 1+ k_s(T) /k_c -1 ) c_f
+f_Pc_f+f_Ic_I,
and
∂ [ ( ρ_s(1-ϕ) θ_s + ρ_f ϕθ_f) T ]/∂ t + ∇·v = a_v(ϕ) H_r(T)R(c_f, T) .
Set Δ t=Q/N, t^n=nΔ t, for n≤ N,
and define
[d_tf]^n=f^n-f^n-1/Δ t. Denote by {Ψ^n, P^n, U^n, C_f^n, W^n, Z^n, V^n }_n=1^N, the approximations to {ϕ^n, p^n, u^n, c_f^n, w^n , T^n, v^n}_n=1^N respectively, with the boundary approximations
{[ U_1/2,j^x,n=U_N_x+1/2,j^x,n=0, 1≤ j≤ N_y,; U_i,1/2^y,n=U_i,N_y+1/2^y,n=0, 1≤ i≤ N_x,; W_1/2,j^x,n=W_N_x+1/2,j^x,n=0, 1≤ j≤ N_y,; W_i,1/2^y,n=W_i,N_y+1/2^y,n=0, 1≤ i≤ N_x,; V_1/2,j^x,n=V_N_x+1/2,j^x,n=0, 1≤ j≤ N_y,; V_i,1/2^y,n=V_i,N_y+1/2^y,n=0, 1≤ i≤ N_x, ].
and the initial approximations for 1≤ i≤ N_x,1≤ j≤ N_y,
{[ P_i,j^0=p_0,i,j, C_f,i,j^0=c_f0,i,j,; Z_i,j^0=T_0,i,j, Ψ_i,j^0=ϕ_0,i,j . ].
Then, the fully discrete scheme based on the finite difference method on staggered grids is as follows:
γ [d_t P]^n+1_i,j + [d_t Ψ ]^n+1_i,j + [D_xU]^x,n+1_i,j + [D_yU]^y,n+1_i,j = f^n+1_i,j,
U^x,n+1_i+1/2,j = - K(Π_h Ψ^n+1_i+1/2,j ) /μ [d_x P]^n+1_i+1/2,j , U^y,n+1_i,j+1/2 = - K(Π_h Ψ^n+1_i,j+1/2 ) /μ [d_y P]^n+1_i,j+1/2 ;
[d_t (Ψ C_f) ]^n+1_i,j + [D_x W]^x,n+1_i,j + [D_y W]^y,n+1_i,j
= k_c a_v( Ψ^n+1_i,j ) ( 1/ 1+ k_s(Z^n_i,j ) /k_c -1 ) C^n+1_f,i,j + f_P^n+1 C^n+1_f,i,j + f_I^n+1 c_I^n+1,
W^x,n+1_i+1/2,j = U^x,n+1_i+1/2,jΠ_h C^n+1_f,i+1/2,j - Π_h Ψ^n+1_i+1/2,j D_11 [d_x C_f]^n+1_i+1/2,j,
W^y,n+1_i,j+1/2 = U^y,n+1_i,j+1/2Π_h C^n+1_f,i,j+1/2 - Π_h Ψ^n+1_i,j+1/2 D_22 [d_y C_f ]^n+1_i,j+1/2 ;
[ d_t ( ( ρ_s(1- Ψ) θ_s + ρ_f Ψθ_f) Z ) ]^n+1_i,j + [D_xV]^x,n+1_i,j + [D_yV]^y,n+1_i,j = a_v( Ψ^n+1_i,j ) H_r(Z^n_i,j) R( C^n+1_f,i,j, Z^n_i,j ),
V^x,n+1_i+1/2,j = ρ_f θ_f U^x,n+1_i+1/2,jΠ_h Z^n+1_i+1/2,j - λ ( Π_h Ψ^n+1_i+1/2,j) [d_x Z]^n+1_i+1/2,j,
V^y,n+1_i,j+1/2 = ρ_f θ_f U^y,n+1_i,j+1/2Π_h Z^n+1_i,j+1/2 - λ ( Π_h Ψ^n+1_i,j+1/2 ) [d_y Z]^n+1_i,j+1/2 ;
where Π_h is an interpolation operator with second-order or higher precision.
Using (<ref>)-(<ref>), we have
∂ϕ/∂ t = α k_c a_0 /ρ_s ( 1- 1 / 1+ k_s(T)/k_c ) 1-ϕ/ 1- ϕ_0 c_f.
For the calculation of the discrete porosity Ψ, we use the following scheme.
[d_t Ψ ]^n+1_i,j = α k_c a_0 /ρ_s ( 1- 1/1+ k_s(Z^n_i,j )/k_c ) 1- Ψ^n+1_i,j/ 1-Ψ^0_i,j C ^n_f,i,j,
where C ^n_f,i,j =max{0,min{ C^n_f,i,j ,1}}.
The difference method will consist of four parts:
(1) If the approximate concentration C_f,i,j^n and porosity Ψ_i,j^n, n=0,⋯,N-1
are known, equation (<ref>) will be used to obtain
a new porosity Ψ_i,j^n+1.
(2) By using difference scheme (<ref>) and (<ref>), an approximation P_i,j^n+1
to the pressure will be calculated using Ψ_i,j^n+1, and then the approximate velocity U^x,n+1_i+1/2,j and U^y,n+1_i,j+1/2 will be evaluated.
(3) A new concentration C_f,i,j^n+1 will be calculated using U^x,n+1_i+1/2,j, U^y,n+1_i,j+1/2, Z_i,j^n and Ψ_i,j^n+1 in (<ref>)-(<ref>), then we get the approximations W^x,n+1_i+1/2,j and W^y,n+1_i,j+1/2 by using (<ref>).
(4)
A new temperature Z_i,j^n+1 will be calculated in (<ref>) by using U^x,n+1_i+1/2,j, U^y,n+1_i,j+1/2, C_f,i,j^n+1 and Ψ_i,j^n+1, then we get the approximations V^x,n+1_i+1/2,j and V^y,n+1_i,j+1/2 in (<ref>).
It is easy to see that, at each time level, the difference scheme either has an explicit solution or is a linear pentadiagonal system with a strictly diagonally dominant coefficient matrix; thus the approximate solutions exist and are unique.
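Accordingly, one time step of the decoupled scheme can be organized as four successive solves. The sketch below shows only the control flow, with the three linear pentadiagonal solves represented by abstract callables; all names are ours, k_s is passed as a callable of the temperature, and the porosity update uses the explicit closed form derived in the next section.

```python
import numpy as np

def time_step(Psi, P, C, Z, dt, params,
              solve_pressure, solve_concentration, solve_temperature):
    """One step of the decoupled scheme: porosity -> pressure/velocity ->
    concentration -> temperature. The `solve_*` callables stand for the
    linear pentadiagonal solves for pressure, concentration and temperature."""
    chi = params["alpha"] * params["k_c"] * params["a_0"] / params["rho_s"]
    C_bar = np.clip(C, 0.0, 1.0)                      # cut-off concentration
    beta = chi * (1.0 - 1.0 / (1.0 + params["k_s"](Z) / params["k_c"])) \
           * C_bar / (1.0 - params["phi0"]) * dt
    Psi_new = (Psi + beta) / (1.0 + beta)             # Step 1: explicit porosity update
    P_new, U_new = solve_pressure(Psi_new, P, dt)     # Step 2: pressure and velocity
    C_new = solve_concentration(Psi_new, U_new, C, Z, dt)    # Step 3: concentration
    Z_new = solve_temperature(Psi_new, U_new, C_new, Z, dt)  # Step 4: temperature
    return Psi_new, P_new, U_new, C_new, Z_new
```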
§ ERROR ANALYSIS FOR THE DISCRETE SCHEME
In this section, we give the error estimates for the fully discrete scheme (<ref>)-(<ref>).
Set
{[ E_p=P-p, E_u=(E_u^x,E_u^y)=U-u,; E_c_f= C_f -c_f, E_w=(E_w^x,E_w^y)=W-w,; E_T=Z-T, E_v=(E_v^x,E_v^y)=V-v,; E_ϕ=Ψ-ϕ. ].
First we present the following lemma which will be used in what follows.
<cit.> Let q_i,j,w_i+1/2,j^x and w_i,j+1/2^y be any values such that w_1/2,j^x=w_N_x+1/2,j^x=w_i,1/2^y=w_i,N_y+1/2^y=0, then
(q,D_xw^x)_M=-(d_xq,w^x)_x,
(q,D_yw^y)_M=-(d_yq,w^y)_y.
Next we will prove a priori bounds for the discrete solution Ψ which will be used in what follows.
Assume that 0<ϕ_0*≤ϕ_0≤ϕ_0^*<1. Then the discrete porosity Ψ_i,j^n is bounded,
i.e.,
ϕ_0*≤Ψ_i,j^n<1, 0≤ i≤ N_x, 0≤ j≤ N_y, n≤ N.
It also holds that
0≤ [d_tΨ]_i,j^n< α k_c a_0/ρ_s (1- ϕ_0) , 0≤ i≤ N_x, 0≤ j≤ N_y, n≤ N.
The proof is given by induction. It is trivial that ϕ_0*≤Ψ_i,j^0<1.
Suppose that
ϕ_0*≤Ψ_i,j^n<1, n≤ N-1;
next we prove that Ψ_i,j^n+1 satisfies the same bounds.
Set β^n = α k_c a_0 /ρ_s ( 1- 1/1+ k_s(Z^n)/k_c ) 1/1-Ψ^0 C _f^nΔ t for simplicity. Then we can easily obtain that 0<β^n< α k_c a_0/ρ_s (1- ϕ_0) Δ t. By using the definition of β^n, we can transform (<ref>) into the following.
Ψ_i,j^n+1=β_i,j^n/1+β_i,j^n+Ψ_i,j^n/1+β_i,j^n,
where we can easily obtain that Ψ_i,j^n+1<1.
On the other hand, (<ref>) can also be recast as
Ψ_i,j^n+1-Ψ_i,j^n=β_i,j^n(1-Ψ_i,j^n+1 ).
Thus we have that Ψ_i,j^n+1>Ψ_i,j^n, which leads to the desired results.
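As a quick numerical illustration of the recursion used in this proof, the following toy snippet iterates Ψ^n+1=(β^n+Ψ^n)/(1+β^n) for randomly chosen admissible β^n and checks the monotonicity and the upper bound established in the lemma; the numerical values are arbitrary and serve only to illustrate the argument.

```python
import numpy as np

# Toy check of the recursion Psi^{n+1} = (beta^n + Psi^n) / (1 + beta^n):
# for any beta^n in (0, beta_max) the iterates stay below 1 and never decrease.
rng = np.random.default_rng(0)
phi_star, beta_max = 0.2, 0.5        # arbitrary illustrative values
psi = phi_star
for n in range(1000):
    beta = rng.uniform(0.0, beta_max)
    psi_new = (beta + psi) / (1.0 + beta)
    assert phi_star <= psi <= psi_new < 1.0   # the bounds proved in the lemma
    psi = psi_new
print(f"final porosity after 1000 steps: {psi:.6f} (still < 1)")
```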
The approximate errors of discrete porosity satisfy
E_ϕ^m+1^2_M≤ C Δ t ∑_n=0^m E_c_f^n_M^2+ C Δ t ∑_n=0^m E_T^n_M^2
+ C Δ t∑_n=0^m E_ϕ^n+1_M^2+ C (Δ t)^2, m ≤ N-1,
Δ t ∑_n=0^m d_tE_ϕ^n+1_M^2 ≤ C Δ t ∑_n=0^m E_c_f^n_M^2+ C Δ t ∑_n=0^m E_T^n_M^2
+ C Δ t∑_n=0^m E_ϕ^n+1_M^2+ C(Δ t)^2, m ≤ N-1,
where the positive constant C is independent of h, k and Δ t.
Subtracting (<ref>) from (<ref>), we can obtain
d_t E_ϕ,i,j^n+1 = χ ( 1 / 1+ k_s(T^n+1_i,j )/k_c -
1/1+ k_s(Z^n_i,j )/k_c ) 1- Ψ^n+1_i,j/ 1-Ψ^0_i,j C ^n_f,i,j
+ χ ( 1- 1 / 1+ k_s(T^n+1_i,j )/k_c ) ( 1- Ψ^n+1_i,j/ 1-Ψ^0_i,j - 1-ϕ^n+1_i,j/ 1- ϕ_0,i,j ) C ^n_f,i,j
+ χ ( 1- 1 / 1+ k_s(T^n+1_i,j )/k_c ) 1-ϕ^n+1_i,j/ 1- ϕ_0,i,j ( C ^n_f,i,j - c_f,i,j^n+1 )
+ ∂ϕ/∂ t |^n+1_i,j- d_t ϕ^n+1_i,j,
where χ = α k_c a_0 /ρ_s.
Multiplying (<ref>) by E_ϕ,i,j^n+1 h k and making summation on i,j for 1 ≤ i ≤ N_x, 1 ≤ j ≤ N_y, we have that
(d_tE_ϕ^n+1, E_ϕ^n+1 )_M= χ( ( 1 / 1+ k_s(T^n+1 )/k_c -
1/1+ k_s(Z^n )/k_c ) 1- Ψ^n+1/ 1-Ψ^0 C ^n_f, E_ϕ^n+1)_M
+ χ( ( 1- 1 / 1+ k_s(T^n+1 )/k_c ) ( 1- Ψ^n+1/ 1-Ψ^0 - 1-ϕ^n+1/ 1- ϕ_0 ) C ^n_f, E_ϕ^n+1)_M
+ χ( ( 1- 1 / 1+ k_s(T^n+1 )/k_c ) 1-ϕ^n+1/ 1- ϕ_0 ( C ^n_f - c_f^n+1 ), E_ϕ^n+1)_M
+ ( ∂ϕ^n+1/∂ t - d_t ϕ^n+1, E_ϕ^n+1 )_M .
The term on the left side of (<ref>) can be transformed into
(d_tE_ϕ^n+1, E_ϕ^n+1 )_M=E_ϕ^n+1_M^2- E_ϕ^n_M^2/2Δ t+
Δ t/2d_tE_ϕ^n+1_M^2.
Using the Cauchy-Schwarz inequality, the first term on the right side of (<ref>) can be bounded by
χ( ( 1 / 1+ k_s(T^n+1 )/k_c -
1/1+ k_s(Z^n )/k_c ) 1- Ψ^n+1/ 1-Ψ^0 C ^n_f, E_ϕ^n+1)_M
≤ C E_T^n _M^2 + C E_ϕ^n+1_M^2 + C ∂ T /∂ t_L^∞(J;
L^∞(Ω))^2(Δ t)^2 .
Recalling Lemma <ref>, the second term on the right side of (<ref>) can be estimated by
χ( ( 1- 1 / 1+ k_s(T^n+1 )/k_c ) ( 1- Ψ^n+1/ 1-Ψ^0 - 1-ϕ^n+1/ 1- ϕ_0 ) C ^n_f , E_ϕ^n+1)_M≤ C E_ϕ^n+1_M^2.
Using the Cauchy-Schwarz inequality, the third term on the right side of (<ref>) can be recast as
χ( ( 1- 1 / 1+ k_s(T^n+1 )/k_c ) 1-ϕ^n+1/ 1- ϕ_0 ( C ^n_f - c_f^n+1 ), E_ϕ^n+1)_M
≤ C E_c_f^n _M^2 + C E_ϕ^n+1_M^2 + C ∂ c_f/∂ t_L^∞(J;
L^∞(Ω))^2 (Δ t)^2 .
where we used the fact that |Ĉ^n_f -c_f^n|≤ |C^n_f -c_f^n|.
The last term on the right side of (<ref>) can be estimated by
( ∂ϕ^n+1/∂ t - d_t ϕ^n+1, E_ϕ^n+1 )_M≤ C E_ϕ^n+1_M^2 + C ∂^2 ϕ/∂ t^2_L^∞(J;
L^∞(Ω))^2 (Δ t)^2.
Combining (<ref>) with (<ref>)-(<ref>), multiplying by 2Δ t, and summing for n from 0 to m, m ≤ N-1, we have
E_ϕ^m+1^2_M≤ E_ϕ^0^2_M + C Δ t ∑_n=0^m E_c_f^n_M^2+ C Δ t ∑_n=0^m E_T^n_M^2
+ C Δ t∑_n=0^m E_ϕ^n+1_M^2+ C(Δ t)^2,
which leads to the desired result (<ref>).
On the other hand, multiplying (<ref>) by d_t E_ϕ,i,j^n+1 h k, making summation on i,j for 1 ≤ i ≤ N_x, 1 ≤ j ≤ N_y and following the similar procedure as (<ref>)-(<ref>), we can easily obtain the desired result (<ref>).
The approximate errors of discrete pressure and velocity satisfy
Δ t ∑_n=0^md_t E_p^n+1_M^2+ E_u^m+1_TM^2
≤ C Δ t ∑_n=0^m (E_ϕ^n+1_M^2+d_tE_ϕ^n+1_M^2)
+ C Δ t ∑_n=0^m E_u^n+1_TM^2 + O( (Δ t)^2+h^4+k^4), m≤ N-1,
where the positive constant C is independent of h,k and Δ t.
Since the proof of this lemma shares similar procedures with the proof of Lemma 7 in <cit.>, we omit the proof for brevity.
The approximate errors of the discrete concentration satisfy
E_c_f^m+1^2_M
+ Δ t ∑_n=0^m ( d_x E_c_f^n+1_x^2 + d_y E_c_f^n+1_y^2 )
≤ C Δ t ∑_n=0^m E_ϕ^n+1_M^2 + C Δ t ∑_n=0^m E_c_f^n+1_M^2 + C Δ t ∑_n=0^m d_t E_ϕ^n+1_M^2
+ C Δ t ∑_n=0^m E_u^n+1_TM^2 + C Δ t ∑_n=0^m E_T^n _M^2 + C (h^4+k^4+ (Δ t)^2 ), m≤ N-1,
where the positive constant C is independent of h,k and Δ t.
Subtracting (<ref>) from (<ref>), we have that
d_t (Ψ C_f - ϕ c_f ) ^n+1_i,j + [D_x E_w ]^x,n+1_i,j + [D_y E_w ]^y,n+1_i,j
= k_c a_v( Ψ^n+1_i,j ) ( 1/ 1+ k_s(Z^n_i,j ) /k_c -1 ) E_c_f,i,j^n+1
+ k_c a_v( Ψ^n+1_i,j ) ( 1/ 1+ k_s(Z^n_i,j ) /k_c -1/ 1+ k_s( T^n+1_i,j ) /k_c ) c^n+1_f,i,j
+ k_c ( a_v( Ψ^n+1_i,j )- a_v(ϕ^n+1_i,j ) ) 1/ 1+ k_s( T^n+1_i,j ) /k_c c^n+1_f,i,j
+ f^n+1_P,i,j E_c_f,i,j^n+1
+ S^n+1_1,i,j + S^n+1_2,i,j,
where
S_1,i,j^n+1 = ∂ (ϕ c_f ) /∂ t |_i,j^n+1 - d_t (ϕ c_f )_i,j^n+1,
and
S_2,i,j^n+1 = [D_x w]^x,n+1_i,j + [D_y w]^y,n+1_i,j - ( ∂ w^x,n+1_i,j/∂ x + ∂ w^y,n+1_i,j/∂ y ).
Noting (<ref>), we can obtain
E_w,i+1/2,j ^x,n+1 = U^x,n+1_i+1/2,jΠ_h C^n+1_f,i+1/2,j - u^x,n+1_i+1/2,j c_f,i+1/2,j^n+1
-D_11 ( Π_h Ψ^n+1_i+1/2,j [d_x C_f ]^n+1_i+1/2,j - ϕ^n+1_i+1/2,j∂ c_f,i+1/2,j^n+1/∂ x ),
and
E_w,i,j+1/2 ^y,n+1 = U^y,n+1_i,j+1/2Π_h C^n+1_f,i,j+1/2 - u^y,n+1_i,j+1/2 c_f,i,j+1/2^n+1
-D_22 ( Π_h Ψ^n+1_i,j+1/2 [d_y C_f ]^n+1_i,j+1/2 - ϕ^n+1_i,j+1/2∂ c^n+1_f,i,j+1/2/∂ y ) .
Multiplying (<ref>) by E_c_f,i,j^n+1 h k and making summation on i,j for 1 ≤ i ≤ N_x, 1 ≤ j ≤ N_y lead to
( d_t (Ψ C_f - ϕ c_f ) ^n+1, E_c_f^n+1)_M+ ( D_x E_w^x,n+1 ,E_c_f^n+1 )_M + (D_y E_w^y,n+1, E_c_f^n+1 )_M
= ( k_c a_v( Ψ^n+1 ) ( 1/ 1+ k_s(Z^n) /k_c -1 ) E_c_f^n+1,
E_c_f^n+1)_M
+ ( k_c a_v( Ψ^n+1 ) ( 1/ 1+ k_s(Z^n) /k_c -1/ 1+ k_s( T^n+1 ) /k_c ) c_f^n+1, E_c_f^n+1)_M
+ ( k_c ( a_v( Ψ^n+1 )- a_v(ϕ^n+1 ) ) 1/ 1+ k_s( T^n+1 ) /k_c c_f^n+1, E_c_f^n+1)_M
+ ( f_P^n+1 E_c_f^n+1, E_c_f^n+1 )_M + (S_1^n+1, E_c_f^n+1 )_M +
(S_2^n+1, E_c_f^n+1 )_M .
Recalling Lemma <ref>, the first term on the left side of (<ref>) can be bounded by
( d_t (Ψ C_f - ϕ c_f ) ^n+1, E_c_f^n+1)_M =
( d_t (Ψ E_c_f+ c_f E_ϕ )^n+1, E_c_f^n+1)_M
= ( d_t Ψ^n+1 E_c_f^n+1, E_c_f^n+1 )_M + ( Ψ^n d_t E_c_f^n+1 , E_c_f^n+1 )_M
+ ( d_t c_f^n+1E_ϕ^n+1, E_c_f^n+1 )_M + (c_f^n d_t E_ϕ^n+1 , E_c_f^n+1 )_M
≥ ϕ_0*/2 E_c_f^n+1^2 - E_c_f^n^2 /Δ t +
ϕ_0*/2 E_c_f^n+1-E_c_f^n^2 /Δ t
+ ( d_t c_f^n+1E_ϕ^n+1, E_c_f^n+1 )_M + (c_f^n d_t E_ϕ^n+1 , E_c_f^n+1 )_M.
Taking notice of (<ref>) and Lemma <ref>, the second term on the left side of (<ref>) can be transformed into
( D_x E_w^x,n+1 ,E_c_f^n+1 )_M =
-(E_w^x,n+1, d_x E_c_f^n+1 )_x
= D_11 ( Π_h Ψ^n+1 [d_x C_f ]^n+1 - ϕ^n+1∂ c_f^n+1/∂ x , d_x E_c_f^n+1 )_x - ( U^x,n+1Π_h C^n+1_f - u^x,n+1 c_f^n+1, d_x E_c_f^n+1 )_x
= D_11 ( Π_h Ψ^n+1 d_x E_c_f^n+1 , d_x E_c_f^n+1 )_x +
D_11 ( d_x c_f^n+1Π_h E_ϕ^n+1, d_x E_c_f^n+1 )_x
+D_11 ( Π_h ϕ^n+1 d_x c_f^n+1 - ϕ^n+1∂ c_f^n+1/∂ x, d_x E_c_f^n+1 )_x
- ( U^x,n+1Π_h E_c_f^n+1 , d_x E_c_f^n+1 )_x - ( E_u^x,n+1Π_h c_f^n+1, d_x E_c_f^n+1 )_x
- ( u^x,n+1 (Π_h c_f^n+1-c_f^n+1), d_x E_c_f^n+1)_x ,
where we shall first assume that there exists a positive constant C^* such that
U^n+1_∞≤ C^*,
and the proof of (<ref>) is essentially identical to the estimates in <cit.>, using an induction argument, so we omit the details for simplicity.
Taking notice of (<ref>) and Lemma <ref>, the third term on the left side of (<ref>) can be transformed into
( D_y E_w^y,n+1 ,E_c_f^n+1 )_M
= D_22 ( Π_h Ψ^n+1 d_y E_c_f^n+1 , d_y E_c_f^n+1 )_y +
D_22 ( d_y c_f^n+1Π_h E_ϕ^n+1, d_y E_c_f^n+1 )_y
+D_22 ( Π_h ϕ^n+1 d_y c_f^n+1 - ϕ^n+1∂ c_f^n+1/∂ y, d_y E_c_f^n+1 )_y
- ( U^y,n+1Π_h E_c_f^n+1 , d_y E_c_f^n+1 )_y - ( E_u^y,n+1Π_h c_f^n+1, d_y E_c_f^n+1 )_y
- ( u^y,n+1 (Π_h c_f^n+1-c_f^n+1), d_y E_c_f^n+1)_y.
Recalling that a_v(ϕ ) = 1-ϕ/1-ϕ_0 a_0, we have
0 ≤ a_v(ϕ^n+1 ) ≤ a_0.
Thus the first term on the right side of (<ref>) can be estimated by
( k_c a_v( Ψ^n+1 ) ( 1/ 1+ k_s(Z^n) /k_c -1 ) E_c_f^n+1,
E_c_f^n+1)_M ≤ C E_c_f^n+1_M^2.
The second term on the right side of (<ref>) can be bounded by
( k_c a_v( Ψ^n+1 ) ( 1/ 1+ k_s(Z^n) /k_c -1/ 1+ k_s( T^n+1 ) /k_c ) c_f^n+1, E_c_f^n+1)_M
≤ C T^n+1-Z^n ^2_M + C E_c_f^n+1_M^2
≤ C E_T^n _M^2 + C E_c_f^n+1_M^2 + C (Δ t)^2.
Using the Cauchy-Schwarz inequality, the third term on the right side of (<ref>) can be bounded by
( k_c ( a_v( Ψ^n+1 )- a_v(ϕ^n+1 ) ) 1/ 1+ k_s( T^n+1 ) /k_c c_f^n+1, E_c_f^n+1)_M
≤ C E_ϕ^n+1_M^2 + C E_c_f^n+1_M^2 .
Combining (<ref>) with (<ref>)-(<ref>) and using the Cauchy-Schwarz inequality leads to
ϕ_0*/2 E_c_f^n+1^2_M - E_c_f^n^2_M /Δ t +
ϕ_0*/2 E_c_f^n+1-E_c_f^n^2_M /Δ t + D_11ϕ_0* d_x E_c_f^n+1_x^2 + D_22ϕ_0* d_y E_c_f^n+1_y^2
≤ C E_ϕ^n+1_M^2 + C E_c_f^n+1_M^2 + C d_t E_ϕ^n+1_M^2 + D_11/2ϕ_0* d_x E_c_f^n+1_x^2
+ D_22/2ϕ_0* d_y E_c_f^n+1_y^2 + C E_u^n+1_TM^2 + C E_T^n _M^2 + C (h^4+k^4) + C (Δ t)^2,
which leads to the desired result (<ref>).
The approximate errors of the discrete temperature satisfy
E_T^m+1^2_M + Δ t ∑_n=0^m ( d_x E_T^n+1_x^2 + d_y E_T^n+1_y^2 )
≤ C Δ t ∑_n=0^m E_T^n+1_M^2 + C Δ t ∑_n=0^m E_ϕ^n+1_M^2 + C Δ t ∑_n=0^m d_t E_ϕ^n+1_M^2
+ C Δ t ∑_n=0^m E_u^n+1_TM^2 + C Δ t ∑_n=0^m E_c_f^n+1_M^2 + C (h^4+k^4) + C (Δ t)^2, m≤ N-1,
where the positive constant C is independent of h,k and Δ t.
Subtracting (<ref>) from (<ref>), we have that
ρ_s θ_s d_t ( (1-Ψ)E_T - E_ϕ T )_i,j^n+1 +
ρ_f θ_f d_t ( Ψ E_T +E_ϕ T )_i,j^n+1
+ [D_x E_v ]^x,n+1_i,j + [D_y E_v ]^y,n+1_i,j
= a_v( Ψ^n+1_i,j ) H_r(Z^n_i,j) R(C^n+1_f,i,j, Z^n_i,j ) -
a_v( ϕ^n+1_i,j ) H_r(T^n+1_i,j )R(c_f,i,j^n+1, T^n+1_i,j )
+ S^n+1_3,i,j + S^n+1_4,i,j,
where
S_3,i,j^n+1 = ∂ [ ( ρ_s(1-ϕ) θ_s + ρ_f ϕθ_f) T ] /∂ t |_i,j^n+1 - d_t [ ( ρ_s(1-ϕ) θ_s + ρ_f ϕθ_f) T ]_i,j^n+1,
and
S_4,i,j^n+1 = [D_x v]^x,n+1_i,j + [D_y v]^y,n+1_i,j - ( ∂ v^x,n+1_i,j/∂ x + ∂ v^y,n+1_i,j/∂ y ).
Noting (<ref>), we can obtain
E_v,i+1/2,j ^x,n+1 = ρ_f θ_f U^x,n+1_i+1/2,jΠ_h Z^n+1_i+1/2,j - ρ_f θ_f u^x,n+1_i+1/2,j T^n+1_i+1/2,j
- ( λ ( Π_h Ψ^n+1_i+1/2,j ) [d_x Z]^n+1_i+1/2,j -
λ(ϕ^n+1_i+1/2,j ) ∂ T_i+1/2,j^n+1/∂ x),
and
E_v,i,j+1/2 ^y,n+1 = ρ_f θ_f U^y,n+1_i,j+1/2Π_h Z^n+1_i,j+1/2 - ρ_f θ_f u^y,n+1_i,j+1/2 T^n+1_i,j+1/2
- ( λ ( Π_h Ψ^n+1_i,j+1/2 ) [d_y Z]^n+1_i,j+1/2 -
λ(ϕ^n+1_i,j+1/2 ) ∂ T_i,j+1/2 ^n+1/∂ y).
Multiplying (<ref>) by E_T,i,j^n+1 h k and making summation on i,j for 1 ≤ i ≤ N_x, 1 ≤ j ≤ N_y lead to
ρ_s θ_s( d_t ( (1-Ψ)E_T - E_ϕ T )^n+1 , E_T^n+1)_M + ρ_f θ_f( d_t ( Ψ E_T +E_ϕ T )^n+1, E_T^n+1)_M
+ ( D_x E_v^x,n+1 ,E_T^n+1 )_M + (D_y E_v^y,n+1, E_T^n+1 )_M
= ( a_v( Ψ^n+1 ) H_r(Z^n ) R( C^n+1_f, Z^n ) -
a_v( ϕ^n+1 ) H_r(T^n+1 )R(c_f^n+1, T^n+1 ), E_T^n+1)_M
+ ( S^n+1_3 + S^n+1_4, E_T^n+1 )_M.
The first two terms on the left side of (<ref>) can be transformed into
ρ_s θ_s( d_t ( (1-Ψ)E_T -E_ϕ T )^n+1 , E_T^n+1)_M + ρ_f θ_f( d_t ( Ψ E_T +E_ϕ T )^n+1, E_T^n+1)_M
= ρ_s θ_s( E_T^n+1 d_t (1-Ψ^n+1 ) + (1-Ψ^n ) d_t E_T^n+1 , E_T^n+1)_M
- ρ_s θ_s( T^n+1 d_t E_ϕ^n+1 + E_ϕ^n d_t T^n+1 , E_T^n+1)_M
+ ρ_f θ_f( d_t Ψ^n+1 E_T^n+1 + Ψ^n d_t E_T^n+1 + T^n+1 d_t E_ϕ^n+1 + E_ϕ^n d_t T^n+1 , E_T^n+1)_M
= ( ( ρ_s θ_s ( 1- Ψ^n) + ρ_f θ_fΨ^n ) d_tE_T^n+1 , E_T^n+1)_M
+ ( ( ρ_s θ_s d_t ( 1- Ψ^n+1 ) + ρ_f θ_f d_t Ψ^n+1) E_T^n+1 , E_T^n+1)_M
+ ( ( ρ_f θ_f - ρ_s θ_s ) ( d_t E_ϕ^n+1 T^n+1 + E_ϕ^n d_t T^n+1 ) , E_T^n+1)_M ,
where we should note that ρ_s θ_s ( 1- Ψ^n_i,j ) + ρ_f θ_fΨ^n_i,j≥min{ρ_s θ_s, ρ_f θ_f} due to (<ref>).
Taking notice of (<ref>) and Lemma <ref>, the third term on the left side of (<ref>) can be transformed into
( D_x E_v^x,n+1 ,E_T^n+1 )_M =
-(E_v^x,n+1, d_x E_T^n+1 )_x
= ( λ ( Π_h Ψ^n+1 ) [d_x Z]^n+1 - λ(ϕ^n+1 ) ∂ T^n+1/∂ x , d_x E_T^n+1)_x
- ρ_f θ_f ( U^x,n+1Π_h Z^n+1 - u^x,n+1 T^n+1, d_x E_T^n+1 )_x
= ( λ ( Π_h Ψ^n+1 ) d_x E_T^n+1 , d_x E_T^n+1)_x +
( d_x T^n+1λ ( Π_h E_ϕ^n+1 ), d_x E_T^n+1)_x
+( λ ( Π_h ϕ^n+1 ) d_x T^n+1 - λ ( ϕ^n+1 ) ∂ T^n+1/∂ x, d_x E_T^n+1)_x
- ρ_f θ_f ( U^x,n+1Π_h E_T^n+1 , d_x E_T^n+1 )_x - ρ_f θ_f ( E_u^x,n+1Π_h T^n+1, d_x E_T^n+1 )_x
- ρ_f θ_f( u^x,n+1 (Π_h T^n+1-T^n+1), d_x E_T^n+1)_x ,
where we should note that λ ( Π_h Ψ^n+1 ) ≥min{λ_s, λ_f } due to (<ref>).
Taking notice of (<ref>) and Lemma <ref>, the last term on the left side of (<ref>) can be transformed into
( D_y E_v^y,n+1 ,E_T^n+1 )_M =
-(E_v^y,n+1, d_y E_T^n+1 )_y
= ( λ ( Π_h Ψ^n+1 ) [d_y Z]^n+1 - λ(ϕ^n+1 ) ∂ T^n+1/∂ y , d_y E_T^n+1)_y
- ρ_f θ_f ( U^y,n+1Π_h Z^n+1 - u^y,n+1 T^n+1, d_y E_T^n+1 )_y
= ( λ ( Π_h Ψ^n+1 ) d_y E_T^n+1 , d_y E_T^n+1)_y +
( d_y T^n+1λ ( Π_h E_ϕ^n+1 ), d_y E_T^n+1)_y
+( λ ( Π_h ϕ^n+1 ) d_y T^n+1 - λ ( ϕ^n+1 ) ∂ T^n+1/∂ y, d_y E_T^n+1)_y
- ρ_f θ_f ( U^y,n+1Π_h E_T^n+1 , d_y E_T^n+1 )_y - ρ_f θ_f ( E_u^y,n+1Π_h T^n+1, d_y E_T^n+1 )_y
- ρ_f θ_f( u^y,n+1 (Π_h T^n+1-T^n+1), d_y E_T^n+1)_y .
Recalling (<ref>) and (<ref>), we have
R(c_f, T ) = k_c ( 1- 1/ 1+ k_s(T) /k_c ) c_f.
Thus, taking notice of (<ref>) and using the Cauchy-Schwarz inequality, the first term on the right side of (<ref>) can be estimated by
( a_v( Ψ^n+1 ) H_r(Z^n ) R( C^n+1_f, Z^n ) -
a_v( ϕ^n+1 ) H_r(T^n+1 )R(c_f^n+1, T^n+1 ), E_T^n+1)_M
= ( a_v( Ψ^n+1 ) R( C^n+1_f, Z^n ) ( H_r(Z^n) - H_r(T^n+1 ) ), E_T^n+1)_M
+ ( a_v( Ψ^n+1 ) H_r(T^n+1 ) ( R( C^n+1_f, Z^n )- R(c_f^n+1, T^n+1 ) ), E_T^n+1)_M
+ ( R(c_f^n+1, T^n+1 ) H_r(T^n+1 ) ( a_v( Ψ^n+1 ) - a_v( ϕ^n+1 ) ), E_T^n+1)_M
≤ C E_T^n+1^2_M + C E_T^n^2_M + C E_c_f^n+1_M^2
+ C E_ϕ^n+1_M^2
+ C(Δ t)^2 .
Combining (<ref>) with (<ref>)-(<ref>) and using the Cauchy-Schwarz inequality leads to
E_T^n+1^2_M - E_T^n^2_M /Δ t +
E_T^n+1-E_T^n^2_M /Δ t + d_x E_T^n+1_x^2 + d_y E_T^n+1_y^2
≤ C E_T^n+1_M^2 + C E_ϕ^n+1_M^2 + C d_t E_ϕ^n+1_M^2 + C E_u^n+1_TM^2
+ C E_T^n _M^2 + C E_c_f^n+1_M^2 + C (h^4+k^4) + C (Δ t)^2,
which leads to the desired result (<ref>).
We are now in a position to derive our main results.
Suppose the analytical solutions are sufficiently smooth, then for the fully-discrete scheme (<ref>)-(<ref>), we have
E_ϕ^m+1^2_M + E_u^m+1_TM^2 + E_p^m+1_M^2 + E_c_f^m+1^2_M + E_T^m+1^2_M
≤ C ( (Δ t)^2+h^4+k^4), m≤ N-1,
where the positive constant C is independent of h,k and Δ t.
Combining Lemmas <ref>-<ref> leads to
E_ϕ^m+1^2_M + E_u^m+1_TM^2 + Δ t ∑_n=0^md_t E_p^n+1_M^2 + E_c_f^m+1^2_M
+ Δ t ∑_n=0^m ( d_x E_c_f^n+1_x^2 + d_y E_c_f^n+1_y^2 )+ E_T^m+1^2_M
+ Δ t ∑_n=0^m ( d_x E_T^n+1_x^2 + d_y E_T^n+1_y^2 )
≤ C Δ t ∑_n=0^m E_c_f^n+1_M^2+ C Δ t ∑_n=0^m E_T^n+1_M^2 + C Δ t∑_n=0^m E_ϕ^n+1_M^2
+ C Δ t ∑_n=0^m E_u^n+1_TM^2 + O( (Δ t)^2+h^4+k^4), m≤ N-1.
Then, supposing that Δ t is sufficiently small and applying the discrete Gronwall inequality, we have
E_ϕ^m+1^2_M + E_u^m+1_TM^2 + Δ t ∑_n=0^md_t E_p^n+1_M^2 + E_c_f^m+1^2_M
+ Δ t ∑_n=0^m ( d_x E_c_f^n+1_x^2 + d_y E_c_f^n+1_y^2 )+ E_T^m+1^2_M
+ Δ t ∑_n=0^m ( d_x E_T^n+1_x^2
+ d_y E_T^n+1_y^2 )
≤ C ( (Δ t)^2+h^4+k^4), m≤ N-1.
Furthermore, noting
E_p^m+1=Δ t ∑_n=0^m d_t E_p^n+1 + E_p^0,
we have
E_p^m+1_M^2 ≤ 2TΔ t ∑_n=0^m d_t E_p^n+1_M^2 + 2 E_p^0_M^2 ≤ C ( (Δ t)^2+h^4+k^4),
which leads to the desired result (<ref>).
§ NUMERICAL SIMULATION
In this section, we provide some two- and three-dimensional numerical experiments to assess the constructed scheme (<ref>)-(<ref>) and (<ref>).
In our simulation, we set
k_s=k_s0exp( E_g/R_g(1/T_0- 1/T ) ),
where k_s0 is the surface reaction rate at temperature T_0, E_g is the activation energy, and R_g is the molar gas constant <cit.>. The reaction heat
H_r(T)=| -9702 +16.97 T - 0.00234 T^2 |
is generated by the reaction per unit mole of acid consumed <cit.>.
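For reference, these two temperature-dependent coefficients can be evaluated directly; the snippet below simply transcribes them, and the sample evaluation uses the physical parameters quoted later for the dissolution examples (the probe temperature 320 K is an arbitrary choice made for illustration).

```python
import numpy as np

def k_s(T, k_s0, E_g, R_g, T0):
    """Arrhenius-type surface reaction rate k_s(T) = k_s0 * exp(E_g/R_g * (1/T0 - 1/T))."""
    return k_s0 * np.exp(E_g / R_g * (1.0 / T0 - 1.0 / T))

def H_r(T):
    """Reaction heat per unit mole of acid consumed (coefficients from the text)."""
    return np.abs(-9702.0 + 16.97 * T - 0.00234 * T**2)

# example evaluation with the physical parameters of the dissolution examples (T in kelvin)
print(k_s(320.0, k_s0=2e-3, E_g=5.02416e4, R_g=8.314, T0=2.98e2))
print(H_r(320.0))
```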
§.§ Convergence rates for the wormhole model with heat transmission process in 2- and 3-D cases
In this subsection, the domain is Ω=(0,1)^d, the time interval is J=[0,1], and
Δ t=h^2.
We set the following parameters:
α = 1; ρ_s = 10; a_0 = 1; k_c = 1;
k_s0 = 1;
E_g = 1; R_g = 1;
γ = 1; μ =1;
D = 1E-2;
ρ_f = 1; θ_s = 1; θ_f = 10;
λ_s = 10; λ_f = 1;
C_I = 1,
and test the following system to verify the convergence rates
∂(ϕ c_f)/∂ t+∇·(u c_f)
-∇·(ϕ D ∇ c_f)=k_c a_v(c_s-c_f)+f_P c_f+f_I c_I+ g,
∂ϕ/∂t= R(c_f,T) a_v α/ρ_s + h,
∂[ ( ρ_s(1-ϕ) θ_s + ρ_f ϕθ_f ) T ]/ ∂t + ∇·(ρ_f θ_f u T) = ∇·( λ∇T ) + a_v(ϕ) H_r(T)R(c_f, T)+ q,
where g, h and q are three source functions introduced so that the analytic solutions given in the following two examples satisfy the system.
Example 1 in 2-D case:
Here the initial condition and the right hand side of the equation are computed according to the analytic solution given as below:
{[ p(x,t)=t x^2(1-x)^2y^2(1-y)^2 + 1,; c_f(x,t)=1 + tcos(π x)cos(π y),; T(x,t)=1/2tcos(π x)cos(π y) + 10,; ϕ(x,t)= 1/4tx^2(1-x)^2sin(π y) + 1/4. ].
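The source terms g, h and q are obtained by substituting this analytic solution into the system above. As an illustration, the porosity source h can be generated symbolically as sketched below; since T_0 (entering k_s(T)) is not listed among the convergence-test parameters, the value T_0=1 used here is an arbitrary choice made purely for illustration.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
# manufactured solution of Example 1 (2-D)
c_f = 1 + t * sp.cos(sp.pi * x) * sp.cos(sp.pi * y)
T   = sp.Rational(1, 2) * t * sp.cos(sp.pi * x) * sp.cos(sp.pi * y) + 10
phi = sp.Rational(1, 4) * t * x**2 * (1 - x)**2 * sp.sin(sp.pi * y) + sp.Rational(1, 4)
phi0 = phi.subs(t, 0)                                   # initial porosity

# coefficients of the convergence test listed above; T0 is not given there,
# so T0 = 1 is an arbitrary choice for this illustration only
alpha, rho_s, a0, k_c, k_s0, E_g, R_g, T0 = 1, 10, 1, 1, 1, 1, 1, 1
k_s = k_s0 * sp.exp(sp.Rational(E_g, R_g) * (sp.Rational(1, T0) - 1 / T))
a_v = a0 * (1 - phi) / (1 - phi0)
R   = k_c * (1 - 1 / (1 + k_s / k_c)) * c_f

# porosity equation of the test system: dphi/dt = R(c_f,T) * a_v * alpha / rho_s + h
h = sp.simplify(sp.diff(phi, t) - R * a_v * sp.Rational(alpha, rho_s))
print(h)
```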
The numerical results are listed in Tables <ref>-<ref> and give solid supporting evidence for the expected second-order convergence of the constructed scheme in the 2-D case for the wormhole model, which is consistent with the error estimates in Theorem <ref>.
Example 2 in 3-D case:
Here the initial condition and the right hand side of the equation are computed according to the analytic solution given as below:
{[ p(x,t)=(e^t-1) x^4(1-x)^4cos(π y)cos(π z) + 1,; c_f(x,t)=1 + tx^3(1-x)^3cos(π y)cos(π z),; T(x,t)=1/2(e^t-1)cos(π x)cos(π y)cos(π z) + 10,; ϕ(x,t)= 1/4(e^t-1)cos(π x)sin(π y)cos(π z) + 1/2. ].
The numerical results are listed in Tables <ref>-<ref> and give solid supporting evidence for the expected second-order convergence of the constructed scheme in the 3-D case for the wormhole model, which is consistent with the error estimates in Theorem <ref>.
§.§ Simulation of dissolution patterns
In the following examples, we set Ω=(0,0.2)^d.
Here we adopt the following, more realistic physical parameters:
α = 5E-2; ρ_s = 2.71E3; a_0 = 5.0E-1; k_c = 1E-3; k_s0 = 2E-3;
E_g = 5.02416E4; R_g = 8.314;
γ = 1E0; μ =1.0E-3;
D = 1E-9; ρ_f = 1.01E3; θ_s = 2.0E2; θ_f = 4.184E3;
λ_s = 5.526; λ_f = 5.8E-1;C_I = 1E3.
Initial conditions are given as below:
T_0 = 2.98E2; p_0 = 1.52E5; c_f0=0 .
Example 3 with Neumann boundary condition for temperature in 2-D case:
In this example, we set J=[0,1×10^7], Δ t=1×10^5 s. The distributions of initial porosity and permeability in 2-D case are listed as follows:
{[ ϕ_0=0.5, K_0=10^-7, (x,y)=(1.25E-3,1.0125E-1),; ϕ_0=0.6, K_0=10^-6, (x,y)=(1.25E-3,5.125E-2),; ϕ_0=0.2, K_0=10^-8, otherwise.; ].
We set the following right-hand side terms:
f_I={
[ 1E-4 m/s, x=1.25E-3 ,; 0, otherwise.; ]
.
f_P={
[ -1E-4 m/s, x=1.9875E-1,; 0, otherwise.; ]
.
Example 4 with Robin boundary condition for temperature in 2-D case: In this example, we set the temperature to 298K on the left side and impose a homogeneous Neumann condition on the other boundaries. Here J=[0,1×10^6], Δ t=1×10^4 s.
In this example, the distributions of initial porosity and permeability in 2-D case are listed as follows:
{[ ϕ_0=0.5, K_0=10^-7, (x,y)=(1.25E-3,1.0125E-1),; ϕ_0=0.6, K_0=10^-6, (x,y)=(1.25E-3,5.125E-2),; ϕ_0=0.2, K_0=10^-8, otherwise.; ].
We set the following right-hand side terms:
f_I={
[ 5E-4 m/s, x=1.25E-3 ,; 0, otherwise.; ]
.
f_P={
[ -5E-4 m/s, x=1.9875E-1,; 0, otherwise.; ]
.
Example 5 with Neumann boundary condition for temperature in 3-D case:
In this example, we set J=[0,1×10^6], Δ t=1×10^4 s. The distributions of initial porosity and permeability in 3-D case are listed as follows:
{[ ϕ_0=0.5, K_0=10^-7, (x,y,z)=(2.50E-3,1.025E-1,1.025E-1),; ϕ_0=0.6, K_0=10^-6, (x,y,z)=(2.50E-3,5.25E-2,5.25E-2),; ϕ_0=0.2, K_0=10^-8, otherwise.; ].
We set the following right-hand side terms:
f_I={
[ 1E-4 m/s, x=2.50E-3 ,; 0, otherwise.; ]
.
f_P={
[ -1E-4 m/s, x=1.975E-1,; 0, otherwise.; ]
.
The distributions of porosity for Examples 3-5 in the 2- and 3-D cases are presented in Figures <ref>-<ref>, respectively. These results are computed on a grid of 80 × 80 cells in the 2-D case and 40 × 40 cells in the 3-D case. It can be clearly seen that the heterogeneity of porosity and permeability in the formation has a significant influence on the wormhole formation dynamics, as it promotes the non-uniformity of the chemical reaction. Besides, the average porosity increases in all cases, which reveals that the matrix is dissolved by the acid.
§ CONCLUDING REMARKS
In this paper, we developed a fully decoupled and linear scheme for the wormhole model with heat transmission process on staggered grids, which only requires solving a sequence of linear elliptic equations at each time step. An error analysis for the velocity, pressure, concentration, porosity and temperature in different norms is established rigorously. Finally, we presented numerical experiments in two- and three-dimensional cases to
verify the theoretical analysis and the effectiveness of the constructed scheme.
arXiv:2307.02088v2 [cs.SE], 5 July 2023

Trust in Software Supply Chains: Blockchain-Enabled SBOM and the AIBOM Future

Boming Xia, Dawen Zhang, Yue Liu, Qinghua Lu, Zhenchang Xing, Liming Zhu
Software Bill of Materials (SBOM) serves as a critical pillar in ensuring software supply chain security by providing a detailed inventory of the components and dependencies integral to software development. However, challenges abound in the sharing of SBOMs, including potential data tampering, hesitation among software vendors to disclose comprehensive information, and bespoke requirements from software procurers or users. These obstacles have stifled widespread adoption and utilization of SBOMs, underscoring the need for a more secure and flexible mechanism for SBOM sharing.
This study proposes a novel solution to these challenges by introducing a blockchain-empowered approach for SBOM sharing, leveraging verifiable credentials to allow for selective disclosure. This strategy not only heightens security but also offers flexibility. Furthermore, this paper broadens the remit of SBOM to encompass AI systems, thereby coining the term AI Bill of Materials (AIBOM). This extension is motivated by the rapid progression in AI technology and the escalating necessity to track the lineage and composition of AI software and systems.
Particularly in the context of foundational models such as large language models (LLMs), it is crucial to understand their composition and dependencies as these models often form the basis for further development, creating intricate dependencies.
The evaluation of our solution indicates the feasibility and flexibility of the proposed SBOM sharing mechanism, positing a new solution for securing (AI) software supply chains.
§ INTRODUCTION
Software supply chain security is an increasingly critical concern due to vulnerabilities in both in-house and third-party software components, posing substantial threats to organizations and individuals <cit.>. These vulnerabilities can result in system malfunctions or failures, as exemplified by the ChatGPT outage[<https://openai.com/blog/march-20-chatgpt-outage>] on March 20, 2023, caused by a bug in the open-source library redis-py, which inadvertently exposed some users' chat history.
Moreover, attackers can exploit these vulnerabilities, leading to substantial threats for affected organizations, as demonstrated by the SolarWinds attack <cit.>. In recent years, software supply chain attacks have experienced a significant surge, with an average annual increase of 742% reported from 2019 to 2022, according to Sonatype's report <cit.>. This surge underscores the urgent need to address and manage software supply chain vulnerabilities effectively.
To mitigate these risks and enhance software supply chain security, the implementation of robust software bill of materials (SBOM) practices becomes essential. An SBOM provides a formal, machine-readable inventory of software components used in the production of a software product, including their dependency relationships <cit.>. Recognizing the gravity of the situation, the US government issued an executive order in May 2021, mandating companies trading with the US government to provide SBOMs as a proactive step towards enhancing software supply chain security <cit.>.
Nevertheless, SBOM sharing presents significant challenges, including SBOMs' susceptibility to tampering, the reluctance of software vendors to share complete information, and varying requirements among software procurers and users, as mentioned in <cit.>. Furthermore, despite the ongoing maturation of SBOM adoption efforts, a lack of universally applicable solutions further complicates the landscape. Additionally, a report from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) emphasizes the importance of interoperability among SBOM sharing solutions and the need for automation to facilitate widespread SBOM adoption <cit.>.
In response to these gaps, this study proposes a blockchain-based solution for SBOM sharing via verifiable credentials (VCs). Blockchain offers a secure and transparent method of storing and sharing data, resistant to tampering and fraud, while providing a decentralized system that promotes trust and accountability <cit.>.
By employing VCs, the proposed solution ensures the authenticity and integrity of information and allows for fine-grained selective disclosure and zero-knowledge-proofs (ZKPs) <cit.> of sensitive data, thus enhancing security and flexibility during SBOM sharing <cit.>. The proof-of-concept experiments show that our proposed solution for SBOM sharing is feasible and represents a promising approach to overcome the challenges of tampering and inflexibility in SBOM sharing.
The main contributions of this paper include:
* We undertake a comprehensive examination of the standard scenarios of SBOM sharing. These include secure full disclosure, secure selective disclosure, and secure need-to-know disclosure, each with its unique challenges (see Sections <ref>). This analysis contributes to a deeper understanding of SBOM sharing landscapes and the difficulties therein.
* We introduce a blockchain-based architecture that integrates VCs to support progressive trust mechanisms like selective disclosure and zero-knowledge proofs for SBOM sharing. This solution establishes a chain of trust via VC reference, offering a holistic solution to managing the SBOM sharing lifecycle. By harnessing the potential of blockchain and smart contracts, the proposed architecture enhances interoperability and automation, thus facilitating SBOM sharing and catalyzing wider SBOM adoption.
The remainder of this paper is organized as follows. Section <ref> provides an overview of SBOM, blockchain, and VCs.
Section <ref> details the motivating scenarios from which our proposed solution stems.
Section <ref> presents the design of the proposed architecture.
Section <ref> details our proof-of-concept evaluation of the proposed solution.
Section <ref> reviews related work. Section <ref> discusses the evolution from SBOM to AIBOM, draws conclusions, and outlines avenues for future work.
§ BACKGROUND
§.§ Software Bill of Materials (SBOM)
At its core, an SBOM represents an inventory of software components. The U.S. National Telecommunications and Information Administration (NTIA) stipulates that an SBOM should contain seven key data fields: component supplier name, component name, component version, unique identifiers (e.g., package url), dependency relationship, SBOM data author, and timestamp <cit.>.
However, as reported by <cit.>, not all software vendors comply with these minimum data field recommendations, and many are hesitant to disclose the complete SBOM information to downstream customers.
Current prevalent SBOM standards, namely SPDX[<https://spdx.dev/>], CycloneDX[<https://cyclonedx.org/>], and SWID Tagging[<https://csrc.nist.gov/projects/Software-Identification-SWID>], may only mandate a subset of the recommended data fields. For instance, CycloneDX version 1.4[<https://cyclonedx.org/docs/1.4/json/#metadata_authors>] does not require the inclusion of SBOM author(s) information.
In light of this, there is a pressing need for an SBOM sharing mechanism that is not only secure but also flexible.
§.§ Blockchain
The unique advantage of blockchain as a decentralized digital ledger lies in the implementation of cryptographic protocols and distributed consensus mechanisms, ensuring the data's immutability and resistance to tampering. This feature renders blockchain ideal for applications demanding trust, transparency, and security, such as financial transactions and supply chain management <cit.>. With the use of smart contracts, self-executing agreements can be achieved, making blockchain more efficient and reducing the need for intermediaries or central authorities <cit.>.
In the context of blockchain, Decentralized Identifiers (DIDs) <cit.> represent a novel identifier class enabling verifiable, self-sovereign digital identity. Unlike traditional identifiers, DIDs are entirely controlled by the subject, independent of any centralized registry or certificate authority. These identifiers, in the form of URLs, link a DID subject to methods for trustworthy interactions. Residing on a distributed ledger like blockchain, DIDs are globally unique, persistent, and do not require a central registration authority. DIDs are integral to blockchain technology, providing a framework for designing and utilizing blockchain-anchored systems.
§.§ Verifiable credential (VC)
VCs are digital credentials that allow individuals to securely share and prove their information. They are designed to be cryptographically secure and tamper-proof, providing a way to authenticate individuals and their data without relying on centralized authorities or third-party verifiers. The W3C VCs Data Model <cit.> provides a common language and structure for issuing, storing, and presenting VCs, making them interoperable across different systems.
Blockchain provides a secure, transparent, and decentralized platform for issuing and verifying VCs. It ensures data security and immutability, while verifiable claims within VCs authenticate issuers, validate information, enable contextual verification, manage revocation and expiry, and promote interoperability and standardization.
Thus, the integration of blockchain technology and verifiable claims enhances the reliability and trustworthiness of the credential sharing and distribution process, providing a comprehensive solution for secure and verifiable digital identity management.
§ PRELIMINARIES AND MOTIVATING SCENARIOS
The motivation behind this work is rooted in several significant findings from a prior empirical study <cit.>. While the market primarily emphasizes SBOM generation, ensuring the integrity of the resulting SBOMs against tampering threats, as well as their subsequent distribution and sharing, remains a substantial obstacle to the broader adoption of SBOMs.
§.§ Preliminaries and terminologies
To elucidate the motivation clearly, we introduce the following preliminaries and terminologies. An overview of the Trust Chain based on VC referencing is presented in Fig. <ref>.
§.§.§ VCs
This section explains the different types of VCs. A Trust Chain <cit.> can be established via VC referencing.
* Eligibility VCs: Oversight authorities issue these VCs to software vendors that demonstrate a strong track record in secure software development, adherence to industry standards, and robust security practices, thereby qualifying them for issuing SBOM VCs. These Eligibility VCs are embedded within SBOM VCs by referencing their uniform resource identifiers (URIs). Note that embedding VCs may require an integrity protection mechanism, such as a linking mechanism that cryptographically binds the contents of the target VC document to the URI itself <cit.>, which is beyond the scope of this paper.
* SBOM VCs: These VCs encapsulate the SBOM metadata and are linked to SBOMs (e.g., via hash values or URIs) without revealing the actual SBOM details.
This allows for selective disclosure, where only certain parts of the SBOM are revealed depending on the requirements of the software procurer or the policies of the software vendor. Given the complex components and dependencies in a software product, we further categorize SBOM VCs into component-level and system-level (see Fig. <ref>):
* Component SBOM VCs (cSBOM VCs): For in-house components within a software product, the vendor's own Eligibility VC is embedded in the cSBOM VCs. For third-party components, if SBOM VCs embedded with Eligibility VCs are already attached by upstream vendors and no customized modifications are made, downstream vendors may embed these credentials within the System SBOM VC of the software product after successful verification. However, if the third-party components lack SBOM VCs, the downstream vendor can choose to: 1) send a SBOM generation request to the upstream vendor if possible; 2) perform security checks and generate cSBOM VCs embedded with its own Eligibility VC for these components, if permitted (e.g., license permits etc),
thereby increasing trustworthiness; or, 3) leave the credentials blank. It is always recommended to request or generate cSBOM VCs in such cases, as otherwise it could impact the trustworthiness of the downstream vendor's software product, undermine the vendor's reputation, and compromise the trust chain formed via credential referencing.
If a downstream vendor modifies the third-party components, the same processes for in-house components apply.
* System SBOM VCs (sSBOM VCs): System-level SBOM VCs incorporate all available cSBOM VCs, providing a holistic view of the entire software system; a schematic sketch of this nesting is given right after this list.
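The following minimal sketch illustrates the credential nesting described above. It is purely schematic: the field names, DIDs and URIs are invented for illustration and only loosely follow the shape of a W3C verifiable credential; they are not prescribed by this paper or by any SBOM standard.

```python
# Illustrative (non-normative) layout of the credential nesting described above.
# All field names, DIDs and URIs are invented for this sketch.
eligibility_vc_uri = "did:example:oversight#eligibility-vc-42"   # issued by the trust anchor

component_sbom_vc = {
    "type": ["VerifiableCredential", "ComponentSBOMCredential"],
    "issuer": "did:example:vendor-a",
    "credentialSubject": {
        "componentName": "libfoo",
        "sbomURI": "https://vendor-a.example/sboms/libfoo-1.2.3.spdx.json",
        "sbomHash": "sha256:...",                     # binds the VC to the off-chain SBOM
        "eligibilityCredential": eligibility_vc_uri,  # embedded Eligibility VC reference
    },
    "proof": {"type": "<signature suite>", "jws": "..."},  # vendor's signature (omitted)
}

system_sbom_vc = {
    "type": ["VerifiableCredential", "SystemSBOMCredential"],
    "issuer": "did:example:vendor-a",
    "credentialSubject": {
        "productName": "FooSuite 2.0",
        # the system-level credential references all available component-level credentials
        "componentCredentials": ["did:example:vendor-a#csbom-libfoo-1.2.3"],
        "eligibilityCredential": eligibility_vc_uri,
    },
    "proof": {"type": "<signature suite>", "jws": "..."},
}
```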
§.§.§ Stakeholders
Each stakeholder plays a critical role in ensuring the integrity and trustworthiness of the SBOM sharing chain:
* Independent Oversight (Trust Anchor): Oversight authorities, such as government agencies, accreditation authorities, or industry-standard certification bodies, are responsible for verifying the identity and eligibility of software vendors and issuing Eligibility VCs to qualifying vendors, maintaining the integrity of the system, setting the standards for eligibility, and handling disputes. They also conduct regular eligibility audits and impose penalties in cases of violations, such as falsifying SBOM VCs or failing to adhere to the eligibility criteria, which could result in penalties ranging from fines to the revocation of Eligibility VCs, which could have significant implications for the vendor, including loss of trust and potential legal consequences.
Independent oversight is the issuer of Eligibility VCs.
* Software Vendors: Vendors must obtain Eligibility VCs from oversight authorities to be eligible for credential issuance. Qualified vendors with valid Eligibility VCs self-issue SBOM VCs for their software systems or products, embedding their Eligibility VCs within.
Software vendors are the issuer of SBOM VCs, the holder of Eligibility VCs, and the verifier of both SBOM VCs and Eligibility VCs from upstream vendors.
* Software Procurers: Procurers, including downstream software vendors in some cases, verify credentials when purchasing or importing third-party software or components, ensuring that only trustworthy software is used in their operations or further development processes.
Software procurers are the verifier of the SBOM VCs and Eligibility VCs.
Note that procurers may only verify the sSBOM VCs (and the vendor's Eligibility VC), or, in the cases of downstream vendors, the cSBOM VCs of the components they procure/introduce although these components can contain sub-components with cSBOM VCs. It is the responsibility of each down-stream stakeholder to verify the VCs from the last up-stream vendor, thus forming a trust chain.
§.§.§ Scenario overview
Multiple software vendors participate in a permissioned blockchain network, each responsible for their own software products. The blockchain network serves as a secure and transparent platform for sharing and verifying SBOM VCs. Each vendor joins the network by obtaining Eligibility VCs from oversight authorities, which certify their qualification and eligibility for issuing SBOM VCs.
Scenario 1: Secure Full Disclosure. The software vendor is willing to share the complete SBOMs for a purchased software product. Selective disclosure is optional, depending on whether the software procurer requires a complete or partial SBOM. However, since malicious parties may tamper with the generated SBOMs, secure SBOM sharing mechanisms are necessary to ensure the integrity and authenticity of the SBOMs.
Scenario 2: Secure Selective Disclosure. The software vendor is only willing to share partial SBOMs as the SBOMs contain sensitive information (e.g., proprietary algorithms or trade secrets) that the vendor prefers not to disclose. In this case, the provided SBOMs need to be selectively disclosed based on the vendor's and procurers' specifications and requirements. Common techniques supporting selective disclosure include atomic credentials, selective disclosure signatures, hashed values, and attribute-based encryption (ABE) <cit.>. Similar to scenario 1, SBOMs need to be securely shared.
Scenario 3: Secure Need-to-Know Disclosure. This can be considered as a special extension of scenario 2.
In cases where the procurer has access to only limited or no SBOM data, as the vendor could be a highly confidential corporation with strict privacy policies,
the vendor could be obliged to provide certain information to its customers under critical situations. For example, when a critical vulnerability is newly discovered, software procurers may want to determine if the purchased software is affected. Although the vendor is unwilling to disclose SBOM data, they can inform the customers whether the vulnerable component is included after verifying the validity of the concern using ZKPs <cit.>, without disclosing further information such as component version etc.
While no SBOM will be shared in this scenario, the inclusion information of the vulnerable component also needs to be securely communicated.
Based on the motivating scenarios above, we propose to encompass SBOM information into corresponding SBOM VCs, and utilize blockchain as the secure underlying SBOM VCs sharing infrastructure. Together, these techniques provide a secure and flexible mechanism for SBOM sharing.
§ ARCHITECTURE DESIGN
In this section, we delineate the architectural design of our proposed solution. The architecture, as depicted in Fig. <ref>, comprises three layers:
Service Layer: This layer serves as a crucial interface for stakeholders to interact with the blockchain network and access the system's services. It encompasses DID Services and Credential Services, which include SBOM VC Services and Eligibility VC Services. The inclusion of these components within the Service Layer stems from the objective of facilitating stakeholder engagement and ensuring a seamless user experience.
Off-Chain Data Layer: The Off-Chain Data Layer stores essential operational data such as SBOM data and eligible vendor information. Storing raw data in off-chain repositories can optimize on-chain storage capacity, leading to improved system efficiency and cost reduction. Besides, keeping data in stakeholders' respective off-chain repositories helps preserve data privacy, given the sensitivity of SBOM information in certain cases.
On-Chain Data Layer: The On-Chain Data Layer encompasses data directly stored on the blockchain, including verifiable credentials (VCs) and decentralized identifiers (DIDs), as well as a suite of smart contracts for the execution of predefined business logic. Blockchain guarantees the immutability and transparency of both on-chain data and operation logs via the underlying distributed ledger, consequently ensuring robust data integrity and provenance to enhance the overall system trustworthiness.
The allocation of components to either the on-chain or off-chain data layers is based on a careful balance between the benefits of decentralization, transparency, and immutability offered by the blockchain, and the need and capability for efficient and private data storage. By strategically distributing the components across the layers, our design achieves a well-rounded solution that ensures the qualities of security, efficiency, transparency, and integrity.
§.§ DID Services Module
DIDs are integral to stakeholder identity management. Stakeholders register their unique DIDs upon joining the network, which are stored on-chain via DID Registry. These DIDs can later be used to authenticate their actions and communications. The DID services facilitate the resolution of a DID to its associated DID Document, enabling stakeholders to verify each other's identities and establish secure channels. Stakeholders can update their DID Documents as needed, such as when adding a new public key or changing a service endpoint, ensuring the continued accuracy and relevance of their identity information. In the event of a private key compromise or when a DID is no longer required, stakeholders can revoke their DIDs, preventing misuse and maintaining the integrity of the blockchain network.
§.§ Credential Services: Eligibility VC Services Module
Eligibility VC are issued by oversight authorities (i.e., Trust Anchors) to certify that a software vendor has met certain criteria and is therefore eligible to issue SBOM VCs.
§.§.§ Components
Eligibility VC Issuance: Oversight authorities issue Eligibility VCs to software vendors who demonstrate adherence to secure software development standards and robust security practices.
These VCs are then stored on-chain, providing a transparent and immutable record of the vendor's eligibility status.
Eligibility VC Verification: When a software vendor presents an Eligibility VC, other stakeholders (like software procurers or its downstream vendors) can verify its authenticity. This involves checking the digital signature on the VC using the public key of the issuing authority, and confirming that the VC is valid. The on-chain record ensures that the verification process is reliable and secure.
Eligibility VC Revocation: If a software vendor violates the terms of their eligibility, such as by failing to adhere to the required standards or by falsifying SBOM VCs, the oversight authority can revoke the vendor's Eligibility VC. This involves adding the VC to a revocation list, which stakeholders can check when verifying an Eligibility VC. If a VC is on the revocation list, it is no longer valid and the vendor is no longer considered eligible.
§.§.§ Protocol
The detailed protocol of Eligibility VC Services is presented in Fig. <ref>. The sequence of interactions commences with the software vendor applying for an Eligibility VC from the oversight authority. The oversight authority verifies the vendor's identity and eligibility, which may encompass assessing the vendor's track record in secure software development, adherence to industry standards, and robust security practices. Once verified, the oversight authority issues an Eligibility VC to the vendor, while the related information is stored on the blockchain and managed by the Eligibility VC registry Smart Contract.
The vendor can then embed this Eligibility VC within their SBOM VCs. Software procurers, when considering a purchase, can verify the vendor's Eligibility VC by checking its validity. This verification process involves checking the digital signature on the VC using the public key of the issuing authority, and confirming that the VC is valid and has not been revoked.
In case of any violations by the vendor, the oversight authority can invoke penalty or even revoke the Eligibility VC. The revocation process involves adding the VC to a revocation list, which stakeholders can check when verifying an Eligibility VC. If a VC is on the revocation list, it is no longer valid and the vendor is no longer considered eligible. These processes are managed by the Penalty Registry and Eligibility VC registry Smart Contracts on the blockchain, respectively.
Detailed information about the eligible vendors is stored off-chain and linked to the Eligibility VCs. Although penalty and Eligibility VC revocation can be invoked for various reasons, we detail this step in SBOM VC Service protocols in Section <ref> as falsifying SBOM VCs is considered a typical reason where penalty and VC revocation are invoked.
§.§ Credential Services: SBOM VC Services Module
The SBOM VC Services module is primarily managed by the SBOM VC registry Smart Contract on-chain and plays a crucial role in overseeing the lifecycle of SBOM VCs.
§.§.§ Components
SBOM VC Issuance: The vendor initiates the SBOM generation process. Depending on the vendor's policies and the software procurer's requirements, the vendor can choose to fully or partially disclose the SBOM.
The SBOM VC contains the metadata of the SBOM while excluding the actual composition details. It is then issued and stored on the blockchain, with the actual SBOM data being stored off-chain and linked to the SBOM VC.
In the case of full disclosure, the vendor encapsulates the SBOM metadata within the VC and signs it using their private key, without further encryption of the SBOM data.
In the case of selective disclosure, various techniques can be utilized to encrypt or hash the SBOM data.
For instance, to address Scenario 2 (see Section <ref>), selective disclosure can be achieved through techniques such as ABE <cit.> and hashed values <cit.> etc, enabling fine-grained encryption of specific (sub-)attributes. This allows for the selective disclosure of authorized attributes while maintaining the privacy of others.
Such granularity and flexibility are vital in the context of SBOMs, where component details are presented as sub-attributes of the “Components" (CycloneDX) or “packages" (SPDX) attribute. A coarse-grained solution that only supports attribute-level disclosure would not suffice to address the need for partial disclosure of components.
As for Scenario 3, ZKPs can assist stakeholders in verifying the inclusion of specific data pieces within the SBOM.
SBOM VC Verification: Upon issuance, any receiving party can verify the SBOM VC. The verification process involves checking the VC's signature to ensure it was issued by a trusted vendor and has remained unaltered.
Furthermore, in the case of selective disclosure, the verification process also includes verifying the ZKPs or decrypting the VC using the decryption key provided by the vendor. This ensures that the disclosed information remains verifiable and trustworthy, while the undisclosed parts remain confidential.
SBOM VC Update: As the software product evolves over time, updates to the SBOM VC may be necessary. The vendor generates a new SBOM for the updated product and issues a new VC accordingly.
SBOM VC Revocation: If the vendor determines that a previously issued SBOM VC is no longer valid, they can revoke it. This revocation process entails adding the VC to a revocation list, which is subsequently checked during the verification process. Once a VC is included in the revocation list, it will fail the verification process and be considered invalid.
§.§.§ Protocol
As illustrated in Fig. <ref>, the sequence of SBOM VC Services begins when the software procurer requests the SBOM VC from the software vendor or when the vendor proactively issues an SBOM VC for a software product if one has not been issued yet. The vendor generates the SBOM and determines the level of information disclosure, whether it is full or selective, based on their policies and the procurer's requirements. Subsequently, the SBOM VC, which contains the SBOM metadata without the actual composition details, is stored on the blockchain. The issuance of the requested SBOM VC is confirmed by the SBOM VC registry Smart Contract in coordination with the vendor. The vendor can then transmit the requested SBOM VC to the procurer.
Upon receiving the SBOM VCs, software procurers can verify the vendor's SBOM VC as well as the embedded Eligibility VC by checking their validity. The verification processes include:
* Credential verification: This involves verifying the VC signature and checking the VC status on the blockchain. The VC signature is verified using the public key of the issuer (software vendor) to ensure its authenticity and integrity. The VC status is verified to ensure that the VC has not been revoked and remains valid.
* Selective disclosure verification: In the case of selective disclosure, the software procurer may need to decrypt the VC or verify the ZKPs off-chain. This is necessary because the actual SBOM data is stored off-chain and linked to the SBOM VC. The decryption key or the information required for ZKP verification would be provided by the vendor.
In the event of reported violations, such as falsified SBOM VCs that are confirmed by the oversight authority, penalties can be imposed using the Penalty Registry Smart Contract. In severe cases, the Penalty Registry can initiate the revocation of Eligibility VCs by invoking the Eligibility VC registry Smart Contract.
In the event of errors or updates to the SBOM, the vendor has the ability to revoke the SBOM VC. Additionally, if the software product undergoes updates over time, the vendor would generate a new SBOM for the updated product and issue a new VC accordingly. The old SBOM VCs can be revoked or retained for versioning purposes.
§ EVALUATION
To demonstrate the feasibility of our proposed architecture, we conducted a series of proof-of-concept evaluations focusing on the core components of the system. The evaluation metrics include throughput and the execution time of various operations credential issuance and verification. In addition, we discuss the privacy and security properties of the architecture.
§.§ Implementation
For the purpose of proof-of-concept and evaluating feasibility and performance, we implemented a minimal viable prototype. The prototype was built using Node.js v16.15.1, with Web3.js v1.3.6 and Solidity v0.8.2. The implementation supports the SPDX-2.2 JSON format. As described in Section <ref>, the signatures of VCs are stored on-chain in the smart contracts, while the VC issuance, selective disclosure proof, and verification processes are assumed to occur off-chain.
To achieve selective disclosure, we employed the hashed values technique from <cit.> in the form of a Merkle hash tree, similar to <cit.>.
The issued SBOM VCs and selective disclosure proofs are also in JSON format, containing the relevant Merkle trees.
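To illustrate the hashed-values technique, the following Python sketch (a simplified stand-in for the Node.js prototype) builds a Merkle tree over the flattened SBOM attributes, places only the root in the credential, and produces and verifies a selective-disclosure proof for a single attribute; the salting, padding and canonicalisation details of the real implementation are omitted.

```python
import hashlib, json

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaves_from_sbom(attrs: dict) -> list:
    """One leaf per (key, value) pair of the flattened SBOM attribute dictionary."""
    return [H(json.dumps({k: v}, sort_keys=True).encode()) for k, v in sorted(attrs.items())]

def merkle_root(nodes: list) -> bytes:
    while len(nodes) > 1:
        if len(nodes) % 2:                      # duplicate the last node on odd levels
            nodes = nodes + [nodes[-1]]
        nodes = [H(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def merkle_proof(nodes: list, index: int) -> list:
    """Sibling path (value, is_right_sibling) for the leaf at `index` -- O(log n) hashes."""
    path = []
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes = nodes + [nodes[-1]]
        sib = index + 1 if index % 2 == 0 else index - 1
        path.append((nodes[sib], sib > index))
        nodes = [H(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        index //= 2
    return path

def verify(leaf: bytes, path: list, root: bytes) -> bool:
    for sibling, is_right in path:
        leaf = H(leaf + sibling) if is_right else H(sibling + leaf)
    return leaf == root

# the vendor discloses only the "supplier" attribute of this toy SBOM
sbom = {"supplier": "Vendor A", "component": "libfoo", "version": "1.2.3", "license": "MIT"}
leaves = leaves_from_sbom(sbom)
root = merkle_root(leaves)                      # this value goes into the SBOM VC
i = sorted(sbom).index("supplier")
proof = merkle_proof(leaves, i)
print(verify(leaves[i], proof, root))           # True: procurer checks without seeing the rest
```

Both the proof size and the verification cost grow only with the depth of the tree, which is consistent with the logarithmic complexity discussed below.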
§.§ Feasibility and Performance Evaluation
We deployed the on-chain components on a local Ganache[<https://github.com/trufflesuite/ganache>] blockchain network and the user application on a Google Cloud virtual machine with Ubuntu 20.04 LTS, intel Xeon E5, and 8GB RAM.
The Ganache network was configured to simulate the Ethereum Mainnet[<https://ethereum.org/en/developers/docs/networks/#ethereum-mainnet>] with a Muir Glacier hardfork and an inter-block time of 12 seconds.
This setup allowed us to assess the performance of our system in a controlled environment.
To evaluate the overall responsiveness and efficiency of the deployed blockchain, we utilized Apache JMeter[<https://jmeter.apache.org>] to generate a load of 100 back-to-back requests. We measured the average transaction confirmation time, which yielded a value of 13.652 seconds.
Specifically, we conducted performance tests on SBOM VC generation, selective disclosure proof generation, and verification with varying numbers of corresponding attributes in the actual SBOMs. The results presented here are averaged over 20 runs. As the number of attributes increases, there is a corresponding increase in the VC generation time (see Fig. <ref>). This is expected, as a larger number of attributes naturally require more computational resources to process.
Similarly, the time required for selective disclosure proof generation and verification processes increases with the number of attributes included in the proof, as shown in Fig. <ref> and Fig. <ref>, respectively. However, it is important to note that the time used is not directly proportional to the number of attributes or the size of the Merkle Tree representing the complete VC. This reflects the advantages of the Merkle Tree structure, where the time complexity for proof and validation operations is logarithmic (𝒪(log n_leaf nodes)) <cit.>.
The transaction throughput is demonstrated in Fig. <ref>, indicating the number of credential generations completed per second with a total of 10,000 requests sent per data point. The results are averaged over 20 runs. The system consistently achieves a throughput range of 37-40 transactions per second (TPS) when the requests are sent over more than 10 threads. This highlights the system's ability to handle a substantial number of requests and process them efficiently.
These findings provide valuable insights into the scalability of our system, demonstrating its capability to handle VCs of varying complexity.
It is important to note that these results serve as an initial benchmark of our system's performance under a controlled load, for the purpose of proof-of-concept. However, in a real-world scenario, the performance may vary due to factors such as network conditions, transaction volume, and system usage patterns.
§.§ Security Analysis
The solution we propose for SBOM sharing harnesses the power of blockchain technology, bolstering the authenticity and integrity of data while preserving confidentiality. This approach augments the security of SBOM sharing. Blockchain technology offers a secure, transparent mechanism for data storage and sharing, and provides robust resistance against tampering and fraudulent activities. It also creates a decentralized system that promotes trust and accountability. We analyze the security of our proposed architecture based on the principles of confidentiality, integrity, and availability <cit.>.
Confidentiality: The architecture safeguards SBOMs from unauthorized access by employing a permissioned blockchain with stringent access controls. Furthermore, the use of SBOM VCs allows for fine-grained, selective disclosure of SBOM data, thereby protecting sensitive information.
Integrity: The architecture ensures the integrity of SBOM VCs via the blockchain and the vendor's signature. When a credential is evaluated, the system verifies the embedded signature using the owner's public key stored in the DID record. The blockchain-based storage of DIDs, DID documents, and credentials, managed by smart contracts, forestalls manipulation. Any adversarial attempt to alter the credentials would necessitate control over the vendor's blockchain account and private key. The inclusion of Trust Anchors and Eligibility VCs, and the implementation of penalty and credential revocation mechanisms, all contribute to the overall trustworthiness and integrity of the proposed architecture.
Availability: The inherent design of blockchain ensures that all transactions are replicated on full nodes, thereby guaranteeing availability. In case of system failures, recovery is possible using these replicas, effectively removing single points of failure. Moreover, the immutability and auditability of all historical transactions ensure that stakeholders are accountable for their actions.
§ RELATED WORK
The application of blockchain and VCs spans a variety of contexts. For example, <cit.> propose CredenceLedge, a permissioned blockchain system for decentralized verification of academic credentials. Similarly, Mukta et al. <cit.> introduce CredChain, a blockchain-based Self-Sovereign Identity (SSI) platform that allows secure creation, sharing, and verification of credentials. They also propose a flexible selective disclosure solution using redactable signatures, emphasizing the importance of privacy in credential sharing.
In the context of SBOM sharing, several initiatives and tools have been developed. Sigstore's Cosign[<https://docs.sigstore.dev/cosign/overview/>] enables signing and verification using a transparency log, but it does not support selective disclosure of SBOM data. Similarly, the CycloneDX SBOM standard and its exchange API[<https://github.com/CycloneDX/cyclonedx-bom-exchange-api>] provide a standardized method, but they do not inherently offer the transparency and accountability provided by a shared ledger, nor do they support selective disclosure. On the other hand, the SBOM360 Hub[<https://www.sbom360hub.com/>] allows the publication, sharing, and utilization of SBOMs in a private and collaborative manner, but it also lacks the inherent transparency of a shared ledger and does not support selective disclosure. Finally, RKVST[<https://www.rkvst.com/share-sboms/>] offers access control and verification through a continuously auditable shared ledger. While role-based access control is supported by the platform, the support for selective disclosure of specific SBOM data still lacks.
Our blockchain-empowered approach enhances security, privacy, and usability in SBOM sharing. Leveraging a decentralized trust model, we ensure robust data integrity and afford stakeholders granular control over SBOM data visibility through VCs. Aligned with the CISA and CESER report's recommendations <cit.>, our proposal heightens interoperability and encourages automation via a (potentially) unifying blockchain platform and smart contracts. Hence, we present an encompassing solution that effectively addresses SBOM sharing challenges, demonstrating superiority over existing frameworks and tools.
§ DISCUSSION AND CONCLUSION
§.§ From SBOM to AI Bill of Materials (AIBOM)
The increasing prevalence of AI systems necessitates the evolution of the traditional SBOM to AIBOM. AIBOMs extend the scope of SBOMs to incorporate AI-specific components and dependencies, providing a comprehensive view of the AI system. We outline the additional considerations for the AIBOM as follows:
Extended AI Components and Specifications: AIBOM requires additional fields to incorporate AI-specific components, such as models, as well as the associated training and testing data. Leveraging existing methods like datasheets for datasets <cit.> and model cards for model reporting <cit.> can provide information about the data and models, enhancing transparency and traceability in AI systems. Furthermore, the AIBOM can incorporate use restrictions (e.g., via Responsible AI Licenses <cit.>) that are beyond the traditional scope of open-source and creative common licenses in SBOMs.
These restrictions may extend beyond mere legal obligations to more comprehensive ethical and societal considerations.
Examples include limitations on deploying AI models for military activities or surveillance applications, stipulations against the provision of medical advice, prohibitions on the misrepresentation of an AI entity as a human, or regulations preventing bias based on non-legally protected attributes (e.g., geographic location, height, cognitive traits etc).
By integrating such comprehensive restrictions, the AIBOM serves as a conduit for communicating the intended usage and the scope of responsible AI practices associated with a given AI system.
At last, much like SBOMs can accommodate vulnerability communications, AIBOMs can encompass potential risks and quality issues inherent to AI systems.
The presence of risks in an AIBOM doesn't indicate system faults, but rather reflects trade-offs in development. For instance, enhancing privacy could inadvertently raise fairness-related risks. Some risks might persist due to their minor impact or the need for context-specific risk assessment, which may be incomplete at release. Including these in the AIBOM gives a holistic system view, alerts stakeholders to potential issues, and supports informed decision-making.
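To make this concrete, the sketch below shows one possible shape of an AIBOM entry, written as a Python dictionary; the field names (model_card, dataset_sheets, use_restrictions, known_risks) are illustrative assumptions rather than fields of any existing SBOM or AIBOM standard.

```python
# A hypothetical AIBOM entry, extending an SBOM-style component record with
# AI-specific fields. All field names are illustrative assumptions and do not
# correspond to any particular SBOM/AIBOM specification.
aibom_entry = {
    "component": "sentiment-classifier",
    "version": "2.3.0",
    "supplier": "ExampleVendor",
    "license": "OpenRAIL-M",                          # a responsible-AI license
    "model_card": "https://example.com/model-card",   # model reporting
    "dataset_sheets": [                                # datasheets for datasets
        {"name": "reviews-corpus", "version": "1.1",
         "sheet": "https://example.com/datasheet"},
    ],
    "use_restrictions": [                              # beyond legal obligations
        "no-surveillance",
        "no-medical-advice",
        "no-human-impersonation",
    ],
    "known_risks": [                                   # trade-offs, not faults
        {"type": "fairness",
         "note": "residual bias risk for non-legally-protected attributes"},
    ],
    "dependencies": [                                  # model and service dependencies
        "tokenizer-lib>=0.9",
        "base-embedding-model==4.2",
    ],
}
```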
Implications for Foundation Models (e.g., LLMs): In the era of foundation models such as Large Language Models (LLMs), managing model and data dependencies becomes intricate. LLMs often connect various smaller models <cit.> and services (e.g., ChatGPT Plugins), each with their own dependencies and potential risks.
For instance, the input of one model may originate from the output of another <cit.>, leading to potential risks such as error propagation and bias amplification.
That is, a provider offering an LLM-based application or an enterprise building one internally would need to manage not only the dependencies of the LLM itself but also those of the integrated models and services.
AIBOM can provide a mechanism for tracking these dependencies and risks, enhancing the security and reliability of LLM-based applications. It offers insights into the functioning of the AI system and aids in identifying potential points of failure or vulnerability.
While AIBOMs confer numerous advantages, they also bring potential security and trustworthiness concerns to the fore, such as the threat of data tampering. Additionally, AIBOMs, similar to SBOMs, are typically static in nature, which raises important questions around the dynamic runtime governance of AI systems. As such, concepts like “bill of lots" <cit.> that provide dynamic governance solutions merit further exploration.
§.§ Conclusion and Future Work
This paper introduces an innovative blockchain-empowered solution for SBOM sharing by leveraging VCs. Our approach offers an enhancement over traditional methods, delivering a secure and adaptable mechanism for SBOM dissemination. The decentralized attribute of the blockchain fortifies the integrity of SBOM sharing, while its immutable characteristic ensures data integrity. The distributed ledger technology of the blockchain, in tandem with VCs which facilitate selective disclosure (and ZKPs), grants software vendors control over their SBOM data visibility.
Our proposed solution aligns with the recommendations of the CISA report <cit.>, fostering interoperability and automation through the use of blockchain technology and smart contracts. Additionally, we have extended the traditional SBOM concept to AIBOM, incorporating AI-specific components and considerations. The integration of our perspectives on AIBOMs, especially in the era of foundation models, offers a crucial trajectory for future exploration.
|
http://arxiv.org/abs/2307.01773v1
|
20230704152529
|
Focus-style proofs for the two-way alternation-free $μ$-calculus
|
[
"Jan Rooduijn",
"Yde Venema"
] |
cs.LO
|
[
"cs.LO",
"math.LO"
] |
J.M.W. Rooduijn and Y. Venema
ILLC, University of Amsterdam, The Netherlands
Focus-style proofs for the two-way alternation-free μ-calculus
Jan Rooduijn (The research of this author has been made possible by a grant from the Dutch Research Council NWO, project number 617.001.857.) and Yde Venema
August 1, 2023
We introduce a cyclic proof system for the two-way alternation-free modal μ-calculus. The system manipulates one-sided Gentzen sequents and locally deals with the backwards modalities by allowing analytic applications of the cut rule. The global effect of backwards modalities on traces is handled by making the semantics relative to a specific strategy of the opponent in the evaluation game. This allows us to augment sequents by so-called trace atoms, describing traces that the proponent can construct against the opponent's strategy. The idea for trace atoms comes from Vardi's reduction of alternating two-way automata to deterministic one-way automata. Using the multi-focus annotations introduced earlier by Marti and Venema, we turn this trace-based system into a path-based system. We prove that our system is sound for all sequents and complete for sequents not containing trace atoms.
§ INTRODUCTION
The modal μ-calculus, introduced in its present form by Kozen <cit.>, is an extension of modal logic by least and greatest fixed point operators. It retains many of the desirable properties of modal logic, such as bisimulation invariance, and relatively low complexity of the model-checking and satisfiability problems. Nevertheless, the modal μ-calculus achieves a great gain in expressive power, as the fixed point operators can be used to capture a form of recursive reasoning. This is illustrated by the fact that the modal μ-calculus embeds many well-known extensions of modal logic, such as Common Knowledge Logic, Linear Temporal Logic and Propositional Dynamic Logic.
A natural further extension is to add a converse modality ă for each modality a. The resulting logic, called two-way modal μ-calculus, can be viewed as being able to reason about the past. As such, it can interpret the past operator of Tense Logic, and moreover subsumes 𝖯𝖣𝖫 with converse. In this paper we are concerned with the proof theory of the two-way modal μ-calculus.
Developing good proof systems for the modal μ-calculus is notoriously difficult. In <cit.>, Kozen introduced a natural Hilbert-style axiomatisation, which was proven to be complete only more than a decade later by Walukiewicz <cit.>. Central to this proof is the use of tableau systems introduced by Niwiński and Walukiewicz in <cit.>. One perspective on these tableau systems is that they are cut-free Gentzen-style sequent systems allowing infinite branches. A proof in such a system, called a non-well-founded proof, is accepted whenever every infinite branch satisfies a certain progress condition. In case this progress condition is ω-regular (as it is in the case of the modal μ-calculus), automata-theoretic methods show that for every non-well-founded proof there is a regular proof, i.e. a proof tree containing only finitely many non-isomorphic subtrees. Since these kind of proofs can be naturally presented as finite trees with back edges, they are called cyclic proofs. As an alternative to non-well-founded proofs, one can use proof rules with infinitely many premisses. We will not take this route, but note that it has been applied to the two-way modal μ-calculus by Afshari, Jäger and Leigh in <cit.>.
In <cit.> Lange and Stirling, for the logics 𝖫𝖳𝖫 and 𝖢𝖳𝖫, annotate formulas in sequents with certain automata-theoretic information. This makes it possible to directly construct cyclic proof systems, without the detour through automata theory. This technique has been further developed by Jungteerapanich and Stirling <cit.> for the modal μ-calculus. Moreover, certain fragments of the modal μ-calculus, such as the alternation-free fragment <cit.> and modal logic with the master modality <cit.> have received the same treatment. Encoding automata-theoretic information in cyclic proofs, through annotating formulas, makes them more amenable to proof-theoretic applications, such as the extraction of interpolants from proofs <cit.>.
The logic at hand, the two-way modal μ-calculus, poses additional difficulties. Already without fixed point operators, backwards modalities are known to require more expressivity than offered by a cut-free Gentzen system <cit.>. A common solution is to add more structure to sequents, as e.g. the nested sequents of Kashima <cit.>. This approach, however, does not combine well with cyclic proofs, as the number of possible sequents in a given proof becomes unbounded. We therefore opt for the alternative approach of still using ordinary sequents, but allowing analytic applications of the cut rule (see <cit.> for more on the history of this approach). The combination of analytic cuts and cyclic proofs has already been shown to work well in the case of Common Knowledge Logic <cit.>. Choosing analytic cuts over sequents with extended structure has recently also been gaining interest in the proof theory of logics without fixed point operators <cit.>.
Although allowing analytic cuts handles the backwards modalities on a local level, further issues arise on a global level in the combination with non-well-founded branches. The main challenge is that the progress condition should not just hold on infinite branches, but also on paths that can be constructed by moving both up and down a proof tree. Our solution takes inspiration from Vardi's reduction of alternating two-way automata to deterministic one-way automata <cit.>. Roughly, the idea is to view these paths simply as upwards paths, only interrupted by several detours, each returning to the same state as where it departed. One of the main insights of the present research is that such detours have a natural interpretation in terms of the game semantics of the modal μ-calculus. We exploit this by extending the syntax with so-called trace atoms, whose semantics corresponds with this interpretation. Our sequents will then be one-sided Gentzen sequents containing annotated formulas, trace atoms, and negations of trace atoms.
For the sake of simplicity we will restrict ourselves to the alternation-free fragment of the modal μ-calculus. This roughly means that we will allow no entanglement of least and greatest fixed point operators. In this setting it suffices to annotate formulas with just a single bit of information, distinguishing whether the formula is in focus <cit.>. This is a great simplification compared to the full language, where annotations need to be strings and a further global annotation, called the control, is often used <cit.>. Despite admitting simple annotations, the trace structure of the alternation-free modal μ-calculus remains intricate. This is mainly caused by the fact that disjunctions may still appear in the scope of greatest fixed point operators, causing traces to split.
While this paper was under review, the preprint <cit.> by Enqvist et al. appeared, in which a proof system is presented for the two-way modal μ-calculus (with alternation). Like our system, their system is cyclic. Moreover, they also extend the syntax in order to apply the techniques from Vardi in a proof-theoretical setting. However, their extension, which uses so-called ordinal variables, is substantially different from ours, which uses trace atoms. It would be interesting to see whether the two approaches are intertranslatable.
In Section 2 we define the two-way alternation-free modal μ-calculus. Section 3 is devoted to introducing the proof system, after which in Section 4 we show that proofs correspond to winning strategies in a certain parity game. In Section 5 we prove soundness and completeness. The concluding Section 6 contains a short summary and some ideas for further research.
§ THE (ALTERNATION-FREE) TWO-WAY MODAL Μ-CALCULUS
For the rest of this paper we fix the countably infinite sets 𝖯 of propositional variables and 𝖣 of actions. Since we want our modal logic to be two-way, we define an involution operation ·̆: 𝖣→𝖣 such that for every a ∈𝖣 it holds that ă≠ a and ă̆ = a.
We work in negation normal form, where the language ℒ_2μ of the two-way modal μ-calculus is generated by the following grammar:
φ ::= p | p̄ | φ ∨ ψ | φ ∧ ψ | ⟨a⟩φ | [a]φ | μ x φ | ν x φ
where p, x ∈ 𝖯, a ∈ 𝖣 and in the formation of η x φ (η ∈ {μ, ν}) the formula x̄ does not occur in φ. The language ℒ_2 μ expresses ⊤ and ⊥, e.g. as ν x . x and μ x . x. For the reader familiar with the ordinary modal μ-calculus, note that the only distinctive feature of ℒ_2 μ is the assumed involution operator on 𝖣.
We use standard terminology for the binding of variables by a fixpoint operator η. In particular, we write FV(φ) for the set of variables x ∈𝖯 that occur freely in φ and BV(φ) for the set of those that are bound by some fixpoint operator. Note that for every x occurring in φ, we have x ∈ FV(φ). For technical convenience, we assume that each formula φ is tidy, i.e. that FV(φ) ∩ BV(φ) = ∅. The unfolding of a formula ψ = η x φ is the formula φ[ψ/x], obtained by substituting every free occurrence of x in φ by ψ. No free variables of ψ are captured by this procedure, because FV(ψ) ∩ BV(φ) ⊆ FV(φ) ∩ BV(φ) = ∅. The closure of a formula ξ∈ℒ_2 μ is the least set 𝖢𝗅𝗈𝗌(ξ) ⊆ℒ_2 μ such that ξ∈𝖢𝗅𝗈𝗌(ξ) and:
* φ ∘ ψ ∈ 𝖢𝗅𝗈𝗌(ξ) implies φ, ψ ∈ 𝖢𝗅𝗈𝗌(ξ) for each ∘ ∈ {∨, ∧};
* ♡φ ∈ 𝖢𝗅𝗈𝗌(ξ) implies φ ∈ 𝖢𝗅𝗈𝗌(ξ) for every ♡ ∈ {⟨ a ⟩, [a] | a ∈ 𝖣};
* η x φ∈𝖢𝗅𝗈𝗌(ξ) implies φ[η x φ /x] ∈𝖢𝗅𝗈𝗌(ξ) for every η∈{μ, ν}.
It is well known that 𝖢𝗅𝗈𝗌(ξ) is always finite and that all formulas in 𝖢𝗅𝗈𝗌(ξ) are tidy if ξ is so (see e.g. <cit.>).
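For readers who prefer an operational view, the following Python sketch computes 𝖢𝗅𝗈𝗌(ξ) over a small ad hoc representation of tidy formulas; the datatypes (Lit, Var, Bin, Mod, Fix) and the whole encoding are our own illustrative assumptions and not part of the development above.

```python
# A minimal sketch of the closure computation for tidy formulas.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:
    name: str
    neg: bool = False      # p or its negation (negation normal form)

@dataclass(frozen=True)
class Var:
    name: str              # a fixpoint variable

@dataclass(frozen=True)
class Bin:
    op: str                # 'or' / 'and'
    left: object
    right: object

@dataclass(frozen=True)
class Mod:
    op: str                # 'dia' / 'box'
    action: str
    sub: object

@dataclass(frozen=True)
class Fix:
    op: str                # 'mu' / 'nu'
    var: str
    sub: object

def subst(phi, x, psi):
    """Replace free occurrences of the variable x in phi by psi (phi assumed tidy)."""
    if isinstance(phi, Var):
        return psi if phi.name == x else phi
    if isinstance(phi, Lit):
        return phi
    if isinstance(phi, Bin):
        return Bin(phi.op, subst(phi.left, x, psi), subst(phi.right, x, psi))
    if isinstance(phi, Mod):
        return Mod(phi.op, phi.action, subst(phi.sub, x, psi))
    if isinstance(phi, Fix):   # tidiness guarantees no variable capture
        return phi if phi.var == x else Fix(phi.op, phi.var, subst(phi.sub, x, psi))
    raise TypeError(phi)

def closure(xi):
    """The least set containing xi and closed under the three clauses above."""
    todo, clos = [xi], set()
    while todo:
        phi = todo.pop()
        if phi in clos:
            continue
        clos.add(phi)
        if isinstance(phi, Bin):
            todo += [phi.left, phi.right]
        elif isinstance(phi, Mod):
            todo.append(phi.sub)
        elif isinstance(phi, Fix):
            todo.append(subst(phi.sub, phi.var, phi))   # the unfolding clause
    return clos
```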
Formulas of ℒ_2μ are interpreted in Kripke models 𝕊 = (S, (R_a)_a ∈𝖣, V), where S is a set of states, for each a ∈𝖣 we have an accessibility relation R_a ⊆ S × S, and V : 𝖯→𝒫(S) is a valuation function. We assume that each model is regular, i.e. that R_a is the converse relation of R_ă for every a ∈𝖣. Recall that the converse relation of a relation R consists of those (y, x) such that (x, y) ∈ R.
We set R_a[s] := {t ∈ S : sR_at} and let 𝕊[x ↦ X] be the model obtained from 𝕊 by replacing the valuation function V by V[x ↦ X], defined by setting V[x ↦ X](x) = X and V[x ↦ X](p) = V(p) for every p ≠ x. The meaning ξ^𝕊 ⊆ S of a formula ξ ∈ ℒ_2μ in 𝕊 is defined by induction on the complexity of ξ:
p^𝕊 := V(p)        p̄^𝕊 := S ∖ V(p)
(φ ∨ ψ)^𝕊 := φ^𝕊 ∪ ψ^𝕊        (φ ∧ ψ)^𝕊 := φ^𝕊 ∩ ψ^𝕊
(⟨a⟩φ)^𝕊 := {s ∈ S | R_a[s] ∩ φ^𝕊 ≠ ∅}        ([a]φ)^𝕊 := {s ∈ S | R_a[s] ⊆ φ^𝕊}
(μx φ)^𝕊 := ⋂{X ⊆ S | φ^𝕊[x ↦ X] ⊆ X}        (νx φ)^𝕊 := ⋃{X ⊆ S | X ⊆ φ^𝕊[x ↦ X]}
We will use the definable (see <cit.>) negation operator ξ ↦ ξ̄ on ℒ_2 μ, for which it holds that ξ̄^𝕊 = S ∖ ξ^𝕊.
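Under the same assumed encoding, the semantic clauses can be made concrete on a finite model by Knaster-Tarski iteration; the sketch below is only an illustration of the definition, not an implementation used in the paper.

```python
def meaning(phi, S, R, V, env=None):
    """States of the finite model (S, R, V) satisfying phi.  S is a set of states,
    R maps each action to a set of pairs (and is assumed regular), V maps each
    proposition letter to a set of states, and env holds the current approximations
    of fixpoint variables."""
    env = env or {}
    if isinstance(phi, Var):
        return env[phi.name]
    if isinstance(phi, Lit):
        return (S - V[phi.name]) if phi.neg else set(V[phi.name])
    if isinstance(phi, Bin):
        l, r = meaning(phi.left, S, R, V, env), meaning(phi.right, S, R, V, env)
        return l | r if phi.op == 'or' else l & r
    if isinstance(phi, Mod):
        sub = meaning(phi.sub, S, R, V, env)
        succ = lambda s: {t for (u, t) in R[phi.action] if u == s}
        if phi.op == 'dia':
            return {s for s in S if succ(s) & sub}
        return {s for s in S if succ(s) <= sub}
    if isinstance(phi, Fix):
        X = set() if phi.op == 'mu' else set(S)   # iterate towards the least / greatest fixpoint
        while True:
            Y = meaning(phi.sub, S, R, V, {**env, phi.var: X})
            if Y == X:
                return X
            X = Y
    raise TypeError(phi)
```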
In this paper we shall only work with an alternative, equivalent, definition of the semantics, given by the evaluation game ℰ(ξ, 𝕊). We refer the reader to the appendix below for the basic notions of (parity) games. The game ℰ(ξ, 𝕊) is played on the board 𝖢𝗅𝗈𝗌(ξ) × S, and its ownership function and admissible moves are given in the following table.
Position Owner Admissible moves
(p, s), s ∈ V(p) ∀ ∅
(p, s), s ∉ V(p) ∃ ∅
(φψ, s) ∃ {(φ, s), (ψ, s)}
(φψ, s) ∀ {(φ, s), (ψ, s)}
(⟨ a ⟩φ, s) ∃ {φ}× R_a[s]
([a]φ, s) ∀ {φ}× R_a[s]
(η x φ, s) - {(φ [η xφ / x], s)}
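The table translates directly into code. The sketch below, again under the assumed encoding and with the clause for negated literals added as the expected dual, returns the owner and the admissible moves of a position; literal positions have no moves, so their owner is stuck and loses.

```python
EXISTS, FORALL = 'E', 'A'   # the two players of the evaluation game

def game_position(phi, s, R, V):
    """Owner and admissible moves of the position (phi, s), following the table above."""
    if isinstance(phi, Lit):
        true_here = (s not in V[phi.name]) if phi.neg else (s in V[phi.name])
        return (FORALL if true_here else EXISTS), set()
    if isinstance(phi, Bin):
        owner = EXISTS if phi.op == 'or' else FORALL
        return owner, {(phi.left, s), (phi.right, s)}
    if isinstance(phi, Mod):
        owner = EXISTS if phi.op == 'dia' else FORALL
        return owner, {(phi.sub, t) for (u, t) in R[phi.action] if u == s}
    if isinstance(phi, Fix):
        # the unfolding move is forced, so the assignment of an owner is immaterial
        return EXISTS, {(subst(phi.sub, phi.var, phi), s)}
    raise TypeError(phi)
```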
The following proposition is standard in the literature on the modal μ-calculus. See <cit.> for a proof.
For every infinite ℰ(ξ, 𝕊)-match ℳ = (φ_n, s_n)_n ∈ω, there is a unique fixpoint formula η x χ which occurs infinitely often in ℳ and is a subformula of φ_n for cofinitely many n.
The winner of an infinite ℰ(ξ, 𝕊)-match is ∃ if in the previous proposition η = ν, and ∀ if η = μ. It is well known that ℰ(ξ, 𝕊) can be realised as a parity game by defining a suitable priority function on 𝖢𝗅𝗈𝗌(ξ) × S (we again refer the reader to <cit.> for a detailed proof of this fact). Because of this we may, by Theorem <ref> in Appendix <ref>, assume that winning strategies are optimal and positional. Finally, we state the known fact that the two approaches provide the same meaning to formulas. For every φ ∈ 𝖢𝗅𝗈𝗌(ξ): (φ, s) ∈ Win_∃(ℰ(ξ, 𝕊)) if and only if s ∈ φ^𝕊. If either side of the bi-implication holds, we say that φ is satisfied in 𝕊 at s and write 𝕊, s ⊩ φ.
In this paper we are concerned with a fragment of ℒ_2 μ containing only those formulas ξ which are alternation free, i.e. such that for every subformula η x φ of ξ it holds that no free occurrence of x in φ is in the scope of an η̄-operator in φ (where η̄ denotes the opposite fixed point operator of η). This fragment is called the alternation-free two-way modal μ-calculus and denoted by ℒ_2 μ^af. We close this section by stating some typical properties of the alternation-free fragment. For η ∈ {μ, ν} we use the term η-formula for a formula of the form η x φ.
Let ξ∈ℒ^af_2 μ be an alternation-free formula. Then:
* Every formula φ∈𝖢𝗅𝗈𝗌(ξ) is alternation free.
* The negation ξ is alternation free.
* An infinite ℰ(ξ, 𝕊)-match is won by ∃ precisely if it contains infinitely many ν-formulas, and by ∀ precisely if it contains infinitely many μ-formulas.
§ THE PROOF SYSTEM
We will call a set Σ of formulas negation-closed if for every ξ ∈ Σ it holds that ξ̄ ∈ Σ and 𝖢𝗅𝗈𝗌(ξ) ⊆ Σ. For the remainder of this paper we fix a finite and negation-closed set Σ of ℒ_2 μ^af-formulas. For reasons of technical convenience, we will assume that every formula is drawn from Σ. This does not restrict the scope of our results, as any formula is contained in some finite negation-closed set.
§.§ Sequents
§.§.§ Syntax
Inspired by <cit.>, we annotate formulas by a single bit of information.
An annotated formula is a formula with an annotation in {,̆}.
The letters b, c, d, … are used as variables ranging over the annotations $̆ and. An annotated formulaφ^bis said to be out of focus ifb = $̆, and in focus if b =. The focus annotations will keep track of so-called traces on paths through proofs. Roughly, a trace on a path is a sequence of formulas, such that the i-th formula occurs in the i-th sequent on the path, and the i + 1-th formula `comes from' the i-th formula in a way which we will define later. In Section <ref> we will construct a game in which the winning strategies of one player correspond precisely to the proofs in our proof system. The focus mechanism enables us to formulate this game as a parity game. This is essentially also the approach taken in <cit.>.
Where traces usually only moves upwards in a proof, the backwards modalities of our language will be enable them to go downwards as well. We will handle this in our proof system by further enriching our sequents with the following additional information.
For any two formulas φ, ψ, there is a trace atom φψ and a negated trace atom φψ.
The idea for trace atoms will become more clear later, but for now one can think of φψ as expressing that there is some kind of trace going from φ to ψ, and of φψ as its negation. Finally, our sequents are built from the above three entities.
A sequent is a finite set consisting of annotated formulas, trace atoms, and negated trace atoms.
Whenever we want to refer to general elements of a sequent Γ, without specifying whether we mean annotated formulas or (negated) trace atoms, we will use the capital letters A, B, C, ….
§.§.§ Semantics
We will now define the semantics of sequents. Unlike annotations, which do not affect the semantics but only serve as bookkeeping devices, the trace atoms have a well-defined interpretation. We will work with a refinement of the usual satisfaction relation that is defined with respect to a strategy for ∀ in the evaluation game. Most of the time, this strategy will be both optimal and positional (see Appendix <ref> for the precise definition of these terms). Because we will frequently need to mention such optimal positional strategies, we will refer to them by the abbreviation ops. We first define the interpretation of annotated formulas. Note that the focus annotations play no role in this definition.
Let 𝕊 be a model, let f be an ops for ∀ in ℰ@(⋀Σ, 𝕊) and let φ^b be an annotated formula. We write 𝕊, s ⊩_f φ^b if f is not winning for ∀ at (φ, s).
The following proposition, which is an immediate consequence of Theorem <ref> of the appendix, relates ⊩_f to the usual satisfaction relation ⊩.
𝕊, s ⊩φ iff for every ops f for ∀ in ℰ(⋀Σ, 𝕊): 𝕊, s ⊩_f φ^b.
The semantics of trace atoms is also given relative to an ops for ∀ in the game ℰ(⋀Σ, 𝕊) (in the following often abbreviated to ℰ).
Given an ops f for ∀ in ℰ, we say that φψ is satisfied in 𝕊 at s with respect to f (and write 𝕊, s ⊩_f φψ) if there is an f-guided match
(φ, s) = (φ_0, s_0) · (φ_1, s_1) ⋯ (φ_n, s_n) = (ψ, s) (n ≥ 0)
such that for no i < n the formula φ_i is a μ-formula. We say that 𝕊 satisfies φψ at s with respect to f (and write 𝕊, s ⊩_f φψ) iff 𝕊, s ⊮_f φψ.
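Operationally, satisfaction of a trace atom with respect to f amounts to a reachability check in the part of the evaluation game where ∀ moves according to f. The sketch below makes this reading explicit; owner, moves and f are assumed interfaces to the game, and the encoding of formulas is the illustrative one introduced earlier.

```python
def trace_atom_holds(phi, psi, s, owner, moves, f):
    """Sketch: search for an f-guided match from (phi, s) to (psi, s) in which no
    position before the last one carries a mu-formula.  owner(pos) and moves(pos)
    describe the evaluation game; f is a positional strategy for Forall."""
    start, goal = (phi, s), (psi, s)
    seen, frontier = set(), [start]
    while frontier:
        pos = frontier.pop()
        if pos in seen:
            continue
        seen.add(pos)
        if pos == goal:
            return True
        form, _ = pos
        if isinstance(form, Fix) and form.op == 'mu':
            continue          # a mu-formula may not occur strictly before the goal
        nxt = {f(pos)} if owner(pos) == FORALL else moves(pos)
        frontier.extend(nxt)
    return False
```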
The idea behind the satisfaction of a trace atom φψ at a state s is that ∃ can take the match from (φ, s) to (ψ, s) without passing through a μ-formula. This is good for the player ∃. For instance, if φψ and ψφ are satisfied at s with respect to f for some φ≠ψ, then f is necessarily losing for ∀ at the position (φ, s). We will later relate trace atoms to traces in infinitary proofs.
We interpret sequents disjunctively, that is: 𝕊, s ⊩_f Γ whenever 𝕊, s ⊩_f A for some A ∈Γ. The sequent Γ is said to be valid whenever 𝕊, s ⊩_f Γ for every model 𝕊, state s of 𝕊, and ops f for ∀ in ℰ.
There is another way in which one could interpret sequents, which corresponds to what one might call strong validity, and which the reader should note is different from our notion of validity. Spelling it out, we say that Γ is strongly valid if for every model 𝕊 and state s there is an A in Γ that such that for every ops f for ∀ in ℰ it holds that 𝕊, s ⊩_f A. While these two notions coincide for sequents containing only annotated formulas, the sequent given by {φψφ, φψψ} shows that they do not in general.
If Γ consists of only annotated formulas and Γ is valid, then Γ is strongly valid.
Suppose Γ is valid. We claim that 𝕊, s ⊩⋁Γ. Indeed, suppose that 𝕊, s ⊮⋁Γ. Then there is a positional strategy f_0 for ∀ in ℰ(⋁Γ, 𝕊) such that 𝕊, s ⊮_f ⋁Γ. From f_0 we can easily obtain a positional strategy f for ∀ in ℰ for which there is no φ∈Γ such that 𝕊, s ⊩_f φ. Hence 𝕊, s ⊮_f Γ. So Γ is not valid, a contradiction.
So now we have 𝕊, s ⊩⋁Γ, from which it follows that there is a φ∈Γ such that 𝕊, s ⊩φ, whence 𝕊, s ⊩_f φ for every positional strategy f for ∀ in ℰ. Hence Γ is strongly valid.
We finish this subsection by defining three operations on sequents that, respectively, extract the plain formulas underlying the annotated formulas in a sequent, take all annotated formulas out of focus, and put all annotated formulas into focus.
Γ^- := {χ | χ^b ∈ Γ for some annotation b},
Γ^ := {φψ|φψ∈Γ}∪{φψ|φψ∈Γ}∪{χ^|̆χ∈Γ^-},
Γ^ := {φψ|φψ∈Γ}∪{φψ|φψ∈Γ}∪{χ^|χ∈Γ^-}.
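Representing a sequent as a set of tagged tuples, these three operations admit a direct rendering; the tags 'fml', 'tr' and 'ntr', and the letters u/f standing in for the two focus annotations, are illustrative assumptions of the sketch.

```python
def formulas(gamma):
    """Gamma^-: the plain formulas underlying the annotated formulas in gamma."""
    return {item[1] for item in gamma if item[0] == 'fml'}

def unfocus(gamma):
    """Keep the (negated) trace atoms, take every annotated formula out of focus."""
    return {item for item in gamma if item[0] != 'fml'} | \
           {('fml', chi, 'u') for chi in formulas(gamma)}

def focus(gamma):
    """Keep the (negated) trace atoms, put every annotated formula into focus."""
    return {item for item in gamma if item[0] != 'fml'} | \
           {('fml', chi, 'f') for chi in formulas(gamma)}
```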
§.§ Proofs
In this subsection we give the rules of our proof system. Because the rule for modalities is quite involved, its details are given in a separate definition.
Let Γ be a sequent and let [a]φ^b be an annotated formula. The jump Γ^[a] φ^b of Γ with respect to [a] φ^b consists of:
1. the following annotated formulas:
   (a) φ^s([a]φ, Γ);
   (b) ψ^s(⟨ a ⟩ψ, Γ) for every ⟨ a ⟩ψ^c∈Γ;
   (c) [ă] χ^ for every χ^d∈Γ such that [ă]χ∈Σ;
2. the following (negated) trace atoms:
   (a) φ⟨ă⟩χ for every [a]φχ∈Γ such that ⟨ă⟩χ∈Σ;
   (b) ⟨ă⟩χφ for every χ[a]φ∈Γ such that ⟨ă⟩χ∈Σ;
   (c) ψ⟨ă⟩χ for every ⟨ a ⟩ψχ∈Γ such that ⟨ă⟩χ∈Σ;
   (d) ⟨ă⟩χψ for every χ⟨ a ⟩ψ∈Γ such that ⟨ă⟩χ∈Σ,
where s(ξ, Γ) is defined by:
s(ξ, Γ) = if ξ^∈Γ,
if θξ∈Γ for some θ^∈Γ,
otherwise.
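A partial rendering of the jump under the same assumed representation is given below. It covers clauses 1(a) and 1(b) together with the function s, whose second case we read as: ξ inherits focus from a focused formula θ that is linked to ξ by a negated trace atom in Γ. This reading, and the omission of clauses 1(c) and 2(a)-(d) (which follow the same pattern but involve the converse action ă), are assumptions of the sketch.

```python
def s(xi, gamma):
    """Focus annotation inherited by xi from the conclusion sequent gamma."""
    if ('fml', xi, 'f') in gamma:
        return 'f'
    if any(item[0] == 'ntr' and item[2] == xi and ('fml', item[1], 'f') in gamma
           for item in gamma):
        return 'f'
    return 'u'

def jump_formulas(gamma, a, phi):
    """Clauses 1(a) and 1(b): the successor formula of the principal box [a]phi and
    the successors of the diamonds over the same action, with their inherited focus."""
    delta = {('fml', phi, s(Mod('box', a, phi), gamma))}
    for item in gamma:
        if (item[0] == 'fml' and isinstance(item[1], Mod)
                and item[1].op == 'dia' and item[1].action == a):
            delta.add(('fml', item[1].sub, s(item[1], gamma)))
    return delta
```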
Before we go on to provide the rest of the proof system, we will give some intuition for the modal rule, by proving the lemma below. This lemma essentially expresses that the modal rule is sound. Since the annotations play no role in the soundness of an individual rule, we suppress the annotations in the proof below for the sake of readability. Intuition for the annotations in the modal rule, and in particular for the function s, is given later.
Given a model 𝕊, a state s of 𝕊, and an ops f for ∀ in ℰ such that 𝕊, s ⊮_f [a]φ^b, Γ, there is an a-successor t of s, such that 𝕊, t ⊮_f Γ^[a]φ^b.
Let t be the state chosen by f([a]φ, s). We claim that 𝕊, t ⊮_f Γ^[a] φ^b. To start with, since f is winning, we have 𝕊, t ⊮_f φ. Moreover, if ⟨ a ⟩ψ^c belongs to Γ, then 𝕊, s ⊮_f ⟨ a ⟩ψ and thus 𝕊, t ⊮_f ψ. Thirdly, if χ^d belongs to Γ and ăχ∈Σ, then 𝕊, s ⊮[ă] χ, whence by the optimality of f, we have 𝕊, t ⊮_f ăχ.
The above shows all conditions under item 1. For the conditions under item 2, suppose that ăχ∈Σ. We only show 2(d), because the others are similar. Suppose that χaψ∈Γ. Then 𝕊, s ⊮_f χaψ, whence 𝕊, s ⊩_f χaψ. That means that there is an f-guided ℰ-match
(χ, s) = (φ_0, s_0) · (φ_1, s_1) ⋯ (φ_n, s_n) = (aψ, s) (n ≥ 0)
such that none of the φ_i's is a μ-formula. But then the f-guided ℰ-match
(ăχ, t) · (φ_0, s_0) ⋯ (φ_n, s_n) · (ψ, t)
witnesses that 𝕊, t ⊮_f ăχψ, as required.
The rules of the system 𝖥𝗈𝖼𝗎𝗌^2 are given in Figure <ref>. In each rule, the annotated formulas occurring in the set Γ are called side formulas. Moreover, the rules in {𝖱_, 𝖱_, 𝖱_μ, 𝖱_ν, 𝖱_a} have precisely one principal formula, which by definition is the annotated formula appearing to the left of Γ in the conclusion. Note that, due to the fact that sequents are taken to be sets, an annotated formula may at the same time be both a principal formula and a side formula.
We will now define the relation of immediate ancestry between formulas in the conclusion and formulas in the premisses of some arbitrary rule application. For any side formula in the conclusion of some rule, we let its immediate ancestors be the corresponding side formulas in the premisses. For every rule except 𝖱_[a], if some formula in the conclusion is a principal formula, its immediate ancestors are the annotated formulas occurring to the left of Γ in the premisses. Finally, for the modal rule 𝖱_[a], we stipulate that φ^s([a]φ, Γ) is an immediate ancestor of the principal formula [a]φ^, and that each ψ^s(⟨ a ⟩ψ, Γ) contained in Γ^[a] φ^ due to clause 1(b) of Definition <ref> is an immediate ancestor of ⟨ a ⟩ψ^∈Γ.
As mentioned before, the purpose of the focus annotations is to keep track of traces of formulas on branches. Usually, a trace is a sequence of formulas (φ_n)_n < ω such that each φ_k is an immediate ancestor of φ_k+1. The idea is then that whenever an infinite branch has cofinitely many sequents with a formula in focus, this branch contains a trace on which infinitely many formulas are ν-formulas. Disregarding the backwards modalities for now, this can be seen as follows. As long as the focus rule is not applied, any focussed formula is an immediate ancestor of some earlier focussed formula. Since the principal formula of 𝖱_μ loses focus, while the principal formula of 𝖱_ν preserves focus, a straightforward application of Kőnig's Lemma shows that every infinite branch contains a trace with infinitely many ν-formulas. We refer the reader to <cit.> for more details.
Our setting is slightly more complicated, because the function s in Definition <ref> additionally allows the focus to transfer along negated trace atoms, rather than just from a formula to one of its immediate ancestors. This is inspired by <cit.>, as are the conditions in the second part of Definition <ref>. The main idea is that, because of the backwards modalities, traces may move not only up, but also down a proof tree. To get a grip on these more complex traces, we cut them up in segments consisting of upward paths, which are the same as ordinary traces, and loops, which are captured by the negated trace atoms. This intuitive idea will become explicit in the proof of completeness in Section <ref>.
We are now ready to define a notion of infinitary proofs in 𝖥𝗈𝖼𝗎𝗌^2.
A 𝖥𝗈𝖼𝗎𝗌_∞^2-proof is a (possibly infinite) derivation in 𝖥𝗈𝖼𝗎𝗌^2 with:
* All leaves are axioms.
* On every infinite branch cofinitely many sequents have a formula in focus.
* Every infinite branch has infinitely many applications of 𝖱_[a].
As mentioned above, conditions 2 and 3 are meant to ensure that every infinite trace contains infinitely many ν-formulas. We will use this in Section <ref> to show that infinitary proofs are sound. The key idea is to relate the traces in a proof to matches in a purported countermodel of its conclusion.
We leave it to the reader to verify that each rule, apart from the modal rule, is truth-preserving with respect to a given model 𝕊, state s of 𝕊, and ops f for Refuter in ℰ(⋀Σ, 𝕊). Since Lemma <ref> already showed the soundness of the modal rule, we obtain:
Well-founded 𝖥𝗈𝖼𝗎𝗌_∞^2-proofs are sound.
We close this section with two examples of 𝖥𝗈𝖼𝗎𝗌^2_∞-proofs. The first example demonstrates 𝖼𝗎𝗍 and item 1(c) of Definition <ref>. The second example demonstrates trace atoms.
Define the following two formulas:
φ := μ x (⟨ă⟩ x ∨ p), ψ := ν y ([a] y ∧ φ).
The formula φ expresses `there is a backwards a-path to some state where p holds'. The formula ψ expresses `φ holds at every state reachable by a forwards a-path'. As our context Σ we take least negation-closed set containing φ and ψ:
{φ, ăφ p, ăφ, p, ψ, aψφ, aψ, φ, ăφp, p, ăφ, ψ, aψφ, aψ}.
The implication p →ψ is valid, and below we give a 𝖥𝗈𝖼𝗎𝗌^2_∞-proof. As this particular proof does not rely on trace atoms, we omit them for readability.
𝖠𝗑1p^, ψ^, ăφ^,̆ p^$̆𝖱_p^, ψ^, ăφ p^$̆𝖱_μp^, ψ^, φ^$̆πψ^, ăφ^$̆𝖱_[a]p^, aψ^, φ^$̆𝖠𝗑1p^, φ^, φ^$̆𝖱_p^, aψφ^, φ^$̆𝖱_νp^, ψ^, φ^$̆𝖼𝗎𝗍p^, ψ^
In the above proof, the proofπis given by
𝖠𝗑1φ^,̆φ^$̆𝖱_ă[a]ψ^, ăφ^,̆ăφ^,̆ p^$̆𝖱_[a]ψ^, ăφ^,̆ăφ p^$̆𝖱_μ[a]ψ^, ăφ^,̆φ^$̆⋮ψ^, ăφ^$̆𝖱_a[a]ψ^, ăφ^,̆φ^$̆𝖼𝗎𝗍[a]ψ^, ăφ^$̆𝖠𝗑1φ^,̆φ^$̆𝖱_ăăφ^,̆ p^,̆ăφ^$̆𝖱_ăφ p^,̆ăφ^$̆𝖱_μφ^, ăφ^$̆𝖱_[a]ψφ^, ăφ^$̆𝖱_νψ^, ăφ^$̆
where the vertical dots indicate that the proof continues by repeating what happens at the root ofπ. The resulting proof ofp^, ψ^has a single infinite branch, which can easily be seen to satisfy the conditions of Definition <ref>.
Define φ := ν x ⟨a⟩⟨ă⟩ x, i.e. φ expresses that there is an infinite path of alternating a and ă transitions. Clearly this holds at every state with an a-successor. Hence the implication ⟨a⟩ p → φ is valid. As context Σ we consider the least negation-closed set containing both ⟨a⟩ p and φ, i.e.,
{a p, p, φ, aăφ, ăφ, [a] p, p, φ, aăφ, ăφ}.
The following is a 𝖥𝗈𝖼𝗎𝗌^2_∞-proof of a p →φ.
𝖠𝗑2p^, ăφ^, ăφăφ, ăφăφ𝖱_aap^, aăφ^, φaăφ, aăφφ𝖱_νap^, φ^
Note that it is also possible to use 𝖠𝗑3 instead of 𝖠𝗑2 in the above proof.
§ THE PROOF SEARCH GAME
We will define a proof search game 𝒢(Σ) for the proof system 𝖥𝗈𝖼𝗎𝗌_∞^2 in the standard way. First, we require a slightly more formal definition of the notion of a rule instance.
A rule instance is a triple (Γ, 𝗋, ⟨Δ_1, …, Δ_n ⟩) such that
Δ_1 ⋯Δ_n𝗋Γ
is a valid rule application in 𝖥𝗈𝖼𝗎𝗌^2.
The set of positions of 𝒢(Σ) is 𝖲𝖾𝗊_Σ ∪ 𝖨𝗇𝗌𝗍_Σ, where 𝖲𝖾𝗊_Σ is the set of sequents and 𝖨𝗇𝗌𝗍_Σ is the set of valid rule instances (containing only formulas in Σ). Since Σ is finite, the game 𝒢(Σ) has only finitely many positions. The ownership function and admissible moves of 𝒢(Σ) are as in the following table:
Position Owner Admissible moves
Γ∈𝖲𝖾𝗊_Σ Prover {i ∈𝖨𝗇𝗌𝗍_Σ|𝖼𝗈𝗇𝖼(i) = Γ}
(Γ, 𝗋, ⟨Δ_1, …, Δ_n ⟩) ∈𝖨𝗇𝗌𝗍_Σ Refuter {Δ_i | 1 ≤ i ≤ n}
In the above table, the expression 𝖼𝗈𝗇𝖼(i) stands for the conclusion (i.e. the first element of the triple) of the rule instance i. As usual, a finite match is lost by the player who got stuck. An infinite 𝒢(Σ)-match is won by Prover if and only if it has a final segment
Γ_0 · i_0 ·Γ_1 · i_1 ⋯
on which each Γ_k has at least one formula in focus and the instance i_k is an application of 𝖱_[a] for infinitely many k. The two main observations about 𝒢(Σ) that we will use are the following:
* A 𝖥𝗈𝖼𝗎𝗌_∞^2-proof of Γ is the same as a winning strategy for Prover in 𝒢(Σ)@Γ.
* 𝒢(Σ) is a parity game, whence positionally determined.
The first observation is immediate when viewing a winning strategy as a subtree of the full game tree. To make the second observation more explicit, we give the parity function Ω for 𝒢(Σ). On 𝖲𝖾𝗊_Σ, we simply set Ω(Γ) := 0 for every Γ ∈ 𝖲𝖾𝗊_Σ. On 𝖨𝗇𝗌𝗍_Σ, we define:
Ω(Γ, 𝗋, ⟨Δ_1, …, Δ_n ⟩) := 3 if Γ has no formula in focus,
2 if Γ has a formula in focus and 𝗋 = 𝖱_[a],
1 if Γ has a formula in focus and 𝗋≠𝖱_[a].
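The priority function admits a one-to-one transcription; in the sketch below positions are assumed to be dictionaries carrying a 'kind' field, sequents are the tagged-tuple sets used earlier, and 'R_box' stands for the modal rule 𝖱_[a].

```python
def omega(position):
    """Priority map of the proof search game: sequents get 0, rule instances 1-3."""
    if position['kind'] == 'sequent':
        return 0
    gamma, rule = position['conclusion'], position['rule']
    in_focus = any(item[0] == 'fml' and item[2] == 'f' for item in gamma)
    if not in_focus:
        return 3
    return 2 if rule == 'R_box' else 1
```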
As a result we immediately obtain a method to reduce general non-well-founded proofs to cyclic proofs. Indeed, if Prover has a winning strategy, she also has a positional winning strategy, which clearly corresponds to a regular 𝖥𝗈𝖼𝗎𝗌_∞^2-proof (that is, a proof containing only finitely many non-isomorphic subtrees).
§ SOUNDNESS AND COMPLETENESS
In this section we will prove the soundness and completeness of the system 𝖥𝗈𝖼𝗎𝗌_∞^2. More specifically, for soundness we will show that if Γ is invalid, then Refuter has a winning strategy in 𝒢(Σ)@Γ. Our completeness result is slightly less wide in scope, showing only that if Refuter has a winning strategy in 𝒢(Σ)@Γ, then Γ^- is invalid.
§.§ Soundness
For soundness, we assume an ops f for ∀ in ℰ := ℰ(⋀Σ, 𝕊) for some 𝕊 and s such that 𝕊, s ⊮_f Γ. The goal is to construct from f a strategy T_f for Refuter in 𝒢 := 𝒢(Σ). The key idea is to assign to each position p reached in 𝒢 a state s such that whenever p = Δ ∈ 𝖲𝖾𝗊_Σ it holds that 𝕊, s ⊮_f Δ. For p ∈ 𝖨𝗇𝗌𝗍_Σ, the choice of T_f is then based on f(φ, s) where φ is a formula determined by the rule instance p. The existence of such an s implies that p cannot be an axiom and thus that Refuter never gets stuck. For infinite matches, the proof works by showing that a T_f-guided 𝒢@Γ-match lost by Refuter induces an f-guided ℰ@φ-match lost by ∀. As mentioned above, the key idea here is to relate an f-guided ℰ@φ-match to a trace through the T_f-guided 𝒢@Γ-match. If the 𝒢@Γ-match is losing for Refuter, it must contain a trace with infinitely many ν-formulas, which gives us an ℰ@φ-match lost by ∀. A novel challenge here is that not all steps in a trace necessarily go from a formula to one of its immediate ancestors, but may instead transfer along a negated trace atom. When this happens, say from φ_n to φ_n+1, it holds for Δ as above that both φ_n^ and φ_n φ_n+1 belong to Δ. Since, by the above, it holds that 𝕊, s ⊮_f Δ, we use the fact that 𝕊, s ⊩_f φ_n φ_n+1 to take the ℰ@φ-match from (φ_n, s) to (φ_n+1, s). In the end, we obtain:
If Γ is the conclusion of a 𝖥𝗈𝖼𝗎𝗌_∞^2-proof, then Γ is valid.
§.§ Completeness
For completeness we conversely show that from a winning strategy T for Refuter in 𝒢@Γ, we can construct a model 𝕊^T and a positional strategy f_T for ∀ in ℰ(⋀Σ, 𝕊^T) such that 𝕊^T falsifies Γ^- with respect to f_T. The strategy f_T we construct will not necessarily be optimal, but by Theorem <ref> of Appendix <ref> it follows that there must also be an ops f such that 𝕊^T ⊮_f Γ^-. We will view T as a tree, and restrict attention to a certain subtree. We first need to define two relevant properties of rule applications.
A rule application is cumulative if all of the premisses are supersets of the conclusion. A rule application is productive if all of the premisses are distinct from the conclusion.
Without renaming T, we restrict T to its subtree where Prover adheres to the following (non-deterministic) strategy:
* Exhaustively apply productive instances of 𝖼𝗎𝗍 and 𝗍𝖼.
* If applicable, apply the focus rule.
* Exhaustively take applications of 𝖱_, 𝖱_, 𝖱_μ, 𝖱_ν, 𝗍𝗋𝖺𝗇𝗌 that are both cumulative and productive.
* If applicable, apply an axiom.
* If applicable, apply a modal rule and loop back to stage (1).
It is not hard to see that each of the above phases terminates. More precisely, phases (2), (4) and (5) either terminate immediately or after applying a single rule. By the productivity requirement and the finiteness of Σ, phases (1) and (3) must terminate after a finite number of rule applications as well. Note also that non-cumulative rule applications can only happen in phases (2) or (5).
We will now define the model 𝕊^T. The set S^T of states consists of maximal paths in T not containing a modal rule. We write Γ(ρ) for ⋃{Γ : Γ occurs in ρ}. Note that, since the only possibly non-cumulative rule application in ρ is the focus rule, Γ(ρ)^ = 𝗅𝖺𝗌𝗍(ρ)^ for every state ρ of 𝕊^T. Moreover, we write ρ_1 →_a ρ_2 if ρ_2 is directly above ρ_1 in T, separated only by an application of 𝖱_[a] (we assume that trees grow upwards). We write → for the union ⋃{→_a : a ∈ 𝖣}. Clearly, under the relation → the states of 𝕊^T form a forest (not necessarily a tree!). We write ρ ≤ τ if τ is a descendant of ρ in this forest, i.e. ≤ is the reflexive-transitive closure of →. The relations R_a^T of 𝕊^T are defined as follows:
ρ_1 R_a^T ρ_2 if and only if ρ_1 →_a ρ_2 or ρ_2 →_ă ρ_1.
Note that 𝕊^T is clearly regular. We define the valuation V^T : S^T → 𝒫(𝖯) by
V^T(ρ) := {p : p̄ ∈ Γ(ρ)^-}.
The restriction on T, together with the fact that it is winning for Refuter, guarantees that each Γ(ρ) satisfies certain saturation properties, which are spelled out in the following lemma. We will later use these saturation conditions to construct our positional strategy f_T for ∀ in ℰ(⋀Σ, 𝕊^T) and to show that 𝕊^T falsifies Γ with respect to f_T.
For every state ρ of 𝕊^T, the set Γ(ρ) is saturated. That is, it satisfies all of the following conditions:
* For no φ it holds that φ, φ∈Γ(ρ)^-.
* For all φ it holds that φ^∈̆Γ(ρ) if and only if φ^∉̆Γ(ρ)
* For all φ it holds that φψ∈Γ(ρ) if and only if φψ∉Γ(ρ).
* For no φ it holds that φφ∈Γ(ρ).
* If ψ_1 ψ_2 ∈Γ(ρ)^-, then for both i: ψ_1 ψ_2 ψ_i ∈Γ(ρ) and ψ_i ∈Γ(ρ)^-.
* If ψ_1 ψ_2 ∈Γ(ρ)^-, then for some i: ψ_1 ψ_2 ψ_i ∈Γ(ρ) and ψ_i ∈Γ(ρ)^-.
* If μ x φ∈Γ(ρ)^-, then φ[μ xφ/x] ∈Γ(ρ)^-.
* If ν x φ∈Γ(ρ)^-, then ν x φφ[ν x φ/x] ∈Γ(ρ) and φ[ν x φ/x] ∈Γ(ρ)^-.
* If ν x φ∈Γ(ρ)^-, then φ[ν x φ/ x] ν x φ∈Γ(ρ).
* If φψ, ψχ∈Γ(ρ), then φχ∈Γ(ρ).
Now let ρ_0 be a state of 𝕊^T containing the root Γ and let φ_0 be some formula such that φ_0 ∈ Γ^-. We wish to show that φ_0 is not satisfied at ρ_0 in 𝕊^T. To this end, we will construct a winning strategy f_T for ∀ in the game ℰ := ℰ(⋀Σ, 𝕊^T) initialised at (φ_0, ρ_0). The strategy f_T is defined as follows:
* At (ψ_1 ψ_2, ρ), pick a conjunct ψ_i ∈Γ(ρ)^- such that ψ_1 ψ_2 ψ_i ∈Γ(ρ).
* At ([a]φ, ρ), choose (φ, τ) for some τ such that ρτ by virtue of some application of 𝖱_[a] with [a] φ^ principal for some b ∈{,̆}.
Before we show that f_T is winning for ∀, we must first argue that it is well defined. By saturation, for every formula ψ_1 ψ_2 contained in Γ(ρ)^-, there is a ψ_i ∈ Γ(ρ)^- with ψ_1 ψ_2 ψ_i ∈ Γ(ρ). Likewise, for every formula [a] φ^ ∈ Γ(ρ), there is a τ directly above ρ in T, separated only by an application of 𝖱_[a] with [a] φ^ principal. The following lemma therefore suffices. Its proof is by induction on the length of ℳ and heavily relies on the saturation properties of Lemma <ref>.
Let ℳ be an f_T-guided ℰ-match initialised at (φ_0, ρ_0). Then for any position (φ, ρ) occurring in ℳ it holds that φ∈Γ(ρ)^-. Moreover, if (φ, ρ) comes directly after a modal step and the focus rule is applied in ρ, then φ^∈Γ(ρ).
The following lemma is key to the completeness proof. It shows that if an f_T-guided ℰ@(φ_0, ρ_0)-match loops from some state ρ to itself, without passing through a μ-formula, then this information is already contained in ρ in the form of a negated trace atom. The proof goes by induction on the number of distinct states of S^T occurring in 𝒩. The base case, where only ρ is visited, can be shown by applying several instances of Lemma <ref>. For the inductive step, we crucially rely on the conditions 2(a) – 2(d) of Definition <ref> to relate the trace atoms in two states τ and τ' such that τ R^T_a τ'.
Let ρ∈ S^T. Suppose that an f_T-guided ℰ@(φ_0, ρ_0)-match ℳ has a segment 𝒩 of the form:
(φ, ρ) = (ψ_0, s_0) · (ψ_1, s_1) ⋯ (ψ_n, s_n) = (ψ, ρ) (n ≥ 0)
such that for no i < n the formula ψ_i is a μ-formula. Then φψ∈Γ(ρ).
With the above lemmata in place, we are ready to prove that ∀ wins every full f_T-guided ℰ@(φ_0, ρ_0)-match ℳ. If ℳ is finite, it is not hard to show that it must be ∃ who got stuck. If ℳ is infinite, the proof depends on whether ℳ visits some single state infinitely often. If it does, one can show that if ∃ would win the match ℳ, then ℳ would visit some state ρ with ν x φ, φ[ν x φ / x] φ ∈ Γ(ρ)^-, contradicting saturation. If, on the other hand, ℳ visits each state at most finitely often, the proof works by showing that a win for ∃ in ℳ would imply that T contains an infinite branch won by Prover, which is also a contradiction. In the end, we obtain the following proposition.
The strategy f_T is winning for ∀ in ℰ@(φ_0, ρ_0).
Since φ_0 was chosen arbitrarily from Γ^-, we find that 𝕊^T ⊮_f_T Γ^-. Hence, by Theorem <ref> of Appendix <ref>, we obtain completeness for the formulas in a sequent.
If Γ^- is valid, then Γ has a 𝖥𝗈𝖼𝗎𝗌^2_∞-proof.
§ CONCLUSION
We have constructed a non-well-founded proof system 𝖥𝗈𝖼𝗎𝗌^2_∞ for the two-way alternation-free modal μ-calculus ℒ^af_2μ. This system naturally reduces to a cyclic system when restricting to positional strategies in the proof search game.
Using the proof search game and the game semantics for the modal μ-calculus, we have shown that the system is sound for all sequents, and complete for those sequents not containing trace atoms. A natural first question for future research is to see if a full completeness result can be obtained. For this, a logic of trace atoms would have to be developed. One could for instance think of a rule like
φχ, Γψχ, Γ𝖱_φψχ, Γ
Following on this, we think it would be interesting to properly include trace atoms in the syntax by allowing the Boolean, modal and perhaps even the fixed point operators to apply to trace atoms. An example of a valid formula in this syntax is given by((φaψ) a(ψăφ)) →φ.
Another pressing question is whether our system could be used to prove interpolation, as has been done for the language without backwards modalities in <cit.>. To the best of our knowledge it is currently an open question whether ℒ^af_2μ has interpolation. At the same time, it is known that analytic applications of the cut rule do not necessarily interfere with the process of extracting interpolants from proofs <cit.>.
Finally, it would be interesting to see if our system can be extended to the full language ℒ_2μ. The main challenge would be to keep track of the most important fixed point variable being unfolded on a trace. Perhaps this could be done by employing an annotation system such as the one by Jungteerapanich and Stirling <cit.>, together with trace atoms that record the most important fixed point variable unfolded on a loop.
§.§.§ Acknowledgements
We thank Johannes Marti for insightful conversations at the outset of the present research. We also thank the anonymous reviewers for their helpful comments.
§ PARITY GAMES
A (two-player) game is a structure 𝒢 = (B_0, B_1, E, W) where E is a binary relation on B := B_0 + B_1, and W is a map B^ω→{0, 1}.
The set B is called the board of 𝒢, and its elements are called positions. Whether a position belongs to B_0 or B_1 determines which player owns that position. If a player Π ∈ {0, 1} owns a position q, it is their turn to play and the set of their admissible moves is given by the image E[q].
A match in 𝒢 = (B_0, B_1, E, W) (or simply a 𝒢-match) is a path ℳ through the graph (B, E). A match is said to be full if it is a maximal path.
Note that a full match ℳ is either finite, in which case E[𝗅𝖺𝗌𝗍(ℳ)] = ∅, or infinite. For a Π ∈ {0, 1}, we write Π̄ for the other player Π + 1 mod 2.
A full match ℳ in 𝒢 = (B_0, B_1, E, W) is won by player Π if either ℳ is finite and 𝗅𝖺𝗌𝗍(ℳ) ∈ B_Π, or ℳ is infinite and W(ℳ) = Π.
If a full match ℳ is finite, and 𝗅𝖺𝗌𝗍(ℳ) belongs to B_Π for Π ∈ {0, 1}, we say that the player Π got stuck. A partial match is a match which is not full.
In the context of a game 𝒢, we denote by PM_Π the set of partial 𝒢-matches ℳ such that 𝗅𝖺𝗌𝗍(ℳ) belongs to the player Π.
A strategy for Π in a game 𝒢 is a map f : PM_Π→ B. Moreover, a 𝒢-match ℳ is said to be f-guided if for any ℳ_0 ⊏ℳ with ℳ_0 ∈PM_Π it holds that ℳ_0 · f(ℳ_0) ⊑ℳ.
For a position q, the set PM_Π(q) contains all ℳ ∈ PM_Π such that 𝖿𝗂𝗋𝗌𝗍(ℳ) = q.
A strategy f for Π in 𝒢 is surviving at a position q if f(ℳ) is admissible for every ℳ∈PM_Π(q), and winning at q if in addition all full f-guided matches starting at q are won by Π. A position q is said to be winning for Π if Π has a strategy winning at q. We denote the set of all positions in 𝒢 that are winning for Π by Win_Π(𝒢).
We write 𝒢@q for the game 𝒢 initialised at the position q of 𝒢. A strategy f for Π is surviving (winning) in 𝒢@q if it is surviving (winning) in 𝒢 at q.
A strategy f is positional if it only depends on the last move, i.e. if f(ℳ) = f(ℳ') for all ℳ, ℳ' ∈PM_Π with 𝗅𝖺𝗌𝗍(ℳ) = 𝗅𝖺𝗌𝗍(ℳ').
We will often present a positional strategy for Π as a map f : B_Π → B.
A priority map on some board B is a map Ω : B → ω of finite range. A parity game is a game whose winning condition is given by W_Ω(ℳ) = max(Inf_Ω(ℳ)) mod 2, where Inf_Ω(ℳ) is the set of priorities Ω(q) of positions q occurring infinitely often in ℳ.
The following theorem captures the key property of parity games: they are positionally determined. In fact, each player Π has a positional strategy f_Π that is optimal, in the sense that f_Π is winning for Π in 𝒢@q for every q ∈ Win_Π(𝒢).
For any parity game 𝒢, there are positional strategies f_Π for each player Π∈{0, 1}, such that for every position q one of the f_Π is a winning strategy for Π in 𝒢@q.
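Since a match played according to positional strategies on a finite board is eventually periodic, its winner can be read off the repeating cycle alone; the following one-line sketch evaluates the parity condition in that case, with priority playing the role of Ω.

```python
def parity_winner(cycle, priority):
    """Winner (0 or 1) of an infinite match that eventually repeats `cycle` forever:
    the positions seen infinitely often are exactly those on the cycle."""
    return max(priority(q) for q in cycle) % 2
```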
§ PROOFS
Proof of Proposition <ref>. Our proof will go by contraposition, so suppose that some sequentΓis invalid. This means that there is a model𝕊with a statesand opsffor∀in the gameℰ := ℰ(⋀Σ, 𝕊), such that𝕊, s ⊮_f Γ. We will construct a (positional) winning strategyT_ffor Refuter in the game𝒢 := 𝒢(Σ)initialised atΓ.
Formally, this strategy is a functionT_f : PM_R(Γ) →𝖲𝖾𝗊_Σ. In addition, we will define a functions_f : PM(Γ) →𝕊, from partial𝒢-matches starting atΓto states of𝕊, such that𝕊, s_f(ℳ) ⊮_f 𝗅𝖺𝗌𝗍(ℳ)for everyT_f-guidedℳ∈PM_P(Γ), and𝕊, s_f(ℳ) ⊮_f T_f(ℳ)for everyT_f-guidedℳ∈PM_R(Γ).
We defineT_fands_fby induction on the length|ℳ|of a matchℳ∈PM(Γ). For the base case, i.e. where|ℳ| = 1, we haveℳ = Γ. Since in this caseℳ∈PM_P(Γ), we only have to defines_f(ℳ)and notT_f(ℳ). We sets_f(ℳ) := s.
Now suppose thatT_fands_fhave been defined for all matches up to lengthn, and that|ℳ| = n + 1. We assume thatℳisT_f-guided, for otherwise we may just assignT_f(ℳ)ands_f(ℳ)some garbage value.
Suppose first thatℳbelongs toPM_P(Γ). Writingℳ_≤ n∈PM_R(Γ)for the initial segment ofℳconsisting of the firstnmoves, we sets_f(ℳ) := s_f(ℳ_≤ n). SinceℳisT_f-guided, we have𝗅𝖺𝗌𝗍(ℳ) = T_f(ℳ_≤ n). Hence it holds by the induction hypothesis that𝕊, s_f(ℳ) ⊮_f 𝗅𝖺𝗌𝗍(ℳ), as required.
If ℳ belongs to PM_R(Γ), then 𝗅𝖺𝗌𝗍(ℳ) is a rule instance and we distinguish cases based on the rule 𝗋 of 𝗅𝖺𝗌𝗍(ℳ) ∈ 𝖨𝗇𝗌𝗍_Σ.
* 𝗋 is an axiom. This can never happen, because then ℳ(n) would have to be valid while we inductively know that s_f(ℳ_≤ n) refutes ℳ(n).
* 𝗋∈{𝖱_, 𝖱_μ, 𝖱_ν, 𝖥, 𝗍𝗋𝖺𝗇𝗌}. In these cases there is only one choice Δ∈𝖲𝖾𝗊_Σ for Refuter. We set T_f(ℳ) := Δ and s_f(ℳ) := s_f(ℳ_≤ n).
* 𝗋 = 𝖱_. We set s_f(ℳ) := s_f (ℳ_≤ n) and let T_f(ℳ) be the premiss corresponding to f(φψ, s_f (ℳ)), where φψ is the principal formula of 𝗅𝖺𝗌𝗍(ℳ).
* 𝗋 = 𝖱_[a]. In this case we let s_f(ℳ) be the state in f([a] φ, s_f(ℳ_≤ n)), where [a] φ is principal in 𝗅𝖺𝗌𝗍(ℳ). For T_f there is only a single choice, say Δ. We set T_f(ℳ) := Δ.
* 𝗋∈{𝖼𝗎𝗍, 𝗍𝖼}. First, we set s_f(ℳ) := s_f(ℳ_≤ n). To define T_f(ℳ), note that there are two premisses: A_1, Γ and A_2, Γ. Moreover, by the optimality of f, we have 𝕊, s_f(Γ) ⊩_f A_1 if and only if 𝕊, s_f (Γ) ⊮_f A_2. We let T_f(ℳ) be the unique A_i, Γ such that 𝕊 does not satisfy A_i at s_f (ℳ) with respect to f.
It is not hard to verify that in each case𝕊indeed falsifiesT_f(ℳ)ats_f(ℳ)with respect tof. Also note thats_f(ℳ)almost always equalss_f (ℳ_≤ n), with as only possible exception the case whereℳbelongs toPM_R(Γ)and the rule application of𝗅𝖺𝗌𝗍(ℳ)is modal.
We will now show thatT_fis indeed a winning strategy for Refuter in𝒢@Γ. To that end, suppose towards a contradiction that Refuter loses aT_f-guided𝒢@Γ-matchℳ. We already know that Refuter does not get stuck, as an axiom is never reached and all other rule instances have a non-zero number of premisses. Hence, the matchℳmust be infinite, and the rules of𝒢dictate that there will be a final segment𝒩 = Γ_0· i_0 ·Γ_1· i_1⋯ofℳon which every sequentΓ_nhas a formula in focus, and the rule instancei_nis modal for infinitely manyn. We use𝒦to denote the initial segment ofℳoccurring before𝒩, i.e. such thatℳ = 𝒦·𝒩. Without loss of generality we assume that|𝒦| > 0. By Kőnig's Lemma, there is a sequence of formulasφ_0, φ_1, …such that for everynit holds thatφ_n^∈Γ_nas well as at least one of following:
* φ_n + 1^∈Γ_n + 1 is an immediate ancestor of φ_n^∈Γ_n;
* i_n = 𝖱_[a] and Γ_n contains some φ_nξ such that φ_n + 1^∈Γ_n+1 is an immediate ancestor of some ξ^∈Γ_n with b ∈{,̆}.
As before, we write𝒩_≤ nfor the initial segment of𝒩up to the firstnmoves. Note thatT_f (𝒦·𝒩_≤ 2n) = Γ_nfor everyn ≥ 0. For convenience we will denote𝒦·𝒩_≤ 2nbyℳ_n. We will reach a contradiction by showing that𝕊, s_f(ℳ_0) ⊩_f φ_0, which contradicts the fact that𝕊, s_f (ℳ_0) ⊮_f T_f(ℳ_0) = Γ_0.
The crucial claim is that for everynthere is anf-guidedℰ-match starting at(φ_n, s_f(ℳ_n))and ending at(φ_n+1, s_f(ℳ_n+1)), without passing through aμ-unfolding. More precisely, we will show that there is anf-guidedℰ-match
(φ_n, s_f(ℳ_n)) = (ψ_0, s_0) ⋯ (ψ_m, s_m) = (φ_n+1, s_f(ℳ_n+1)) (m ≥ 0)
such that for noi < mthe formulaψ_iis aμ-formula. By pasting together these finite segments, it will then follow that the strategyfis not winning for∀inℰ@(φ_0, s_f(ℳ_0)), reaching the desired contradiction.
We will first show the above claim under the assumption thatφ_n+1^is an immediate ancestor ofφ_n^, andφ_n = φ_n+1. In this casei_nis not the modal rule, since the modal rule has no side formulas. Hences_f(ℳ_n) = s_f(ℳ_n+1)and thus(φ_n, s_f (ℳ_n)) = (φ_n + 1, s_f (ℳ_n+1)), by which the result holds vacuously.
Now suppose thatφ_n+1^is an immediate ancestor ofφ_n^andφ_n ≠φ_n+1. We will show, by a case distinction on the main connective ofφ_n, that the match proceeds to the desired position(φ_n+1, s_f(ℳ_n+1))after a single round.
* First note that φ_n cannot be atomic, for atomic formulas can only have immediate ancestors when they are side formulas.
* Suppose φ_n is of the form ψ_1 ψ_2. Then φ_n^ must be principal and we have φ_n+1 = ψ_i for some i ∈{1, 2}. We let ∃ simply choose the appropriate disjunct. Since the rule of i_n must be 𝖱_, we have s_f (ℳ_n) = s_f (ℳ_n+1) and thus reach the desired position in ℰ.
* Suppose φ_n is of the form ψ_1 ψ_2. Again we find that φ_n^ must be principal, the rule of i_n now being 𝖱_. By construction we have φ_n+1 = f(φ_n, s_f(ℳ_n)), hence the next position in ℰ again suffices.
* Suppose φ_n = ⟨ a ⟩ψ. Then the rule of i_n must be 𝖱_[a] and φ_n + 1 = ψ. By construction, we have that s_f (ℳ_n +1) is the state of f([a]χ, s_f(ℳ_n)), where [a]χ is the principal formula of the rule instance i_n. Since s_f(ℳ_n+1) is an a-successor of s_f(ℳ_n) in 𝕊, we can let ∃ choose (φ_n+1, s_f(ℳ_n+1)), as required.
* If φ_n = [a] χ, then the rule of i_n must be 𝖱_[a] and φ_n must be the principal formula of this rule instance. As s_f (ℳ_n +1) is the state of f([a]χ, s_f(ℳ_n)), the next position in ℰ will be (χ, s_f(ℳ_n+1)), as required.
* φ_n = μ x ψ is not possible, because any immediate ancestor of μ x ψ^ that is not a side formula, will be out of focus.
* Finally, suppose that φ_n = ν x ψ. We have that φ_n+1 = ψ[ν x ψ/x] and the rule of i_n is 𝖱_ν. Because s_f(ℳ_n+1) = s_f(ℳ_n), the required position is reached immediately.
Finally, suppose thatφ_n + 1^is not an immediate ancestor ofφ^. Then it must be the case thati_n = 𝖱_[a]andΓ_ncontains someφ_nξsuch thatφ_n + 1^is an immediate ancestor of someξ^∈Γ_n. By assumption𝕊, s_f(ℳ_n) ⊮_f Γ_n, and thus in particular𝕊, s_f(ℳ_n) ⊩_f φ_n ξ. Hence∃can take thef-guided match from(φ_n, s_f(ℳ_n))to(ξ, s_f(ℳ_n))without passing through aμ-unfolding. Sinceξ^has an immediate ancestor (namelyφ_n+1^), we find thatξmust be of the form⟨ a ⟩ψor of the form[a] χ, where[a] χis the principal formula ofi_n. In either case we can ensure that the next position after(ξ, s_f (ℳ_n))is(φ_n+1, s_f(ℳ_n+1))by using the same argument as above for the⟨ a ⟩and[a]cases, respectively.
Since the modal rule is applied infinitely often inℳ, the segments constructed above must infinitely often be nontrivial, i.e. of length> 1. Hence, we obtain an infinitef-guidedℰ@(φ_0, s_f(ℳ_0))-match won by∃, a contradiction.
Proof of Lemma <ref>. Denote then-th position ofℳby(φ_n, ρ_n). We proceed by induction onn. The base case is simply the fact thatφ_0 ∈Γ(ρ_0)^-. For the induction step, suppose(φ_n, ρ_n)is such thatφ_n ∈Γ(ρ_n)^-, and the next position is(φ_n+1, ρ_n+1). We make a case distinction based on the shape ofφ_n. Note thatφ_n ∉{p, p}, for otherwise there would not be a next position(φ_n + 1, ρ_n + 1).
If the main connective ofφ_nis among{, μ, ν}, it follows directly from saturation thatφ_n + 1belongs toΓ(ρ_n+1)^-. Ifφ_nis a conjunction, thenφ_n + 1is the conjunct off_T(φ_n, ρ), which by the definition off_Tbelongs toΓ(ρ_n+1)^-.
Now supposeφ_nis of the form⟨ a ⟩ψ. Thenρ_n R_a^T ρ_n + 1, so eitherρ_n ρ_n + 1orρ_n + 1ρ_n. Ifρ_n R_a^T ρ_n + 1we clearly haveφ_n + 1 = ψ∈Γ(ρ_n+1)^-, by case 1(b) of Definition <ref>. Moreover, since in particularφ_n+1^b ∈𝖿𝗂𝗋𝗌𝗍(ρ_n+1), it follows from the restriction onTthat in case the focus rule is applied inρ_n+1, we haveφ_n+1^∈Γ(ρ_n+1). Ifρ_n + 1ρ_n, we argue by contradiction:
ψ∉Γ(ρ_n + 1)^- ⇒ψ∈Γ(ρ_n + 1)^- (Saturation)
⇒ [a] ψ∈Γ (ρ_n)^- (Case 1(c) of Definition <ref>, [a] ψ= ⟨ a ⟩ψ∈Σ)
⇒⟨ a ⟩ψ∉Γ(ρ_n)^-, (Saturation)
which indeed contradicts the inductive hypothesis that⟨ a ⟩ψ∈Γ(ρ_n)^-. Moreover, if the focus rule is applied inρ_n+1, we again argue by contradiction. Supposeψ^∉Γ(ρ_n+1). Thenρ_n+1^-does not containψ^$̆ after phase (1), whence we must have ψ∈Γ(ρ_n+1)^-. But then saturation gives ψ∉Γ(ρ_n+1)^-, and we can use the same argument as before. Finally, the case where φ_n is of the form [a] φ is similar to the easy part of the previous case and therefore left to the reader.
Proof of Lemma <ref>. We proceed by induction on the number of distinct states occurring in 𝒩.
For the base case, we assume that ρ is the only state visited in 𝒩. We proceed by induction on the length n + 1 of 𝒩. For the (inner) base case, where |𝒩 = 1|, we have 𝖿𝗂𝗋𝗌𝗍(𝒩) = (φ, ρ) = 𝗅𝖺𝗌𝗍(𝒩). By saturation φφ∉Γ(ρ) and thus φφ∈Γ(ρ), as required. For the inductive step, suppose the claim holds for every match up to size n + 1. Suppose |𝒩| = n + 2 and consider the final transition (χ, ρ) · (ψ, ρ) of 𝒩. Since the match proceeds after the position (χ, ρ), but does not move to a new state of 𝕊^T, it follows from the irreflexivity of 𝕊^T that the main connective of χ must be among {, , μ, ν}. Moreover, by Lemma <ref>, we have χ∈Γ(ρ)^-. We claim that χψ∈Γ(ρ)^-. When the main connective of χ is in {, μ, ν}, this follows directly from saturation. If χ is a conjunction, we have, since ℳ is f_T-guided, that (ψ, ρ) = f_T(χ, ρ). By the definition of f_T, it follows that χψ∈Γ(ρ), as required. We finish the proof of this special case of the lemma by applying the induction hypothesis to the initial segment of 𝒩 obtained by removing the last position (ψ, ρ). This gives φχ∈Γ(ρ), hence by saturation φψ∈Γ(ρ).
For the (outer) inductive step, suppose that n > 1 states are visited in 𝒩. We write 𝒩 as 𝒜_1 ·ℬ_1 ·𝒜_2 ·ℬ_2 ⋯𝒜_m,
where for every (χ, τ) in 𝒜_i it holds that τ = ρ and for every (χ, τ) in ℬ_i it holds that τ≠ρ. As 𝕊^T is a forest, there must for each ℬ_i be some γ_i, δ_i, and τ_i such that 𝖿𝗂𝗋𝗌𝗍(ℬ_i) = (γ_i, τ_i) and 𝗅𝖺𝗌𝗍(ℬ_i) = (δ_i, τ_i). Denote 𝖿𝗂𝗋𝗌𝗍(𝒜_i) = (α_i, ρ) and 𝗅𝖺𝗌𝗍(𝒜_i) = (β_i, ρ). Summing up, we will we use the following notation for each i ∈ [1, m):
𝖿𝗂𝗋𝗌𝗍(𝒜_i) = (α_i, ρ), 𝗅𝖺𝗌𝗍(𝒜_i) = (β_i, ρ), 𝖿𝗂𝗋𝗌𝗍(ℬ_i) = (γ_i, τ_i), 𝗅𝖺𝗌𝗍(ℬ_i) = (δ_i, τ_i).
Let i ∈ [1, m) be arbitrary. Since ℬ_i does not visit ρ, it must visit strictly less states than 𝒩. By the induction hypothesis we find that γ_i δ_i ∈Γ(τ_i). We claim that α_i β_i+1∈Γ(ρ). Since the match 𝒩 transitions from the state ρ to the state τ_i, there must be some a ∈𝖣 such that ρ R^T_a τ_i.
We first assume that ρτ_i. Then by the nature of the game, β_i must be of the form β_i = ⟨ a ⟩γ_i or of the form β_i = [a] γ_i, and, since by definition f_T only moves upwards in 𝕊^T, we must have δ_i = ⟨ă⟩α_i+1. We only cover the case where β_i = [a] γ_i (the case where β_i = ⟨ a ⟩γ_i is almost the same, but uses 2(c) instead of 2(a) of Definition <ref>). We indeed find:
γ_i ⟨ă⟩α_i+1∈Γ(τ_i) (Induction hypothesis, δ_i = ⟨ă⟩α_i+1)
⇒ γ_i ⟨ă⟩α_i+1∉Γ(τ_i) (Saturation)
⇒ [a]γ_i α_i+1∉Γ(ρ) (Case 2(a) of Definition <ref>)
⇒ β_i α_i+1∈Γ(ρ), (Saturation, β_i = [a]γ_i)
Now suppose that τ_i ρ. Then β_i must be of the form β_i = ⟨ a ⟩γ_i, because the strategy f_T moves only upwards in 𝕊^T. Moreover, we have δ_i = [ă] α_i+1 or δ_i = ⟨ă⟩α_i + 1. An argument similar to the one above, respectively using cases 2(b) and 2(d) of Definition <ref>, shows that ⟨ a ⟩γ_i α_i+1∈Γ(ρ).
Applying the induction hypothesis to the 𝒜_i, we have α_i β_i ∈Γ(ρ) for every 1 ≤ i ≤ m. Hence, by saturation, we find γ_1 δ_m∈Γ(ρ), as required.
Proof of Proposition <ref>. Let ℳ be an arbitrary f_T-guided and full ℰ-match. By positional determinacy, we may without loss of generality assume that ∃ adheres to some positional strategy in ℳ. First suppose that ℳ is finite. We consider the potential cases one-by-one.
If φ is a propositional letter p, we find:
φ = p ⇒ p ∈Γ(ρ)^- ⇒p∉Γ(ρ)^- ⇒𝕊^T, ρ⊮p,
where the first implication holds due to Lemma <ref>, the second due to saturation, and the third by the definition of the valuation function of 𝕊^T. It follows that in this case ∃ gets stuck.
Similarly, if φ is a negated propositional letter p, we find:
φ = p⇒p∈Γ(ρ)^- ⇒𝕊^T, ρ⊩ p ⇒𝕊^T, ρ⊮p,
hence again ∃ gets stuck.
Finally, we claim that φ is not of the form [a]ψ. Indeed, in that case the fact that [a] ψ∈Γ(ρ)^- would entail that the modal rule is applicable. Hence f_T(φ, ρ) would be defined, contradicting the assumed fullness of ℳ.
Now suppose that ℳ is infinite, say ℳ = (φ_n , ρ_n)_n ∈ω. Suppose first that some state ρ is visited infinitely often in ℳ. By the pigeonhole principle, there must be a formula φ and segment 𝒩 of ℳ such that 𝖿𝗂𝗋𝗌𝗍(𝒩) = 𝗅𝖺𝗌𝗍(𝒩) = (φ, ρ). Since both players follow a positional strategy, we can write the match ℳ as 𝒦𝒩^*, where 𝒦 is some initial segment of ℳ. But this means that only finitely many states of 𝕊^T occur in ℳ. As ℳ is winning for ∃, there must, by Proposition <ref>, be some formula ν x ψ occurring infinitely often in ℳ. Therefore, there must be a position (ν x ψ, τ) occurring infinitely often in ℳ. But then Lemma <ref> gives φ[ν x φ/ x] ν x ψ∈Γ(τ), contradicting saturation.
Hence we may assume that ℳ visits each state ρ at most finitely often. Suppose, towards a contradiction, that ℳ is won by ∃. Let (φ_α(0), ρ_α(0)) be a position of ℳ after which every unfolding is a ν-unfolding, and ρ_n > ρ_α(0) for every n > α(0). Recursively let α(i + 1) be the least index greater than α(i) such that for every m > α(i + 1) it holds that ρ_m > ρ_α(i+1).
It is not hard to see that for each i there is an a_i ∈𝖣 with ρ_α(i)ρ_α(i+1). This gives a T-guided 𝒢-match 𝒦 = ρ_α(0)·𝖱_[a_α(0)]·ρ_α(1)·𝖱_[a_α(1)]·ρ_α(2)·𝖱_[a_α(2)]⋯. Note that 𝒦 is infinite, as ℳ visits infinitely many states. Because T is by assumption winning for Refuter, the focus rule must be applied infinitely often.
Let ρ_α(i) with i > 0 be a segment on which the focus rule is applied. Note that φ_α(i) - 1 is modal, hence we obtain by Lemma <ref> that φ_α(i)^∈Γ(ρ_α(i)). We claim that for every j > i it holds that every sequent in ρ_α(j) has a formula in focus. With this we reach the desired contradiction, because it means that the focus rule cannot be applied on this final segment of 𝒦 after all.
In particular, we will show that φ_α(j)^∈𝖿𝗂𝗋𝗌𝗍(ρ_α(j)) for every j > i, which suffices by the restriction of T to cumulative rule applications. We proceed by induction on j - i. For the base case, we wish to show that φ_α(i+1)^∈𝖿𝗂𝗋𝗌𝗍(ρ_α(i + 1)). To that end, consider 𝒥 = (φ_α(i), ρ_α(i)) ⋯ (φ_α(i+1) - 1, ρ_α(i + 1) -1). Since T is a forest, we have ρ_α(i+1) - 1 = ρ_α(i) and thus either α(i) = α(i + 1) - 1, in which case, by saturation φ_α(i)φ_α(i + 1) - 1∈Γ(ρ_α(i + 1) - 1), or |𝒥| > 1 and we may apply Lemma <ref> to again obtain φ_α(i)φ_α(i+1) - 1∈Γ(ρ_α(i)). Since ρ_α(i)ρ_α(i+1), it follows that φ_α(i+1) - 1 must be of the form ⟨ a_i⟩φ_α(i+1) or of the form [a_i] φ_α(i+1). In either case, Definition <ref>.1 gives φ_α(i+1)^∈𝖿𝗂𝗋𝗌𝗍(ρ_α(i+1)), as required.
For the induction step we can use precisely the same argument.
|
http://arxiv.org/abs/2307.00666v1
|
20230702210156
|
Real-time Vision-based Navigation for a Robot in an Indoor Environment
|
[
"Sagar Manglani"
] |
cs.CV
|
[
"cs.CV"
] |
Real-time Vision-based Navigation for a Robot in an Indoor Environment
Sagar Manglani
=======================================================================
This paper presents a study on the development of an obstacle-avoidance navigation system for autonomous navigation in home environments. The system utilizes vision-based techniques and advanced path-planning algorithms to enable the robot to navigate toward the destination while avoiding obstacles. The performance of the system is evaluated through qualitative and quantitative metrics, highlighting its strengths and limitations. The findings contribute to the advancement of indoor robot navigation, showcasing the potential of vision-based techniques for real-time, autonomous navigation.
§ INTRODUCTION
The objective of this project is to develop a robust obstacle-avoidance navigation system for a low-cost, 3D-printed, four-legged walking robot in home environments. The robot aims to autonomously navigate towards a specified destination point in the lowest amount of time while avoiding obstacles.
Figure <ref> depicts the fundamental components employed in this project, comprising a robot equipped with an RGBD camera for perception and an Nvidia Jetson Xavier NX for onboard computation. Notably, the navigation system solely relies on visual information by utilizing RGB images exclusively to navigate through the environment. As part of the baseline implementation, our aim is to devise an optimal navigation path through the environment illustrated in Figure <ref>, starting from the bottom center of the image and concluding at the top center of the image.
§ LITERATURE REVIEW
Several research projects have focused on the development of autonomous robots capable of navigating diverse environments. However, there is a scarcity of research specifically addressing the navigation challenges faced by legged robots in indoor environments, particularly when relying on single-camera vision. Legged robots possess the ability to traverse uneven surfaces and overcome obstacles such as stairs, which are typically inaccessible to traditional wheeled robots.
While the common approach in the literature relies on LiDAR-based measurements for environment mapping and path planning, there has been limited research exploring vision-only methods for indoor navigation. Existing approaches often rely on preliminary techniques such as image contrast, which have a high probability of failure. In contrast, our paper proposes utilizing image segmentation with deep neural networks, which have shown significantly higher success rates in understanding the environment.
In the paper titled "Indoor Robot Navigation with Single Camera Vision" by Gini et al. <cit.>, the authors explore indoor navigation using a wheeled robot equipped with a single camera. Their approach relies on image contrast to estimate ground and wall regions, employs a grid-based representation, and utilizes the A* search algorithm. Although this method demonstrates commendable progress, it exhibits limitations in accurately differentiating between multiple floor types and adequately perceiving obstacles, thus hindering its ability to assign varying costs for search.
Another notable work, "Development of an Autonomous Navigation System for an Indoor Service Robot Application" by Seo et al. <cit.>, combines odometry and laser measurements to map the environment. Monte Carlo localization is employed for robot localization, and the A* algorithm is utilized for navigation planning. However, a drawback of this method is its incapability to maneuver around obstacles not detected by laser-based measurements, and the use of fixed costs for A* planning, which may not accurately reflect the optimal path in the environment.
§ DATASET
To develop and evaluate our navigation system, we built a dataset comprising images captured by the robot's RGB camera. The dataset encompasses varying home environments, including obstacles of different shapes arranged at different locations. The dataset used in this project includes 10 manually annotated environments, each containing various scenes and obstacles. This dataset is split into 2 logs and each log represents a set of diverse images with varying number of objects in a particular home environment. In addition, we have included 1200 sequential unlabeled images showcasing a moving robot in the given home environment. These images are specifically intended for testing purposes, allowing us to evaluate the navigation system's performance in dynamic scenarios. By incorporating sequential images, we aim to simulate real-world conditions and assess the system's ability to adapt and navigate effectively in changing environments.
§ METHODOLOGY
§.§ Key steps
The implementation involves several key steps for obstacle-avoidance navigation. First, we preprocess the color images by cropping them to focus on the relevant floor area (see Figure <ref>). Subsequently, we employ a state-of-the-art semantic segmentation network, known as Segment-Anything by Meta <cit.>, to obtain accurate floor and obstacle segmentation results (see Figure <ref>). We assign costs to the segmented obstacles based on their characteristics and the robot's ability to traverse them (see Figure <ref>). The selection of costs plays a crucial role in determining the trajectory followed by the robot. As illustrated in Figure <ref>, lower costs are depicted by darker regions, while higher costs are represented by lighter regions. In the context of this project, we have made deliberate choices regarding cost assignment to various elements in the environment.
Specifically, we have assigned a relatively lower cost to walk on the carpeted surface, as the robot exhibits greater stability and maneuverability on this type of terrain. Conversely, a higher cost is allocated to walking on hardwood floors due to the tendency of the 3D-printed foot of the robot to experience slippage in such conditions. Moreover, objects such as books and other obstacles within the environment are treated as impediments and are assigned a significantly higher cost. This strategic cost assignment effectively encourages the robot to circumvent these obstacles during path planning, promoting efficient navigation through the environment.
The obtained cost image is subsequently transformed into a birds-eye view (BEV) perspective, as depicted in Figure <ref>, alongside the corresponding color image shown in Figure <ref>. This transformation process involves calculating the mapping between the perspective view and the BEV space. To achieve this, four points in the perspective image are identified and matched to their corresponding positions in the BEV space. By employing homography, a transformation matrix is then computed to map the two planes. To enable efficient search operations, a pixel-to-millimeter ratio of 1:1 is employed, wherein one pixel in the BEV space corresponds to one millimeter on the ground. Leveraging this transformation matrix, the perspective image is remapped to generate the BEV image, as exemplified in the transition from Figure <ref> to Figure <ref>.
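As a rough illustration of this remapping, the homography and warp can be computed with OpenCV as in the sketch below; the four point correspondences are placeholders that would come from the actual camera mounting and calibration, and the 1900x2000 BEV size follows the 1 px = 1 mm convention used above.

```python
import cv2
import numpy as np

# Four floor points in the perspective image (1280x720) and their intended
# locations in the birds-eye-view (BEV) image.  These correspondences are
# placeholders; in practice they are derived from the camera pose.
src_pts = np.float32([[180, 360], [460, 360], [620, 700], [20, 700]])
dst_pts = np.float32([[0, 0], [1900, 0], [1900, 2000], [0, 2000]])  # 1 px = 1 mm

H = cv2.getPerspectiveTransform(src_pts, dst_pts)   # perspective -> BEV homography

cost_img = cv2.imread("cost_perspective.png", cv2.IMREAD_GRAYSCALE)  # hypothetical cost image
bev_cost = cv2.warpPerspective(cost_img, H, (1900, 2000))            # (width, height) in mm
```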
The resulting BEV cost map, presented in Figure <ref>, is then utilized to construct a 19x20 cost grid, as illustrated in Figure <ref>. This involves calculating the mean cost value within each grid cell, which corresponds to a 100x100mm area on the ground. The choice of this cell size is influenced by the size of the cost grid as well as the width of the robot. Subsequently, the A* algorithm is employed to determine the optimal path with the lowest cost, taking into account both obstacle avoidance and efficient navigation toward the destination point.
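Continuing the sketch above, the BEV cost image can be collapsed into the 100 mm cost grid by block-averaging:

```python
# Block-average the 1900x2000 BEV cost image (1 px = 1 mm) into 100x100 mm cells.
h, w = bev_cost.shape                      # expected (2000, 1900)
grid = (bev_cost.astype(float)
        .reshape(h // 100, 100, w // 100, 100)
        .mean(axis=(1, 3)))
print(grid.shape)                          # (20, 19): the 19x20 grid as rows x columns
```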
§.§ A* setup
In the grid-based A* algorithm, the state is represented by the current location, while the available actions correspond to movement in the four cardinal directions. Initially, a cost map is generated, assigning very high costs to all cells. As the algorithm explores the states, these costs are gradually updated. Successor states are determined based on the current state and selected actions, and they are added to a priority queue sorted by their associated costs. To introduce heuristics into the cost calculations for A*, we have incorporated the Manhattan distance between the current state and the destination as the heuristic measure. This choice of heuristic is motivated by its consistency, as it typically underestimates the actual cost required to reach the destination. The algorithm proceeds by exploring the state with the lowest cost in the queue, updating the priority queue and cost map based on the determined successors, and repeating this process until the destination state is reached.
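A minimal version of this grid search is sketched below; the start and goal cells are illustrative (bottom-center and top-center of the grid), and per-cell costs are assumed to be at least the unit step cost so that the Manhattan heuristic remains an underestimate.

```python
import heapq

def astar(grid, start, goal):
    # Grid A* with 4-connected moves and a Manhattan-distance heuristic.
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    best = {start: 0.0}            # lowest cost found so far per cell
    came = {}
    frontier = [(h(start), start)]
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:            # reconstruct the path back to the start
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                g = best[cur] + grid[nxt[0]][nxt[1]]
                if g < best.get(nxt, float("inf")):
                    best[nxt] = g
                    came[nxt] = cur
                    heapq.heappush(frontier, (g + h(nxt), nxt))
    return None

# Example call for a 20x19 cost grid:
# path = astar(grid.tolist(), start=(19, 9), goal=(0, 9))
```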
§.§ Real-time optimization
The performance bottleneck of the method lies in the semantic segmentation step, which currently takes over 15 seconds to process each frame. To overcome this limitation and achieve real-time optimization, three modifications were implemented:
Firstly, the model was quantized from FP32 to INT8 precision, resulting in a negligible loss of segmentation performance. Secondly, the input image resolution was reduced from 1280x720 to 640x360, impacting segmentation accuracy at long-range but maintaining effectiveness at shorter distances. Lastly, the vit_b model with the smallest backbone in the Segment Anything models was utilized, causing only a minor reduction in segmentation accuracy.
These optimizations resulted in the model being able to process each image in well under a second on the Nvidia Jetson Xavier NX onboard the robot, significantly improving real-time performance.
§ EVALUATION METRIC
To evaluate the effectiveness of our obstacle-avoidance navigation system, we employ both qualitative and quantitative metrics. Qualitatively, we visually assess the generated navigation path overlaid on the BEV color image to verify its adherence to the desired trajectory and successful obstacle avoidance. Quantitatively, we compare the generated paths with the manually-annotated data to benchmark the system's performance.
To further evaluate the effectiveness of our obstacle-avoidance navigation system, we will also qualitatively evaluate the system's performance on a dataset of 1200 sequential images showing a moving robot in the environment. By visually assessing the inference results on these images, we can gain insights into the system's ability to effectively navigate and avoid obstacles in a dynamic environment. Additionally, this qualitative evaluation will provide valuable feedback on the system's performance in real-world scenarios, complementing the quantitative metrics previously mentioned.
§ RESULTS AND ANALYSIS
At the present stage, we have observed excellent results with the A* navigation system.
Figure <ref> above depicts the outcomes obtained from the A* search implementation. Figures <ref> and <ref> illustrate the cost grid and the corresponding optimal low-cost path generated using the A* algorithm. It is important to note that the origin of navigation in the image is located at the bottom center (indicated by the red marker in <ref>), while the destination is positioned at the top center (indicated by the green marker in <ref>).
Moreover, by computing the inverse transformation matrix, we can convert the bird's-eye view (BEV) to the perspective view. This enables us to transform the results into the perspective view and overlay them onto the original image for visualization. Figure <ref> demonstrates this overlay, showcasing the path computed based on vision-based sensors alone. The visual representation highlights the system's ability to perform accurate obstacle detection and avoidance through qualitative analysis.
§.§ Qualitative Evaluation
Qualitative evaluation involves visually analyzing the navigation paths generated by the obstacle-avoidance system overlaid on environment images. This helps identify strengths while also highlighting any unexpected behaviors during navigation. Figures <ref> and <ref> below represent the dataset used for evaluation, along with their corresponding qualitative assessments.
§.§ Quantitative Evaluation
We performed manual labeling for each step in a dataset comprising 10 images, encompassing the entire 19x20 grid. These labels were then compared against the paths generated by the search algorithm. The results of this comparison are presented in the following tables, with each table representing a specific log.
§.§ Qualitative Evaluation of Test Data
The test data consists of 1200 images, and a qualitative evaluation of the test data is provided in the video (please refer to the code section). Figure <ref> showcases some of the notable moments captured in the video.
Observing the qualitative and quantitative results, it becomes evident that certain paths contain errors, particularly when the walking robot encounters a blocked path. These errors will be further analyzed and discussed in detail in the Error Analysis section.
§ ERROR ANALYSIS
Our experiments demonstrate that the navigation system exhibits several strengths, yet there are notable limitations that need to be addressed. The issues are exemplified by the blocked paths depicted in Figures <ref>, <ref>, and <ref>. In Figure <ref>, the semantic segmentation fails to recognize the carpet beyond the obstacles, resulting in a path planning error where the robot attempts to navigate through the highest-cost route. Figure <ref> demonstrates a situation where the vision system identifies a small gap and plans a path through it, despite the practical impossibility for the robot to traverse this path successfully. In Figure <ref>, a slightly skewed path is taken due to an erratic boundary in the segmentation network's output. Furthermore, Figure <ref> illustrates a scenario where an obstacle near the destination causes the search algorithm to choose a sub-optimal path that traverses the obstacle.
The main takeaway from these examples is that the accuracy of the segmentation system directly influences the accuracy of the robot's navigated path. It is worth noting that the segmentation network tends to have lower accuracy at longer distances. However, this limitation can be mitigated to some extent as robots are expected to continuously re-plan their paths while moving forward. Shorter distances allow for error correction, but it is important to recognize that this may lead to sub-optimal paths overall.
Beyond the analysis discussed above, one of the major challenges here is the robot's difficulty in perceiving and planning paths beyond tall obstacles, which hinders its ability to navigate effectively in such scenarios. The take-away is that, to handle tall obstacles, it is worthwhile to investigate partially-observable search methods in future research.
§ FUTURE WORK
Looking beyond the scope of this project, our research will focus on further enhancing the obstacle-avoidance navigation system by investigating advanced path-planning techniques. One such technique is performing search in partially-observable environments to plan paths in scenarios where the environment is not entirely observable to the robot.
Furthermore, our future work will involve expanding the scope of the research to encompass a wider range of home environments, including stairs and uneven surfaces. Additionally, we will explore techniques to optimize the efficiency of the system in real-time scenarios, taking into consideration factors such as robot stability, dynamic obstacle avoidance, and resource constraints.
§ CODE AND VIDEO
Repository: https://github.com/manglanisagar/vision-search-navigationGithub Link
Note: The offline code is currently available for testing purposes, while the online code, which is specifically designed for deployment on the robot, will be released after publication.
Video: https://youtu.be/CTwg6dD-oxIYoutube Link
Dataset Links:
1. Labeled Data: https://drive.google.com/drive/folders/1sxXblBL04injdSfNBE3NMQGg7dtDwJAs?usp=sharingGoogle Drive Link
2. Unlabeled Data: https://drive.google.com/drive/folders/1xe9N7UEEH2GSFKTQ7Z9DMfBM1-183ovO?usp=sharingGoogle Drive Link
unsrt
1
kirillov2023segment
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo and others.
Segment anything.
arXiv preprint arXiv:2304.02643, 2023.
seo2013development
Dong Jin Seo and Jongwoo Kim.
Development of autonomous navigation system for an indoor service robot application.
In 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013), pages 204–206. IEEE, 2013.
gini2002indoor
Giuseppina C Gini, Alberto Marchi and others. Indoor robot navigation with single camera vision. PRIS 2, pages 67–76. 2002.
|
http://arxiv.org/abs/2307.03334v1
|
20230707003016
|
Variational quantum regression algorithm with encoded data structure
|
[
"C. -C. Joseph Wang",
"Ryan S. Bennink"
] |
quant-ph
|
[
"quant-ph",
"cs.LG"
] |
[email protected], [email protected]
Quantum Computational Science Group, Quantum Information Science Section, Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (https://www.energy.gov/doe-public-access-plan).
Variational quantum regression algorithm with encoded data structure
C.-C. Joseph Wang and Ryan S. Bennink
August 1, 2023
====================================================================
Variational quantum algorithms (VQAs) prevail to solve practical problems such as combinatorial optimization, quantum chemistry simulation, quantum machine learning, and quantum error correction on noisy quantum computers.
For variational quantum machine learning, a variational algorithm with model interpretability built into the algorithm is yet to be exploited. In this paper, we construct a quantum regression algorithm and identify the direct relation of variational parameters to learned regression coefficients, while employing a circuit that directly encodes the data in quantum amplitudes reflecting the structure of the classical data table. The algorithm is particularly suitable for well-connected qubits.
With compressed encoding and digital-analog gate operation, the run time complexity is logarithmically more advantageous than that for digital 2-local gate native hardware with the number of data entries encoded, a decent improvement for noisy intermediate-scale quantum computers and a minor improvement for large-scale quantum computing.
Our suggested method of compressed binary encoding offers a remarkable reduction in the number of physical qubits needed when compared to the traditional one-hot-encoding technique with the same
input data.
The algorithm inherently performs linear regression but can also be used easily for nonlinear regression by building nonlinear features into the training data.
The measured cost function, which distinguishes a good model from a poor one during training, is effective only when the number of features is much smaller than the number of records, so that the encoded data structure remains observable.
To echo this finding and mitigate hardware noise in practice, ensemble training of the quantum regression model, with important-feature selection through regularization, is incorporated and illustrated numerically.
Regression models are predictive models that learn the map between a target continuous variable and predictors (attributes/input variables/features) in training.
The predictor variables can generally be transformed into continuous ones with the appropriate interpretation based on the transformation performed. Regression models are important machine learning models to study due to their wide adoption in industrial applications at scale, as opposed to more complex models such as neural networks, which typically focus on predicted results and less on understanding the correlation between the prediction outcomes and the predictors. Additional features, such as the flexibility to model nonlinear dependencies based on domain expertise and the ability to perform relevant variable selection with regularization techniques, further enhance the utility of regression modeling in statistical learning.
Model interpretability to boost explainability is essential for the wider adoption of machine learning applications, especially in domains where wrong model predictions can cause serious consequences. For example, in healthcare and financial applications, strict regulations require models with clear interpretation to validate model predictions, for the model to be approved/trusted. Model interpretability is an equally valid criterion for quantum machine learning but so far has received little attention. While a quantum regression algorithm was proposed a decade ago <cit.> and other approaches based on matrix inversion and quantum kernel methods have been proposed recently <cit.>, these works assumed noise-free quantum hardware and did not address model interpretation issues.
Here we revisit quantum regression from a variational perspective with a known encoded data structure and develop an algorithm that provides interpretive value and prediction power as required to be useful in the noisy intermediate-scale quantum (NISQ) era <cit.>.
In the NISQ era, variational quantum algorithms are undeniably the most feasible method <cit.>. They offer a practical solution to circumvent quantum hardware noises and intricate controls while maintaining their versatility for quantum computation <cit.>.
We provide a different view to mitigating the scaling obstacles due to noise by ensemble regression learning with regularization. Regularization has been useful for resolving the well-known over-fitting problem in classical machine learning. While overfitting will likely not be a problem until quantum hardware noise is greatly suppressed, we show that regularization techniques can be a valid classical strategy for selecting important features in the context of the quantum algorithm.
§ HYBRID QUANTUM REGRESSION ALGORITHM
§.§ Problem statement
Regression is the task of determining the relationship between a set of independent quantities (or “features”) (X_1,…,X_M) and a dependent quantity (or “response”) Y from experimental data. It is one of the most common and important tasks in all science, with particular prevalence in data modeling and machine learning. Usually the relationship is assumed to be linear in X_m, Y = ∑_m=1^M W_m X_m where (W_1,…,W_M) are known as regression coefficients or importance weights.
However, by treating products of independent variables as additional independent variables, linear regression can also be used to model nonlinear relationships. In the typical regression scenario, one has L independent observations (y_0,…,y_L-1) of Y and corresponding observations (x_0m,…,x_(L-1)m) of each variable X_m. The goal of (linear) regression is to determine the coefficients (W_1,…,W_M) that best fit the data.
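For reference, the purely classical version of this problem on synthetic data is a single least-squares solve; the data and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M = 50, 3
X = rng.normal(size=(L, M))                 # L observations of M features
W_true = np.array([0.7, -1.2, 0.4])
y = X @ W_true + 0.01 * rng.normal(size=L)  # noisy responses

W_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(W_fit)                                # close to W_true
```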
We propose a method to solve the linear regression problem using variational quantum circuits whose parameters encode the regression coefficients. The best regression coefficients are found by classical optimization with respect to a regularized cost function that furthermore helps to find the subset of features that are most important. A key aspect of our approach is that the structural data are encoded directly in the amplitudes of the quantum state and the regression coefficients are encoded directly in the parameters of the quantum circuit, which leads to optimal interpretability. We note that protocols for implementing quantum amplitude encoding are still under active research <cit.>. Along with our regression algorithm, we provide several state preparation algorithms to facilitate the implementation of regression on near-term quantum computers.
§.§ Quantum amplitude encoding
The first step of our algorithm is to encode the observations y_0,…,y_L-1, x_11,…,x_LM in a quantum state.
For notational convenience we define x_l0≡ y_l and define 𝐗 as the matrix with elements 𝐗_lm = x_lm for l=0,…,L-1 and m=0,…,M.
We begin by shifting and rescaling the data columnwise so that each column of 𝐗 has zero mean and equal variance. This ensures that our algorithm is equally sensitive to all variables for best training. Then the data is globally scaled so that ∑_l,m x_lm^2 = 1.
(The scaling of the data must be accounted for when interpreting the regression coefficients obtained by the algorithm.)
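A small classical sketch of this preprocessing step, with the response placed in column 0 of the data matrix as in the rest of the paper (the helper name is ours):

```python
import numpy as np

def prepare_data_matrix(y, X):
    # Column 0 holds the response; standardize column-wise, then rescale
    # globally so that sum_{l,m} x_lm^2 = 1.
    D = np.column_stack([y, X]).astype(float)
    D = (D - D.mean(axis=0)) / D.std(axis=0)   # zero mean, equal variance per column
    return D / np.sqrt((D ** 2).sum())         # global normalization
```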
Therefore, the data can in principle be mapped to the amplitudes of a quantum state:
|ψ_D⟩ = ∑_l,m x_lm|lm⟩
where {|lm⟩} are computational basis states of a quantum system having at least L(M+1) orthogonal states.
For now, we do not go into details about possible encoding schemes or methods of preparing |ψ_D⟩ so that we can illustrate the main ideas of the algorithm.
Such details will be discussed in Section 2.
§.§ Mapping of regression coefficients to quantum amplitudes
Our goal is a variational circuit whose structure reflects that of the regression problem at hand and whose output is proportional to regression error
E = ∑_l=0^L-1 ( y_l - ỹ_l)^2
where
ỹ_l = ∑_m=1^M x_lm W_m
is the predicted value of y_l.
We show first how to multiply a given feature (column of 𝐗) by a controllable coefficient.
It will be convenient for exposition to treat the row index l and column index m as separate quantum degrees of freedom, |lm⟩ = |l⟩⊗ |m⟩≡ |l⟩ |m⟩. Consider the operator
U^(m)(ϕ)
= 1⊗ e^-iϕ |m⟩⟨ m|
which acts as identity (1) on the row (observation) register and imparts a phase to a selected element of the column (feature) register. It maps |l⟩ |m⟩ to e^-iϕ |l⟩ |m⟩ and leaves all other basis states unchanged. Thus when applied to |ψ_D⟩ it maps x_lm→ e^-iϕ x_lm for all l. By extension, the sequence ∏_m=1^M U^(m)(ϕ_m) applies a controllable phase ϕ_m to each column m of the data. In this case the resulting state would be
|ψ_D⟩ = ∑_l, m x_lme^-iϕ_m|l⟩ |m⟩.
Notice that the relation between ϕ_m and the coefficient of |l⟩ |m⟩ is not exactly what we are looking for if we were to associate the phase ϕ_m with the real regression parameters. The quantum map would not be real (up to a global phase factor) and is not linear in ϕ_m as expected for conventional linear regression.
Furthermore, the regression coefficients should range over (-∞, +∞), whereas the unique range of ϕ_m is [-π,π). Based on these observations we cannot make a direct association of the phases ϕ_m with the regression weights W_m. However, if we engineer the circuit in a target code space to yield
|ψ_l⟩∝∑_m x_lm ( e^-iϕ_m+e^+iϕ_m) |l⟩ |m⟩∝∑_mx_lmcosϕ_m |lm ⟩
we can identify W_m ∝cosϕ_m∈ [-1.0, 1.0], with the proportionality chosen to bring the weights into the needed range.
§.§ Quantum regression algorithm
To engineer this mapping of phases to regression weights we use controlled phase gates of the form
U_C^(m)(ϕ)
=|0⟩⟨ 0| ⊗1⊗ e^iϕ_m |m⟩⟨ m| +
|1⟩⟨ 1| ⊗1⊗ e^-iϕ_m |m⟩⟨ m|
which act on an ancilla qubit for control, row register, and column register respectively.
(Note that if the hardware does not natively support such a controlled gate with symmetric phases, it can be realized as an uncontrolled rotation e^i ϕ_m followed by a controlled rotation e^-2iϕ_m).
This gate imparts the phase e^iϕ_m to |0⟩⊗ |l⟩⊗ |m⟩, imparts the phase e^-iϕ_m to |1⟩⊗ |l⟩⊗ |m⟩, and leaves states with column index ≠ m unchanged.
As we now show, the transformation x_lm→cosϕ_m x_lm can be accomplished by such controlled phase gates with a suitably prepared and measured ancilla. The steps of the algorithm and corresponding evolution of the quantum state are:
* Prepare the data state |ψ_D⟩:
|ψ_D⟩ = ∑_l,m x_lm|lm⟩
Ways of doing this will be discussed in Section 2.
* Prepare an ancilla qubit in the state |+⟩≡ (|0⟩ + |1⟩)/√(2):
⟶|+⟩⊗ |ψ_D⟩.
* Apply controlled phase gates U_C^(m) for each column m:
⟶ ∏_m U_C^(m)(ϕ_m)
( |0⟩ + |1⟩/√(2)⊗ |ψ_D⟩)
= 1/√(2)∑_l,m( e^iϕ_m |0⟩ + e^-iϕ_m |1⟩) ⊗ x_lm |lm⟩
* Apply a Hadamard gate to the ancilla:
⟶ 1/2∑_l,m( e^iϕ_m (|0⟩ + |1⟩) + e^-iϕ_m (|0⟩-|1⟩) ) x_lm |lm⟩
= 1/√(2)∑_l,m( cosϕ_m |0⟩ - i sinϕ_m |1⟩) x_lm |lm⟩
* Project the ancilla onto the state |0⟩:
⟶ |Ψ_0⟩ = ∑_l,m x_lmcosϕ_m |lm⟩
* Measurement by the hermitian operator:
M̂ = ∑_l=0^L-1∑_m,m'=1^M |lm⟩⟨ lm'|.
As shown in Appendix A, the expectation value ⟨M̂⟩≡⟨Ψ_0|M̂|Ψ_0⟩ is
⟨M̂⟩ = ∑_l = 0^L-1( ∑_m = 0^M x_lmcosϕ_m)^2
= (cosϕ_0)^2 ∑_l( y_l - ∑_m=1^M x_lm W_m )^2
where we identify W_m = -cosϕ_m/cosϕ_0 as the regression coefficient for feature m, with M features in total, and the response variable component y_l is by definition the x_l0 component. With this identification the sum in the equation above can be recognized as the regression error (<ref>). This result bridges the gap between our quantum regression algorithm and the conventional regression algorithm and enables clear interpretation of the variational parameters as discussed in our numerical studies.
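This identity is easy to check with a classical statevector emulation of the steps above: build the (unnormalized) amplitudes x_lm cos ϕ_m directly and compare ⟨M̂⟩ with (cos ϕ_0)^2 times the regression error. The random data and phases below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, M = 8, 3
Xmat = rng.normal(size=(L, M + 1))        # column 0 holds the response y
Xmat /= np.sqrt((Xmat ** 2).sum())        # global normalization
phi = rng.uniform(0, np.pi, size=M + 1)
phi[0] = 0.6 * np.pi                      # phi_0 in (pi/2, 3pi/2), so cos(phi_0) < 0

psi = Xmat * np.cos(phi)                  # amplitudes of |Psi_0> in the |lm> basis
M_exp = np.sum(psi.sum(axis=1) ** 2)      # <M-hat> = sum_l (sum_m psi_lm)^2

W = -np.cos(phi[1:]) / np.cos(phi[0])
error = np.sum((Xmat[:, 0] - Xmat[:, 1:] @ W) ** 2)
print(np.isclose(M_exp, np.cos(phi[0]) ** 2 * error))   # True
```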
Although only the relative sign between the feature and response matters, we impose the condition ϕ_y = ϕ_0∈ (π/2, 3/2π) so that cosϕ_0 is always negative and nonzero, ensuring that the regression coefficients W_m are well defined.
§.§ Model training and regularization
Since the goal is to minimize the regression error, the simplest approach is to take the cost function C(𝐖) to be just the regression error E,
C( W) = ∑_l = 0^L-1(y_l - ∑_m = 1^M x_lmW_m)^2 = ⟨M̂⟩/(cosϕ_0)^2
where the regression weights W = (W_1,…,W_M) are implicit functions of the circuit parameters ϕ = (ϕ_0,…,ϕ_M).
The parameter vector ϕ̅ that minimizes the cost function yields the optimal linear regression coefficients 𝐖̅ with W̅_m = -cosϕ̅_m / cosϕ̅_0.
A fundamental question to ask is how sensitive the cost function is to a well-trained model as opposed to a poorly-trained model. When the data
used to train the model is noise-free, we expect the cost function for a well-trained model to be zero. However, for a poor model, the cost function should be large enough to measure with significant probability. These details will be discussed in Appendix B.
In practice, regression typically includes regularization terms to bias toward models that fit the data well with fewer features, which helps avoid overfitting. The cost function is modified to
C( W) = ∑_l = 0^L-1(∑_m = 1^M x_lm W_m-y_l)^2 + α∑_m = 1^M|W_m| + β∑_m = 1^M|W_m|^2.
where α,β > 0. This cost function is given as a general elastic net regularization (α≠ 0, β≠ 0), which accommodates LASSO (least absolute shrinkage and selection operator, α≠ 0, β = 0) or Ridge (β≠ 0, α = 0) regularization as limiting cases. We note that the regularization terms can be evaluated on a classical computer and simply added to the cost function evaluated by the quantum computer.
With this general scheme, we can build our hybrid quantum-classical algorithm to find the best parameters ϕ_m, α, β which minimize the overall cost function. One might be tempted to use popular gradient-based approaches to search for the minima. However, we do not consider this approach the best way forward, for the following reasons. We expect the cost landscape to contain many regions of vanishing gradient, owing to the cosine dependence on the phase angles, so the optimization may end up with solutions far from optimal. In addition, the measurement overhead needed to extract the gradient and the Hessian matrix of the cost function becomes the bottleneck for the overall hybrid quantum solution.
Instead, we employ the gradient-free Nelder-Mead (NM) optimization algorithm for the cost function to search the global minima.
The downhill simplex method employed by NM may frequently be the best method to use if the figure of merit is “get something working quickly” for a problem whose computational burden is small <cit.>.
We found that convergence to the optimal value of the cost function to high accuracy can typically be achieved by passing the sub-optimal result from the latest global NM search as the starting parameter for the next global search, iterating until the desired accuracy is reached.
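A classical emulation of this training loop is sketched below: the quantum expectation value is replaced by its closed-form value, the elastic-net penalties are added classically, and Nelder-Mead is restarted from the latest optimum. The restart count, penalty strengths, and initial phases are illustrative choices rather than prescriptions.

```python
import numpy as np
from scipy.optimize import minimize

def cost(phi, D, alpha=0.0, beta=0.0):
    # Regression error with W_m = -cos(phi_m)/cos(phi_0), plus elastic-net terms.
    W = -np.cos(phi[1:]) / np.cos(phi[0])
    resid = D[:, 0] - D[:, 1:] @ W
    return np.sum(resid ** 2) + alpha * np.sum(np.abs(W)) + beta * np.sum(W ** 2)

def train(D, alpha=0.0, beta=0.0, restarts=5, seed=2):
    rng = np.random.default_rng(seed)
    phi = np.concatenate([[0.75 * np.pi], rng.uniform(0, np.pi, D.shape[1] - 1)])
    for _ in range(restarts):   # iterated NM searches seeded by the previous optimum
        res = minimize(cost, phi, args=(D, alpha, beta), method="Nelder-Mead")
        phi = res.x
    return -np.cos(phi[1:]) / np.cos(phi[0]), res.fun
```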
When the model training involves a large amount of training data, the training can be broken down into ensemble training with multiple bootstrap data samples in parallel.
§ IMPLEMENTATION
Previously we have described how to implement the linear regression in a variational quantum circuit with controlled phase gates. To have an end-to-end solution, we need to consider how to encode the data in the quantum state, that is, how to prepare the data state |ψ_D ⟩. To that end we envision that the state can be prepared using programmable phase gates similar to those used to perform the regression. However, whereas in the regression step the phases depended only on the feature and were the same for each observation, to encode the data a distinct phase ϕ_lm will generally be needed for each data element x_lm.
We notice that due to the global normalization condition, each normalized element x_lm is generally much smaller than 1. This indicates we can encode these classical data elements through small phase angles in which sin x_lm≈ x_lm, that is, the phase angles are approximately the data elements themselves.
To minimize the potential hardware errors, we prefer low qubit counts while maintaining the simplicity of the algorithm. This is a particularly appealing solution for well-connected and programmable qubits such as Rydberg atom-based and ultra-cold ion quantum platforms <cit.>.
At this point, we consider specifics of various encodings of the data.
§.§ One-hot encoding
§.§.§ Data state preparation
In one-hot encoding each pair (l,m) is mapped to a single index in {1,…,L(M+1)} and encoded by a 1 value in the qubit with corresponding index. This would require L(M+1) qubits. One-hot-encoding should be avoided for large data sets as it is resource hungry.
For quantum machine learning on near-term quantum devices, this encoding is still useful
for the proof-of-concept of the algorithm we proposed since the circuits to implement it are relatively simple.
With one-hot encoding the pair (l,m) is mapped to a single index j = m + l(M+1) ∈{0,…,L(M+1)-1}. The basis state |lm⟩ is then encoded as |1_j⟩≡ |0...010...0⟩ which has 1 for qubit j and zero for every other qubit. This gives
|ψ_D⟩ = ∑_l,m x_lm |lm⟩
= ∑_j=0^L(M+1)-1 x_j |1_j⟩
We observe that a uniform superposition of one-hot-encoded states
is the well-known W state that can be prepared by an efficient procedure <cit.>.
The data state |ψ_D⟩ can be prepared using essentially the same procedure but with modified rotation angles to produce the generally nonuniform amplitudes x_j.
The basic building block of this procedure is the gadget
[circuit diagram omitted: a controlled-R_y(θ) rotation followed by a CNOT acting on a pair of adjacent qubits]
consisting of a controlled-Y rotation followed by a controlled-NOT gate.
Such gates can be realized without difficulty in many experimental platforms (for example see Fig. 2 in the reference <cit.>).
This gadget maps |10⟩ to cosθ |10⟩ + sinθ|01⟩.
Starting with the state |1_0⟩ = |10...0⟩ and applying this gadget with various angles to qubit pairs (0,1), (1,2), (2,3), … one can prepare an arbitrary superposition of basis states |1_0⟩, …, |1_L(M+1)-1⟩.
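The rotation angles of this cascade can be computed classically from the target amplitudes; the sketch below assumes real, nonnegative, globally normalized amplitudes and verifies that the cascade reproduces them.

```python
import numpy as np

def cascade_angles(x):
    # Angles theta_j for the controlled-R_y/CNOT cascade preparing sum_j x_j |1_j>.
    angles, residual = [], 1.0
    for j in range(len(x) - 1):
        c = x[j] / np.sqrt(residual) if residual > 0 else 0.0
        angles.append(np.arccos(np.clip(c, -1.0, 1.0)))
        residual -= x[j] ** 2
    return angles

def cascade_amplitudes(angles, n):
    # x_j = cos(theta_j) * prod_{i<j} sin(theta_i); the last amplitude is the leftover product.
    amps, prefix = [], 1.0
    for j in range(n - 1):
        amps.append(prefix * np.cos(angles[j]))
        prefix *= np.sin(angles[j])
    amps.append(prefix)
    return np.array(amps)

x = np.array([0.1, 0.5, 0.3, 0.2, 0.6, 0.1])
x = x / np.linalg.norm(x)                    # sum_j x_j^2 = 1
print(np.allclose(cascade_amplitudes(cascade_angles(x), len(x)), x))   # True
```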
For typical digital two-local gates, the run time complexity scales as LM in the encoding. With programmable fully connected non-local gates, the run time complexity will be of order one.
§.§.§ Quantum regression map
In the one-hot encoding, the ancilla-controlled phase gate used to impart regression coefficients takes the form
U_C^(j)(ϕ_j) = exp( -i ϕ_j |1⟩⟨ 1| ⊗ |1_j⟩⟨ 1_j| )
= exp( -i ϕ_j Z_A-I_A/2⊗Z_j-I_j/2)
where A denotes the ancilla, j indexes a data register qubit, Z_A (Z_j) is the Pauli Z operator on qubit A (j). It can be verified that U_C^j(ϕ_j) yields the desired effect on x_j as
U_C^j(|1⟩⊗ |1_j⟩) = exp(-iϕ_j)(|1⟩⊗ |1_j⟩) and for every other state U_C acts as the identity.
In Table <ref>, we summarize the full algorithm before measurement. The time complexity for the regression map will be further discussed in Appendix C.
§.§.§ Measurement
The measurement operator M̂ is the summation of individual operators of the form |lm⟩⟨ lm'|.
This may be understood as a transition from j=(l,m) to j'=(l,m') which can be achieved by operators of the form S_j'^+ S_j^- where S_j^+ = |1⟩⟨ 0| = (X_j - i Y_j)/2 is the raising operator on qubit j and S_j^- = |0⟩⟨ 1| = (X_j + i Y_j)/2 is the lowering operator on qubit j. M̂ can be written in terms of measurable quantities as
M̂ = ∑_l∑_m, m'|lm⟩⟨ lm'|
= I + ∑_l∑_m ≠ m' |lm⟩⟨ lm'|
= I + ∑_l∑_l(M+1) ≤ j < k ≤ l(M+1)+M( S_j^+S_k^- + S_k^+S_j^-)
= I + 1/2∑_l∑_l(M+1) ≤ j < k ≤ l(M+1)+M( X_jX_k+Y_jY_k),
where I denotes the global identity operator. Thus M̂ can be measured as a linear combination of I, X_j X_k, and Y_j Y_k measurements.
Take a two-by-two data table for encoding as an example, the basis states for l=0, m=0,1 are |1000⟩ and |0100⟩.
For l=1, m=0,1, the states are |0010⟩ and |0001⟩.
For l=0 the relevant state transition operators S_0^+S_1^- and S_1^+S_0^- are given by |1000⟩⟨ 0100| and |0100⟩⟨ 1000|. For l=1 the relevant state transition operators S_2^+S_3^- and S_3^+S_2^- are given by
|0010⟩⟨ 0001| and |0001⟩⟨ 0010|, respectively.
One-hot amplitude encoding introduced so far can be resource intensive as the number of qubits scales with the number of classical data entries.
However, the overall quantum algorithm is relatively simple. For the near-term hardware, we expect one-hot encoding to be the easiest to implement for proof-of-principle demonstrations of our algorithm. To mitigate hardware noise and reduce the size of quantum circuits needed a batch training strategy may be employed: One first divides the training data into much smaller batches of bootstrap samples and learns a regression model for each batch separately. Then the coefficients of the separate regression models are combined to yield an ensemble model, as will be demonstrated numerically in Section <ref>.
§.§ Compressed binary encoding
For the one-hot amplitude encoding, the allocated number of physical qubits to support the information grows linearly with the number of data entries.
To extend the quantum algorithm for distributed big data applications for potential quantum advantages, we need to have a much more compact
encoding scheme to minimize hardware noises due to a much larger qubit count for the same task.
Meanwhile, we want to again keep the structure of the classical data table and use the simple ancilla-controlled phase gates.
For this encoding scheme the row and column information are stored in separate qubit registers, |lm⟩ =|l⟩⊗ |m⟩. l and m are each encoded in binary using N_L = ⌈log_2 L ⌉ qubits for l and N_M = ⌈log_2 (M+1) ⌉ qubits for m. That is, |l⟩ = |l_N_L⟩⊗⋯|l_1⟩ and |m⟩ = |m_N_M⟩⋯|m_1⟩. Take for instance a 4× 4 data table representing 4 observations each of 3 features and 1 response variable. In this case the row index values l=0,1,2,3 would be represented by the four basis states |00⟩, |01⟩, |10⟩, and |11⟩, respectively; the same four basis states in the column register would encode m=0,1,2,3.
Thus, the number of qubits needed to store the entire data table is approximately log_2 L + log_2 M = log_2 LM, which represents a substantial compression.
However, due to the compressed encoding, the procedure to impart the regression coefficients into the quantum state has a much higher complexity in terms of one- and two-qubit operations.
Therefore, we will consider an alternative approach exploiting global entangling analog gates native to the latest cold-ion and Rydberg cold-atom systems.
§.§.§ State preparation
To prepare a quantum state |ψ_D⟩ containing the classical data X in a binary encoding scheme, we follow the scheme in <cit.> in which the real data is first digitized and programmed into a computational basis state of quantum memory register. This need only be done once upfront. Subsequently, each time a copy of the quantum data state |ψ_D⟩ is needed, it is prepared by a fixed, efficient circuit that coherently applies phases stored in the quantum memory register to a quantum processing unit (QPU) register, without destroying the data in the memory register.
We let k=0,…,K-1 index the entries of the data table, where K = L(M+1). Each data element x_k is encoded in the memory register as a key-value pair in a separate set of qubits:
|k⟩_KEY[k]|x̃_k⟩_VAL[k].
The key |k⟩ = |l⟩ |m⟩ is stored in binary encoding as discussed above using N_K = N_L + N_M qubits, chosen so that the number of keys K does not exceed 2^N_K. To store the value x_k, let a be an upper bound on the magnitude of the data: max_k |x_k| < a, where the magnitude of a depends on whether the data table is standardized and globally normalized before encoding. Then x_k ∈ [-a,a] can be represented using N_P bits of precision as
x_k ≈x̃_k = a ( 2^-1 (-1)^x_k,1 + 2^-2 (-1)^x_k,2 + ⋯ + 2^-N_P (-1)^x_k,N_P)
where x_k,1,…,x_k,N_P∈{0,1}. x̃_k is then stored in the memory as
|x̃_k⟩_VAL[k] = |x_k,1⟩|x_k,2⟩⋯|x_k,N_P⟩.
The full state of the memory register is
|𝐗⟩_MEM = ⊗_k=0^K-1|k⟩_KEY[k]|x̃_k⟩_VAL[k].
The total number of qubits for the memory register is K N_K N_P ≈ LM log_2 (LM) log_2 (ϵ^-1) where ϵ is the desired precision of each data element. While this number of qubits is linear in the number of elements, as noted above these qubits need only be kept in a classical digital state. Furthermore, the number of qubits can be reduced by using batching as previously discussed.
Once the data has been stored in the memory register, a fixed circuit uses the memory register non-destructively to impart the discrete data to amplitudes of the superposition state |ψ_D⟩ on the QPU register. We introduce an ancilla qubit in the state |+⟩ and an N_K qubit QPU register in a uniform superposition of all the binary encoded keys, yielding the state
|Ψ⟩ = |+⟩⊗( 1/√(K)∑_k|k⟩_QPU) ⊗|𝐗⟩_MEM
The essential step is a unitary which transfers the digitized classical data |x̃_k⟩ to the phase of the ancilla qubit when the key in the QPU register is k <cit.>:
U^k_D(θ_k) = exp( -i Z_A ⊗θ̂_VAL[k]⊗∑_k'|k'⟩⟨k'|_QPU⊗|k'⟩⟨k'|_KEY[k])
The operator θ̂_k, in which Z_VAL[k][j] is the operator on the j-th qubit on the value register VAL[k] for the k-th data element, is given by
θ̂_VAL[k] = ∑_j=1^N_PΔθ_j Z_VAL[k][j]
when applied to the memory register evaluates to the N_P-bit approximation to x_k:
θ̂_VAL[k]|𝐗⟩_MEM = x̃_k |𝐗⟩_MEM,
in which the eigenvalue is given by x̃_k ≡θ_k.
Notice that the phase Δθ_j = a 2^-j is predetermined and can be realized by programming quantum gates with suitable gate times and interaction strengths, although fully programmable quantum hardware on a large scale is still an active research area in hardware implementation <cit.>. The factor ∑_k'|k'⟩⟨k'|_QPU⊗|k'⟩⟨k'|_KEY[k] in the exponent causes this phase to be produced only if the state of the QPU register matches the key stored in the KEY[k]. Explicitly,
U_D^k ( |b⟩⊗|k'⟩_QPU⊗|𝐗⟩_MEM) = e^-i (-1)^bx̃_k δ_k,k'( |b⟩⊗|k'⟩_QPU⊗|𝐗⟩_MEM), where b ∈{0,1}.
Then
∏_k U_D^k|Ψ⟩ = ( 1/√(K)∑_k e^-i x̃_k |0⟩ + e^i x̃_k|1⟩/√(2)⊗|k⟩_QPU) ⊗|𝐗⟩_MEM.
The encoded data state is then realized by projecting the ancilla qubit on to |-⟩:
⟨-|∏_k U_D^k|Ψ⟩ ∝∑_ksinx̃_k |k⟩_QPU⊗|𝐗⟩_MEM
≈|ψ_D⟩_QPU⊗|𝐗⟩_MEM
since sinx̃_k ≈x̃_k ≈ x_k for a standardized data table.
The unitary U_D = ∏_k U_D^k is rather complex with terms involving many-qubit Pauli operators. Here we show that we can take advantage of nonlocal Mølmer-Sørensen gates which are available in current cold-ion technology <cit.> and an active research area in Rydberg-atom platforms <cit.>. We first expand the key agreement operator in terms of Pauli strings:
∑_k'|k'⟩⟨k'|_QPU⊗|k'⟩⟨k'|_KEY[k] = ∏_i=1^N_K1 + Z_QPU[i]⊗ Z_KEY[k][i]/2
= 2^-N_k∑_P_QPU,P_KEY[k]∈{ I,Z }^⊗ N_K P_QPU⊗ P_KEY[k]
As a result, U_D^k can be written as
U_D^k = ∏_P_QPU,P_KEY[k]∈{ I,Z }^⊗ N_K∏_j=1^N_P e^-i 2^-N_kΔθ_j Z_A ⊗ Z_VAL[k][j]⊗ P_QPU⊗ P_KEY[k]
Notice that each factor is a multiqubit Pauli rotation, where the operator in the exponent is a product of Pauli Z operators operating digitally on selected qubits. Each such rotation can be realized by the sequences of nonlocal Mølmer-Sørensen gates in conjunction
with one-qubit ancilla gate on selected qubits. A basic MS operation generates a global set of pairwise interactions, while ancilla qubits in conjunction with MS operations can be used to generate interactions for Pauli strings on arbitrary numbers of qubits as needed (see Appendix D). The time complexity for the MS gates U_D^k is O(N_P 2^N_K).
In total, the time complexity for the state preparation for each key
scales as N_P 2^N_K≈ N_P LM. In total, there are LM keys, so the time complexity would go as at least (LM)^2. For typical digital local gates, the run time complexity would be increased by a logarithmic factor of log_2(LM) ≈ N_K.
§.§.§ State preparation
To prepare a quantum state |ψ_D⟩ containing the classical data X in a binary encoding scheme, we consider the scheme in <cit.> in which the real data is first digitized and programmed into a computational basis state of quantum memory register. This need only be done once upfront. Subsequently, each time a copy of the quantum data state |ψ_D⟩ is needed, it is prepared by a fixed, efficient circuit that coherently applies phases stored in the quantum memory register to a quantum processing unit (QPU) register, without destroying the data in the memory register. The quantum information in the memory register can also be reinitialized when quantum coherence time is surpassed without changing the quantum algorithm in QPU.
We assign the index k=0,…,K-1 as the entries of a data table, where K = L(M+1). Each data element x_k is digitized as x̃_k and stored in a separate qubit subregister (Fig. <ref>):
|x̃_k⟩_VAL[k].
To digitize x_k, let a be an upper bound on the magnitude of the data: max_k |x_k| < a in which the magnitude of a is dependent on whether the data table is standardized and globally normalized or not before encoding. Then x_k ∈ [-a,a] is approximated using N_P bits of precision as
x_k ≈x̃_k = a ( 2^-1 (-1)^x_k,1 + 2^-2 (-1)^x_k,2 + ⋯ + 2^-N_P (-1)^x_k,N_P)
where x_k,1,…,x_k,N_P∈{0,1}. x̃_k is then stored in the memory as
|x̃_k⟩_VAL[k] = |x_k,1⟩|x_k,2⟩⋯|x_k,N_P⟩.
The full state of the memory register is
|𝐗⟩_MEM = ⊗_k=0^K-1|x̃_k⟩_VAL[k].
The total number of qubits for the memory register is K N_P ≈ LM log_2 (ϵ^-1) where ϵ = 2^-N_p is the precision of each data element.
While this number of qubits is linear in the size of the data table, as noted above these qubits need only be kept in a classical digital state. Furthermore, the number of qubits can be reduced by using batching as previously discussed.
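For illustration, the sign-binary expansion of a single data element and its reconstruction can be emulated classically; the greedy bit extraction below is one possible way to obtain the bits, and the helper names are ours.

```python
import numpy as np

def digitize(x, a, n_p):
    # Bits b_j with x ~ a * sum_j 2^{-j} (-1)^{b_j}; uses (-1)^b = 1 - 2b.
    t = ((1.0 - 2.0 ** (-n_p)) - x / a) / 2.0
    bits = []
    for j in range(1, n_p + 1):
        b = 1 if t >= 2.0 ** (-j) else 0
        bits.append(b)
        t -= b * 2.0 ** (-j)
    return bits

def dequantize(bits, a):
    return a * sum(2.0 ** (-(j + 1)) * (-1) ** b for j, b in enumerate(bits))

x, a, n_p = 0.37, 1.0, 8
bits = digitize(x, a, n_p)
print(bits, dequantize(bits, a))   # reconstructs x up to the 2^{-n_p} quantization error
```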
Once the data has been stored in the memory register, a fixed circuit uses the memory register coherently to impart the discrete data to amplitudes of the superposition state |ψ_D⟩ on the QPU register. We introduce an ancilla qubit in the state |+⟩ and an N_K qubit QPU register in a uniform superposition of all the binary encoded keys, yielding the state
|Ψ⟩ = |+⟩⊗( 1/√(K)∑_k|k⟩_QPU) ⊗|𝐗⟩_MEM,
in which the state |k⟩_QPU is the shorthand for the encoded key |l⟩ |m⟩ for the data location in the table. Note that the QPU register size N_K = ⌈log_2 (L) ⌉ + ⌈log_2 (M+1) ⌉≈log_2 (LM) is much smaller than that of the (classical) memory register, which reduces opportunities for hardware errors.
The essential step is a unitary which transfers the digitized classical data |x̃_k⟩ to the phase of the ancilla qubit when the key in the QPU register is k <cit.>:
U^k_D = exp( -i Z_A ⊗|k⟩⟨k|_QPU⊗θ̂_k)
The operator θ̂_k is given by
θ̂_k = ∑_j=1^N_PΔθ_j Z_k,j
in which Z^k_j is the operator on the j-th qubit on the value register |VAL⟩[k]. When applied to the memory register, θ̂_k evaluates to the N_P-bit approximation to x_k:
θ̂_k|𝐗⟩_MEM = x̃_k |𝐗⟩_MEM.
Notice that the phase Δθ_j = a 2^-j is predetermined and can be realized by programming quantum gates with suitable gate times and interaction strengths, although fully programmable quantum hardware on a large scale is still an active research area in hardware implementation <cit.>.
The factor |k⟩⟨k|_QPU in the exponent causes this phase to be produced only if the state of the QPU register matches the key k. Explicitly,
U_D^k ( |b⟩_A ⊗|k'⟩_QPU⊗|𝐗⟩_MEM) = e^-i (-1)^bx̃_k δ_k,k'( |b⟩_A ⊗|k'⟩_QPU⊗|𝐗⟩_MEM)
where b ∈{0,1}. Then
∏_k U_D^k|Ψ⟩ = ( 1/√(K)∑_k e^-i x̃_k |0⟩_A + e^i x̃_k|1⟩_A /√(2)⊗|k⟩_QPU) ⊗|𝐗⟩_MEM.
The encoded data state is then realized by projecting the ancilla qubit onto |-⟩:
⟨-|_A ∏_k U_D^k|Ψ⟩ ∝∑_ksinx̃_k |k⟩_QPU⊗|𝐗⟩_MEM
≈|ψ_D⟩_QPU⊗|𝐗⟩_MEM
since sinx̃_k ≈x̃_k ≈ x_k for a standardized data table.
The unitary U_D = ∏_k U_D^k is rather complex with terms involving many-qubit Pauli operators. Here we show that we can take advantage of nonlocal Mølmer-Sørensen gates which are available in current cold-ion technology <cit.> and an active research area in Rydberg-atom platforms <cit.>. We first expand the key selection operator in terms of Pauli strings and the binary encoding of k as k = k_N_K⋯ k_2 k_1:
|k⟩⟨k|_QPU = ∏_i=1^N_K1 + (-1)^k_iZ^QPU_i/2
= 2^-N_K∑_P ∈{ I,Z }^⊗ N_K (-1)^p(P) P_QPU,
Here p(P) ∈{0,1} is the parity of those bits of k that correspond to factors of Z in P.
As a result, U_D^k can be written as U_D^k = ∏_j=1^N_P U_D^k,j where
U_D^k,j = ∏_P ∈{ I,Z }^⊗ N_K
e^-i 2^-N_K (-1)^p(P)Δθ_j Z_A ⊗ P_QPU⊗ Z_k,j.
Notice that each factor in U_D^k,j is a multiqubit Pauli rotation, where the operator in the exponent is a product of Pauli Z operators operating on selected qubits.
It will soon be possible to implement such rotations efficiently in fully programmable cold ion or cold atom qubit architectures. As discussed in <cit.> and Appendix C of this paper, a many-qubit rotation can be realized by a short (length O(1)) sequence of nonlocal Mølmer-Sørensen gates in conjunction
with one-qubit ancilla gates on selected qubits. A basic MS operation generates a global set of pairwise interactions, while ancilla qubits in conjunction with MS operations generate interactions for Pauli strings on as many qubits as needed. We point out that this is an example of rarely-discussed digital-analog quantum computation <cit.>.
In any case, the time complexity to implement U_D^k using such an approach is O(N_P 2^N_K) ≈ N_P LM.
There are LM keys, so the overall time complexity for state preparation ∏_kU_D^k goes as LM N_P 2^N_K≈ N_P(LM)^2.
In comparison, the cost of implementing U_D^k,j with digital local gates is greater. By the discussion of Hamiltonian simulation on page 210 of <cit.>, the time complexity to implement a multiqubit rotation using local digital gates is roughly proportional to the number of qubits involved. Thus the time complexity to implement U_D^k,j using local digital gates is on the order of ∑_n=1^N_K n C^N_K_n ≈ N_K 2^N_K with
the overall time complexity ∏_k,JU_D^k,j estimated as LM N_P N_K 2^N_K. This is greater than the time complexity of the suggested global MS implementation by a factor of N_K ≈log_2 (LM).
To spell this out for the digital local-gate implementation of Eq. (<ref>): the overall product ∏_k∏_j=1^N_P contributes the factor LM N_P, and each factor requires the local decomposition of the nonlocal Pauli tensor-product unitary ∏_P_QPU∈{ I,Z }^⊗ N_K e^-i 2^-N_K (-1)^p(P)Δθ_j Z_A ⊗ Z_k,j⊗ P_QPU via Hamiltonian simulation (page 210 of <cit.>).
The time complexity of each decomposed unitary is approximately the length of the Pauli tensor product P_QPU, so accounting for all combinations of Pauli Z operations in P_QPU gives
∑_n=1^N_K n C^N_K_n ≈ N_K 2^N_K.
In comparison with the digital-analog gates based on global MS gates, the time complexity of ∏_k U_D^k with digital local gates is therefore larger by the factor N_K ≈log_2(LM), which grows only logarithmically with the data size.
§.§.§ Quantum regression map
To impart the regression coefficients into the data state |ψ_D⟩ the memory register is not needed; the coefficients are imparted by the unitary U_C^m, Eq. (<ref>), acting on the QPU and ancilla register:
U_C^m = e^iϕ_m Z_A ⊗1⊗|m⟩⟨m|.
U_C^m is analogous to U_D^k but with two main differences. First, whereas U_D^k selects a specific data element k=(l,m), U_C^m selects only the column m, and performs identically on each row (l) of the data table. The second difference is that whereas the phase imparted by U_D^k is encoded digitally in the quantum VAL register, the phase appearing in U_C^m is a simple scalar determined by the regression coefficient.
U_C^m can be implemented using the same strategy as U_D^k. First, recall that the column index m is represented in binary as m = m_N_M⋯ m_2 m_1 where N_M = ⌈log_2 (M+1) ⌉. Then
|m⟩⟨m| = ⊗_j=1^N_M|m_j⟩⟨m_j| = ∏_j=1^N_M1 + (-1)^m_j Z_j/2
where Z_j is the operator on j-th qubit in the |m⟩ subregister within the QPU register. Upon expanding the product U_C^m may be written as
U_C^m = ∏_P ∈{ I,Z }^⊗ N_M e^+i 2^-N_Mϕ_m (-1)^p(P) Z_A⊗ 1⊗ P_QPU,
where this time p(P) is the parity of those bits of m that correspond to factors of Z in P. Again, these multiqubit rotations can be implemented either as a multi-qubit-controlled gate or using multi-qubit Mølmer-Sørensen gates as discussed above.
The time complexity for the feature mapping in quantum regression
scales like 2^N_M× (M+1) ≈ M^2.
In Table <ref>, we summarize the full algorithm before measurement.
§.§.§ Measurement
In the binary encoding, the measurement operator M̂, Eq. (<ref>) takes a particularly simple form:
M̂ = ∑_l=0^L-1|l⟩⟨l|∑_m,m'=1^M|m⟩⟨m'|.
Using the binary expansion of |l⟩ we have ∑_l |l⟩⟨l| = I^⊗ N_L. Similarly, ∑_m,m'|m⟩⟨m'| = (2 |+⟩⟨+|)^⊗ N_M = (I+X)^⊗ N_M. Thus
M̂ = 2^N_M I^⊗ N_L⊗ (|+⟩⟨+|)^⊗ N_M
= I^⊗ N_L⊗ (I+X)^⊗ N_M
Take a 2-by-4 data table for example. The |k⟩ states for l=0 are |0⟩|00⟩, |0⟩|01⟩, |0⟩|10⟩, and |0⟩|11⟩; the states for l=1 are analogous. In this case
M̂ = I ⊗ (I+X) ⊗ (I+X)
= I ⊗ I ⊗ I + I ⊗ I ⊗ X + I ⊗ X ⊗ I + I ⊗ X ⊗ X.
Let |Ψ_0⟩ = ψ_000|0⟩|00⟩ + ⋯ + ψ_111|1⟩|11⟩ denote the state of the QPU register just prior to measurement. Then
⟨M̂⟩ =
|ψ_000 + ψ_001 + ψ_010 + ψ_011 |^2 +
|ψ_100 + ψ_101 + ψ_110 + ψ_111 |^2.
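For the 2-by-4 example this is easy to verify numerically; the short NumPy sketch below (our own check) compares the operator expectation value with the two squared row sums:

import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0., 1.], [1., 0.]])

# M = I (x) (I+X) (x) (I+X) for the 2-by-4 example (one row qubit, two column qubits)
M = reduce(np.kron, [I, I + X, I + X])

rng = np.random.default_rng(0)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)   # amplitudes psi_{l m2 m1}
psi /= np.linalg.norm(psi)

expval = np.real(psi.conj() @ M @ psi)

# expected: |sum of the four amplitudes with l=0|^2 + |sum with l=1|^2
rows = psi.reshape(2, 4)
direct = np.sum(np.abs(rows.sum(axis=1))**2)
print(np.allclose(expval, direct))                   # True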
In Appendix C, we discuss hardware implementation and resources for gate operation for interested readers.
§ NUMERICAL RESULTS
Conventionally, to draw reliable interpretations from a trained regression model,
we need to characterize the uncertainty of each regression parameter in order to justify the relevance of the corresponding predictor variable in explaining the data. Motivated by the bootstrap aggregation (bagging) underlying the success of the random forest algorithm, we can build a regression model from bootstrap samples and compute the average and the standard errors (SEs) of the predicted regression coefficients by repeatedly drawing the same number of data records from the original master (data) population (see Ref. <cit.> for advanced statistical concepts and practical numerical analysis).
Since the qubits in near-term quantum hardware are too noisy to handle a big data set at once, a plausible solution is to train the regression model on smaller bootstrap samples (bagging) drawn from subsets of the training data with the same circuit, and to gather the final bootstrap statistics of the regression parameters by averaging the measurement results returned by the quantum hardware. In this way, traditional statistical modeling approaches from data science carry over to hybrid quantum machine learning, as
illustrated in the following numerical demonstration; the generalization and extension to other quantum machine learning models remain to be explored.
Here we show the promise of quantum-encoded data that can be processed on well-connected quantum hardware, providing an alternative hybrid quantum solution for quantum machine learning applications. For the proposed variational quantum regression (VQR), we adopt a robust strategy that uses a global optimization search algorithm to find the optimal regression coefficients, avoiding the measurement overhead of gradient-based approaches.
The best estimate is found by using sub-optimal, lower-accuracy solutions for the regression coefficients as a new ansatz initialization for the next round of the global Nelder-Mead (NM) optimization, until a final solution converged to high accuracy is found. For the numerical demonstration, we adopt the NM optimization algorithm from SciPy (an open-source Python library for scientific and technical computing) to validate the batch learning strategy with the analytically known cost function in Eq. (17); for big-data applications with bootstrap samples, one could instead use the NM optimizers provided by TensorFlow or PySpark to save time with high-performance computation.
In the following numerical results, the tuning variables of the cost function are the cosines of the phase angles rather than the phase angles themselves; the search for the optimal solutions is more effective in these new variables because the search by the NM optimizer is then unconstrained.
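As a purely classical illustration of this batch-learning strategy, the following sketch runs SciPy's Nelder-Mead optimizer on a least-squares surrogate of the interference cost, using the cosines c_m = cos ϕ_m as tuning variables and fixing cos ϕ_y = -1 (the |cos ϕ_y| close to one regime discussed in Appendix B). The exact form of Eq. (17) is assumed here, so this is a sketch rather than a reproduction of our implementation:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
L, M = 200, 3
X = rng.uniform(-1, 1, size=(L, M))
y = X @ np.array([1.0, 2.0, 3.0])

# column standardization, as assumed for the quantum amplitude encoding
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

# tuning variables are the cosines c_m = cos(phi_m); with cos(phi_y) fixed to -1
# the learned weights are simply W_m = c_m
def cost(c):
    return np.sum((Xs @ c - ys)**2)      # destructive-interference cost, vanishes for a perfect model

res = minimize(cost, x0=np.zeros(M), method='Nelder-Mead',
               options={'xatol': 1e-9, 'fatol': 1e-12, 'maxiter': 20000})
print(res.x)    # ~ (0.27, 0.53, 0.80): the standardized weights w_m * std(X_m)/std(y)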
§.§ Ensemble model training
Machine learning with an ensemble model can be useful for statistical modeling and scales to large data sets.
We trained an ensemble model from N_b sets of bootstrap samples of various sizes. The best model is determined by the ensemble weight vector W, calculated from the feature weights W=(W_1, W_2, ..., W_M) estimated on the bootstrap samples, that is, W_i = N_b^-1∑_b = 0^N_b-1W_i^b, in which W_i^b is the weight learned from batch b for feature i; the standard errors (SEs) of the weights from model training are denoted by δW_i.
To validate the ensemble learning, we generate synthetic, standardized classical data sets with a deterministic linear map, plus small randomness, between the M features X_j = (X_j,1,X_j,2,...,X_j, M) and the target variable Y_j∈ R.
Specifically, the ideal (noiseless) linear map is given by Y_j = X_jiW_i, where X_ji is the L-by-M data matrix and
W = (W_1,W_2,...,W_M) is the ideal weight vector of size M after data standardization.
Each feature follows the uniform random distribution between [-1,1] so as to cover the feature space.
The response column Y_j is generated by the linear map Y_j = X_jiW_i with random weights {W_i, i = 1,2,..., M} whose population means are W=(W_1=1.0,W_2=2.0,W_3=3.0,...,W_M=float(M)) and whose standard deviation about the mean W_i is δ (the same for each feature).
We therefore expect the model training to be more uncertain for the first few features, because of their smaller signal-to-noise ratio W_i/δW_i, i = 1,2,..., M, for an equal number of bootstrap samples with different sample sizes.
Notice that the learned weights W_i and the SEs δW_i carry a tilde in the tables to distinguish them from the mean weight W_i and the standard deviation δ used in the data generation.
A bootstrap sample is a data set drawn with replacement from the original master population.
For our numerical demo, the master population data set has L = 1024 data records/rows, and we drew N_b = 1024 bootstrap samples, each with a much smaller chosen sample size. The regression weight vector is learned by the proposed regression algorithm from the 1024 bootstrap re-samples with the respective sample sizes of 10, 20, 40, 60, 100, and 150 records. The zero bias term is guaranteed to be negligible because the data standardization subtracts the sample mean.
Including the additional column for the response variable in the quantum encoding, we can classically emulate the quantum regression training with
13 qubits (1024 rows × 7 columns, addressed by 10 + 3 = 13 qubits) without padding additional zeros.
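The following sketch illustrates the bootstrap bookkeeping classically, with an ordinary least-squares fit standing in for one run of the quantum regression on each bootstrap sample; the data-generation details (per-record noisy weights with spread δ) are a simplified stand-in for the procedure described above:

import numpy as np

rng = np.random.default_rng(7)
L, M, n_boot, batch = 1024, 6, 1024, 60
W_true = np.arange(1.0, M + 1)                        # population-mean weights (1, 2, ..., 6)
delta = 0.1                                           # spread of the weights, as in the noisy case

X = rng.uniform(-1, 1, size=(L, M))
W_noisy = W_true + delta * rng.normal(size=(L, M))    # record-wise noisy weights
Y = np.sum(X * W_noisy, axis=1)

W_hat = np.empty((n_boot, M))
for b in range(n_boot):
    idx = rng.integers(0, L, size=batch)              # bootstrap sample drawn with replacement
    Xb, Yb = X[idx], Y[idx]
    # stand-in for one run of the variational quantum regression on this batch
    W_hat[b] = np.linalg.lstsq(Xb, Yb, rcond=None)[0]

W_bar = W_hat.mean(axis=0)                            # ensemble (bagged) weights
SE = W_hat.std(axis=0, ddof=1)                        # standard errors over the bootstrap ensemble
t = W_bar / SE                                        # t-statistics used for feature relevance
print(np.round(W_bar, 2), np.round(t, 1))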
Due to the variance of the bootstrap samples, the trained weight vectors fluctuate among these samples.
To establish our baseline errors from sampling and training, we show the ideal case with six features where the training data has no noise δ = 0 to see if we can emulate the learning.
As shown in Table <ref>, the training reproduces the theoretical values for the synthetic data generated with the ideal weight vector W = (1.0, 2.0, 3.0, 4.0, 5.0, 6.0), and we observe that the bootstrap-sampled learning reproduces these values for various batch sizes.
SEs of the weight vector δW = (δW_1, δW_2,....,δW_6) stay small and almost unchanged for different batch sizes as shown in Table <ref>. With the SEs in weight staying more or less constant, we expect a much larger t for the features with higher weights.
If we look at the t-statistics metrics defined by the ratio
t_i = W_i/δW_i as shown in Table <ref>, we observe these values are much greater than one representing
the statistical significance of the learned results.
This shows that bootstrap sampling analysis is a valuable tool and generalizable in practice beyond Gaussian noise hypothesis <cit.>, typically imposed in traditional statistical analysis.
To further confirm the practicality of the training approach with noise present in the map between features X_i and the response variable Y_i, we go through the simulation with the noise level δ = 0.1. For this case, we observe the
deviation of the learned weight vectors away from the ideal case without noise. With small sample sizes 10, 20, and 40, the sample mean weights can deviate from the theoretical weight vector more than what is indicated by the noise δ = 0.1 in Table <ref>.
This is because the sample variance is more pronounced at smaller batch sizes, as indicated by the noiseless case δ = 0 in Table <ref>.
For the larger sample sizes 60, 100, and 150, we observe that the mean weight vector from training mostly reproduces what is expected for the noise level
δ = 0.1. The SEs of the weight vectors for the noisy cases are shown in Table <ref>. When the learned weight vectors deviate significantly from the theoretical values, we observe a correspondingly larger SE for the weight vector. This correlation gives us guidance on how reliable the learned weight vectors are.
For example, for the first feature W_1 with the sample size 20, we see a large deviation from the theoretical value 1± 0.1.
We also observe a larger deviation in its SE: δW_1 at the sample size 20.
In Table <ref>, we observe that the overall t values are lower than in the noiseless case δ = 0, due to the presence of non-sampling noise. In addition, we can identify that the overall t values are largest for the batch size 150. This indicates that we
can use bootstrap sampling with an optimal sample size of 150.
For larger batch sizes greater than 150 (not shown), we start to observe the deviation from what we expect from theoretical values for the weight vector W.
This is due to the fact that the ensemble training from the resample data sets is under-fitting due to higher chances of duplicated data records in each sample, leading to the training bias.
Even though the bias hinders us from drawing quantitative inferences from the data,
it does not prevent us from selecting important features based on the t-statistics
t_i = W_i/δW_i shown in Table <ref>, and it can be avoided with a smaller bootstrap sample size. Note that the same behavior occurs when there is no noise, δ = 0 (Table <ref>), but at a larger batch size > 150, not shown in Table <ref>.
§.§ Feature importance and regularization
In machine learning, we may have a potentially large list of features that can be used to describe the mapping between the response variable and the input variables. Regularization provides an algorithmic way to quickly select an optimal subset of the original features before carrying out a more detailed bootstrap sampling analysis of the finalized features. Regularization
penalizes models in which many features carry important weights, which avoids over-fitting the noise present in the training data (relevant for noisy hardware) and keeps dependent predictors from entering the built models.
Since the regularization is done outside the quantum loop, this is a valid strategy for hybrid quantum machine learning. Here we demonstrate that the optimal feature selection can be enabled by turning on regularization in the cost function.
To establish the baseline, we generate synthetic data without Gaussian noise (δ_0 = 0), where the response variable Y=sin(x) depends on the independent real variable x to infinite order and the x values are distributed randomly between [-1,1]. The sine series is infinite for any real x but can be truncated to a finite series when x is restricted to [-1,1], which is the case for our normalized features. For regularization, we test both L1 regularization and L2 regularization.
We found that L1 regularization works robustly with the NM optimization algorithm in this case.
In the following demonstration, we show that nonlinear features can be built into the feature set first, so that the linear regression
algorithm can be used to build a nonlinear regression model.
We generate synthetic data with a controlled mapping between the predictors (features) X = (X_1 = x, X_2 = x^2, X_3 = x^3, ..., X_15 = x^15) and the target (response) variable Y ∈ R, Y = sin(x) = ∑_n=1^∞ (-1)^n+1 x^2n-1/(2n-1)!. A population of
2^5 records is generated, where the underlying variable x (and hence each feature X_n) is drawn uniformly between [-1,1]. With L1 regularization, we use the very small
regularization parameter α = 1.2 × 10^-7 and alternating signs for the initial weight ansatz, (W_1, W_3, W_5, ..., W_15) = (+1, -1, +1, ..., -1).
Our hybrid algorithm converges to the optimal weight parameter W ≈ W, which selects the first few odd terms as the important features.
Notice that the optimization may end up with a much smaller cost function but with the wrong signs.
This confirms the common experience that constraints from domain knowledge are typically required: classical optimization algorithms can only act as a filter for possible solutions, and even a mathematical global minimum may not be reasonable for the domain of application.
What we found is that L1 regularization works with properly chosen regularization parameters α; the number of vanishing weights in the converged weight vector then effectively enables the feature selection.
Taking the best learned weights to produce the predicted value Y for x ranging over [-1, 1], as shown in Fig. <ref>, we reproduce what is expected from the synthetic training data.
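A classical sketch of this L1-regularized feature-selection experiment is given below; the NM optimizer, the regularization strength α = 1.2 × 10^-7 and the alternating-sign ansatz follow the text, while the plain least-squares residual stands in for the quantum cost, so this is an illustration rather than our exact pipeline:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
L, M = 32, 15
x = rng.uniform(-1, 1, size=L)
X = np.column_stack([x**n for n in range(1, M + 1)])   # features x, x^2, ..., x^15
y = np.sin(x)                                          # noiseless response

alpha = 1.2e-7                                         # L1 regularization strength from the text
def cost(w):
    return np.sum((X @ w - y)**2) + alpha * np.sum(np.abs(w))

# initial ansatz: (W_1, W_3, ..., W_15) = (+1, -1, +1, ..., -1), even powers start at 0
w0 = np.array([(1 if n % 2 == 1 else 0) * (-1)**((n - 1) // 2) for n in range(1, M + 1)], float)

res = minimize(cost, w0, method='Nelder-Mead',
               options={'maxiter': 200000, 'maxfev': 200000, 'xatol': 1e-10})
print("cost:", cost(res.x))
print("weights:", np.round(res.x, 3))   # with a suitable alpha and ansatz, the large weights
                                        # concentrate on the low odd powers (sine-series pattern)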
§ CONCLUSION
To conclude, our work significantly bridges the gap between the quantum algorithm and the regression modeling in quantum machine learning through the explicit interpretation of relevant model parameters from the physical parameters of qubit resources used for structural encoding.
We have prescribed a hybrid VQA for quantum regression modeling and discussed its time complexity. The proposal is particularly suited for quantum platforms with well-connected qubits as resources, such as cold atoms and ions. In addition, we elucidate the role the regression coefficients (weights) play in the model: they are connected to ratios of cosines of the phase angles of the amplitude-encoded states assigned to the features and the response variable in the original tableau. For well-known linear inversion problems in optimal physical modeling, our hybrid quantum algorithm can also be applied if proper normalization is accounted for with care.
For near-term hardware with limited quality qubits, we investigate an alternative ensemble training solution and provide numerical evidence for the feasibility of the strategy. We show the t statistics to characterize the reliability of the learned weight coefficient versus its standard error. For the training tasks with enormous features to select for model training, we suggest that L1 regularization can be a valid tool external to the variational quantum state ansatz when it is too expressive.
To enable the realization of our algorithm in hardware, we recommend the simple one-hot amplitude encoding for small training data sets for its less stringent technical requirement for hardware. In conjunction with ensemble learning, the hybrid machine-learning solution can scale in time.
For less noisy and well-connected computing hardware, compressed amplitude encoding should be more advantageous, in step with the very rapid development of cold-atom platforms.
As we have learned in the algorithm design, it appears more advantageous to have a quantum algorithm designed with digital local and global gates present in NISQ hardware for improved time complexity and reduced qubit resources beneficial for noise mitigation. Our research indicates the potential quantum advantage with digital-analog gate models.
§ ACKNOWLEDGMENTS
We thank Phil Lotshaw for constructive feedback on the manuscript. C.-C. Joseph Wang and Ryan Bennink acknowledges the support by the DOE Office of Science, Office of ASCR, under FWP No. ERKJ354.
Quantum Algorithm for Data Fitting N. Wiebe, D. Braun, and S. Lloyd, Phys. Rev. Lett., 109:050505, 2012.
Prediction by linear regression on a quantum computer M. Schuld, I. Sinayskiy, and F. Petruccione, Phys. Rev. A., 94:0222342, 2016.
Fast quantum algorithms for least squares regression and statistic leverage scores Y. Liu and S. Zhang, Theoretical Computer Science, 657:38-47, 2017.
Somma Somma, Rolando D. and Subaşı, Yiğğit, PRX Quantum, 2:010315, 2021.
Paine Paine, Annie E. and Elfving, Vincent E. and Kyriienko, Oleksandr, Phys. Rev. A, 107:032428, 2023.
NISQ J. Preskill, Quantum, 2:79, 2018.
Variational quantum algorithms Cerezo, M., Arrasmith, A., Babbush, R. et al., Nat. Rev. Phys, 3:625–644, 2021.
A quantum approximate optimization algorithm E. Farhi, J. Goldstone, and S. Gutmann. A quantum approximate optimization algorithm. arXiv preprint quant-ph/1411.4028, 2014.
An adaptive variational algorithm for exact molecular simulations on a quantum computer Grimsley, H.R., Economou, S.E., Barnes, E. et al. An adaptive variational algorithm for exact molecular simulations on a quantum computer. Nat. Commun., 10:3007, 2019.
Hybrid Quantum-Classical Algorithms and Quantum Error Mitigation Suguru Endo, Zhenyu Cai, Simon C. Benjamin, and Xiao Yuan, Hybrid Quantum-Classical Algorithms and Quantum Error Mitigation, J. Phys. Soc. Jpn, 87:023002, 2018.
Variational Circuit Compiler for Quantum Error Correction Xiaosi Xu, Simon C. Benjamin, and Xiao Yuan, Variational Circuit Compiler for Quantum Error Correction, Phys. Rev. Applied, 15:034068, 2021.
Universal variational quantum computation Jacob Biamonte, Phys. Rev. A, 103:L030401, 2021.
Quantum Algorithms Is NP-Hard Lennart Bittel and Martin Kliesch, Phys. Rev. Lett., 127:120502, 2021.
Quantum state preparation protocol for encoding classical data into the amplitudes of a quantum information processing register's wave function S. Ashhab, Phys. Rev. Research, 4:013091, 2022.
W state
Cruz, D., Fournier, R., Gremion, F., Jeannerot, A., Komagata, K., Tosic, T., Thies-brummel, J., Chan, C.L., Macris, N., Dupertuis, M.-A. and Javerzac-Galy, C., Adv.Quantum Technol., 2:1900015, 2019.
TensorFlow Quantum: Impacts of Quantum State Preparation on Quantum Machine Learning Performance D. Sierra-Sosa, M. Telahun and A. Elmaghraby, "TensorFlow Quantum: Impacts of Quantum State Preparation on Quantum Machine Learning Performance," in IEEE Access, vol. 8, pp. 215246-215255, 2020, DOI: 10.1109/ACCESS.2020.3040798.
Numerical Recipes Numerical Recipes: The Art of Scientific Computing, 3rd Edition, 2007, ISBN 0-521-88068-8 (C++ code).
Martin
Anupam Mitra, Michael J. Martin, Grant W. Biedermann, Alberto M. Marino, Pablo M. Poggi, and Ivan H. Deutsch
Phys. Rev. A, 101:030301(R), 2020.
Martin2
Michael J. Martin, Yuan-Yu Jau, Jongmin Lee, Anupam Mitra, Ivan H. Deutsch, Grant W. Biedermann, arxiv preprint
quant-ph/2111.14677, 2021.
Small Programmable Cold Ion S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Nature, 536:63-66, 2016.
Peter Zoller2
Barreiro J, Müller M, Schindler P, Nigg D, Monz T, Chwalla M, Hennrich M, Roos C F, Zoller P and Blatt R, Nature, 470:486 2011.
Peter Zoller
M Müller, K Hammerer, Y L Zhou, C F Roos , and P Zoller, New Journal of Physics, 13:085007, 2011.
Rydberg1 Graham, T.M., Song, Y., Scott, J. et al. Multi-qubit entanglement and algorithms on a neutral-atom quantum computer. Nature, 604:457–462, 2022.
Rydberg2 Bluvstein, D., Levine, H., Semeghini, G. et al. , Nature, 604:451–456, 2022.
book Nielsen, M. A., and Chuang, I. L. (2011). Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press.
Statistical Learning
Peter Bruce and Andrew Bruce, Practical Statistics for Data Scientists, O'Reilly Media, Inc., First Edition, 2017.
Eugene
Tasio Gonzalez-Raya, Rodrigo Asensio-Perea, Ana Martin, Lucas C. Céleri, Mikel Sanz, Pavel Lougovski, and Eugene F. Dumitrescu, PRX Quantum, 2:020328, 2021
§ PROOF OF COST FUNCTION FROM MEASUREMENT
|Ψ_0⟩ = 1/√(2)∑_l',m' x_l'm'cosϕ_m'|l'm'⟩,
M̂ = ∑_l”∑_m”, m”'|l”m”⟩⟨ l”m”'|,
M̂|Ψ_0⟩ = 1/√(2)∑_l',m'∑_l”∑_m”, m”' x_l'm'cosϕ_m'⟨ l”m”'|l'm'⟩ |l”m”⟩.
Using the orthogonality relation
⟨ l”m”'|l'm'⟩ = δ_l”,l'δ_m”',m',
this becomes
M̂|Ψ_0⟩ = 1/√(2)∑_l”∑_m”,m”' x_l”m”'cosϕ_m”'|l”m”⟩,
so that
⟨Ψ_0| M̂ |Ψ_0⟩ = 1/2∑_l',m'∑_l”∑_m”,m”' x_l'm' x_l”m”'cosϕ_m'cosϕ_m”'⟨ l'm'|l”m”⟩
= 1/2∑_l”∑_m”,m”' x_l”m” x_l”m”'cosϕ_m”cosϕ_m”'
= 1/2∑_l(∑_m x_lmcosϕ_m)^2,
Q.E.D.
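The identity derived above is also easy to verify numerically; the following NumPy sketch (our own check) builds |Ψ_0⟩ and M̂ for a small random table and compares both sides:

import numpy as np

rng = np.random.default_rng(5)
L, M = 4, 3
x = rng.uniform(-1, 1, size=(L, M))          # table entries x_{lm} (response column included)
phi = rng.uniform(0, 2 * np.pi, size=M)      # one phase angle per column

# |Psi_0> = (1/sqrt(2)) sum_{l,m} x_{lm} cos(phi_m) |l m>, unnormalized as in Appendix A
Psi = (x * np.cos(phi)).reshape(-1) / np.sqrt(2)

# M = sum_l sum_{m,m'} |l m><l m'| acts as an all-ones block within each row l
Mhat = np.kron(np.eye(L), np.ones((M, M)))

lhs = Psi @ Mhat @ Psi
rhs = 0.5 * np.sum((x @ np.cos(phi))**2)     # (1/2) sum_l ( sum_m x_{lm} cos(phi_m) )^2
print(np.allclose(lhs, rhs))                 # True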
§ MEASUREMENT FEASIBILITY
For a perfectly trained model with low-noise data and well-chosen independent features in Eq. (16),
we expect a vanishing measurement result Pr_perfect from perfect destructive interference, since the predicted response ŷ_l, l∈{0, 1, 2,..., L-1}, agrees with the actual response y_l; this distinguishes it from models that are not well trained.
For a poor model, the learned weights W_m vanish.
If the response variable is standardized with subtraction from its mean value, we expect the standardized bias term is zero and we expect a finite outcome from Eq. (16) only contributed from the standardized response data y^S_l as
Pr_Poor∝∑_l(y_l^Scosϕ_y)^2.
For the worst model, we expect the signs of the weights to be all wrong, and
a much larger probability outcome is expected:
Pr_Worst∝∑_l(2 y_l^Scosϕ_y)^2.
We can define the goodness-of-model metric G_M for the trained model as G_M≡ 1- Pr_M/ Pr_Poor, so that
G_M = 1 for a perfect model, G_M = 0 for a poor model, and G_M = -3 for the worst model.
In the worst circumstance, where the best estimates of the weights have the wrong signs,
the goodness of the worst model G_Worst approaches the value -3. This occurs when the optimizer
is not set up correctly and finds maxima instead of minima, or when the sign of the rotational angle for the response variable is not handled correctly.
For any meaningful training result, the goodness metric should lie in the range G_M ∈ (0,1].
Notice that the model metric G_M is independent of any normalization convention in the algorithm.
To decide whether the measurement outcome is feasible, we can estimate what is required for the measured probability
Pr_Poor to be resolved in laboratory experiments.
In terms of the standardized response variable y_l^S after global normalization,
the probability Pr_Poor
can be expressed as
Pr_Poor = 1/R^2∑_l(y_l^Scosϕ_y)^2,
where R^2 is the global normalization factor after state preparation.
In terms of typical column-standardized classical data,
the global norm R^2 is given by
R^2 = ∑_l (y_l^S)^2 + ∑_l,m (x_lm^S)^2
= σ^2_y^S + ∑_mσ^2_x_m^S,
in which the number of rows is L, the variance of the input variable x_m^S in column m is σ^2_x_m^S, and the variance of the response variable y is σ^2_y^S.
Finally, we arrive at the following expression for the probability Pr_Poor:
Pr_Poor = cos^2ϕ_y σ^2_y^S/(σ^2_y^S+∑_mσ^2_x_m^S)
= cos^2ϕ_y 1/(1 + F),
in which the total variance of all M features is given by ∑_mσ_x_m^S^2, the variance
of the response variable is given by
σ_y^S^2, and the relative variance ratio factor F between all features and the response is defined by F ≡∑_mσ_x_m^S^2/σ^2_y^S.
To maximize the observable success probability, we expect to work in the parameter regime where |cosϕ_y| is close to one, i.e. with the rotational angle ϕ_y (mod 2π) close to 0 or π. The relative variance ratio factor F scales with the number of encoded features M.
Therefore, we expect the success probability Pr_Poor to scale inversely as the number of features M.
For a perfect model built, there should be a destructive interference leading to zero observable probability Pr_M.
To be distinguishable from Pr_Poor within a probability measurement resolution ϵ≪ 1, which is about a few percent, the number of features cannot be too large; that is, the number of features M is upper bounded by 1/ϵ (M < ϵ^-1), which is our measurability criterion.
For a classical data set with K data entries, K ≈ LM, the criterion becomes K/L < ϵ^-1.
Therefore, a tall table (L≫ M) with a limited number of features can be measured, which further strengthens the importance of feature selection in quantum machine learning.
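A back-of-the-envelope illustration of this criterion (our own numbers, assuming |cos ϕ_y| ≈ 1 and F ≈ M for standardized columns of comparable variance) is:

# Pr_Poor ~ 1/(1+M); requiring Pr_Poor to exceed a measurement resolution eps
# of a few percent bounds the number of encoded features M by roughly 1/eps
eps = 0.03
for M in (10, 30, 100):
    Pr_poor = 1.0 / (1.0 + M)
    print(M, round(Pr_poor, 3), Pr_poor > eps)
# M = 10 and 30 remain resolvable at eps = 3%, M = 100 does not, i.e. M < 1/eps.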
§ HARDWARE IMPLEMENTATION
In cold-ion hardware, the native gates include arbitrary one-qubit Pauli rotational gates and the two-qubit gate XX <cit.>. The controlled phase (CPH) gate and the controlled NOT (CNOT) gates can be realized using the XX gate in conjunction with 1-qubit Pauli rotations. The H gate can be decomposed as R_X(π) R_Y(π/2) on this platform as well. (Note that arbitrary 1-qubit gates plus any entangling 2-qubit gate constitute a universal set, so each of these platforms implements a universal set.)
In the Rydberg-atom platform <cit.>, any rotation in the Bloch sphere can be implemented and the native two-qubit gate is ZZ. The CNOT gate can also be decomposed in this platform as (I⊗ H) C_Z ( I⊗ H)
where the controlled Z gate C_Z, which is a special case for the controlled phase gate, is enabled by Rydberg states. The CPH gate can be decomposed in principle in terms of the CNOT gate with a one-qubit rotational gate <cit.>.
The specifics of CPH gates vary with encoding schemes.
For the one-hot amplitude encoding, the factorization of controlled two-body Pauli rotations U_C^j along an axis is required.
Typically, this can be achieved in a preferred Pauli Z axis in a particular platform up to
a single qubit rotation from a native axis to the Z axis. For example, the native axis for cold ion would be Pauli X
and the native axis for the Rydberg atom will be Pauli Z.
The multi-qubit controlled phase gate can be implemented for the native Pauli Z axis as
U_C^(m)=∏_jexp(-iϕ_mZ_A-I_A/2⊗Z_j-I_j/2).
Equivalently, it can also be decomposed locally as
U_C^(m)=∏_je^-iϕ_m/4 Z_A⊗ Z_j
e^+iϕ_m/4 Z_A⊗ I_j
e^+iϕ_m/4 I_A⊗ Z_j
e^-iϕ_m/4 I_A⊗ I_j,
in which the last unitary exponential factor is the idler unitary operator, which is state independent and can be dropped. By digital 1-local and 2-local gate operation, the time complexity for each feature is of O(LM). With M features, the time complexity will be of O(LM^2). For digital globally-addressed analogue gate operation on each feature,
the time complexity is greatly reduced to be O(1) and scales as O(M) with globally addressed analog gates. With fully-programmable global analogue gate operation with all features and the involved commutative Pauli operators,
∏_m U_C^(m) can be fused into a global unitary, and therefore the time complexity can be minimized to
O(1). This illustrates the relevance of the computing paradigm to the time complexity.
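The local decomposition above is straightforward to check numerically; the following sketch (our own check) compares the controlled-phase unitary with the product of commuting ZZ and single-Z rotations, reinstating the dropped idler phase for the comparison:

import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0]); I = np.eye(2)
phi = 0.7

# controlled-phase generator: a phase phi is applied only when both qubits are in |1>
H_cp = np.kron((Z - I) / 2, (Z - I) / 2)
U_direct = expm(-1j * phi * H_cp)

# local decomposition into commuting ZZ, Z(x)I and I(x)Z rotations (idler factor dropped)
U_dec = (expm(-1j * phi / 4 * np.kron(Z, Z)) @
         expm(+1j * phi / 4 * np.kron(Z, I)) @
         expm(+1j * phi / 4 * np.kron(I, Z)))
U_dec = U_dec * np.exp(-1j * phi / 4)        # reinstate the state-independent idler phase

print(np.allclose(U_direct, U_dec))          # True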
For the native X axis, we need to apply Pauli Y rotation R_Y_A, j(-π/2) to each physical qubit state in U_C(ϕ_m) as
U_C^(m) = ∏_j R_Y_A(+π/2) R_Y_j(+π/2) e^-iϕ_m/4 X_A⊗ X_jR_Y_j(-π/2)R_Y_A(-π/2)
⊗ R_Y_A(+π/2) e^+iϕ_m/4 X_A⊗ I_j R_Y_A(-π/2)
⊗ e^+iϕ_m/4 I_A⊗ X_j ,
which completes the digital, local decomposition.
For the controlled phase gate for state preparation, the gate needs to be applied to each qubit reserved for the data registry.
For the compressed binary encoder, the controlled phase gates are more complicated to implement, and we suggest applying a global entanglement gate, the Mølmer-Sørensen (MS) gate with an ancilla qubit, to achieve the quantum logic gate <cit.>.
The MS gate unitary operator in trapped cold ions is typically expressed as
U_MS(θ_MS,ϕ)=exp(-iθ_MS/4(cos(ϕ)S_X+sin(ϕ)S_Y)^2),
in which S_X=∑_i X_i and S_Y=∑_i Y_i are the collective Pauli operators.
With the help of an ancilla qubit, a Pauli-string operation along the Pauli axis X or Y can be enabled by choosing
the value and the sign of the phase ϕ to be 0 or π:
for ϕ = 0 one obtains U_MS^X(θ_MS,ϕ=0), and for ϕ = π one obtains U_MS^Y(θ_MS,ϕ=π).
The angle θ_MS tunes the strength
of the rotation of the MS gate; an exhaustive implementation is listed in Table 1 of the reference. Notice that any discrepancy between the algorithm we develop and the preferred native axis of a given platform can easily be adjusted by a global rotation without much difficulty. The same comments as for the one-hot encoder hold for the state preparation with controlled phase gates based on the MS gate. For Rydberg atoms, research on MS gates for two and multiple atoms is still in its infancy <cit.>.
For well-connected qubits, the multi-controlled phase gates can in principle be implemented with only one ancilla, especially when the connectivity is close to infinitely long-ranged. The practicality of the implementation is partly limited by the range over which the phase gates can be applied uniformly across physical qubits.
To add fine programmable capability, individual and segmented digital addressing combined with the global entangling MS gate and local gates is
possible.
For example, for cold ions in a one-dimensional linear Paul trap and a two-dimensional Penning trap,
the ions are mostly uniformly distributed, with long-ranged Ising interactions, at the center of the trap. Therefore, it is advantageous to select ions away from the edges of the trap as the data register to reduce sophisticated waveform engineering. Due to this controlled-scalability limitation, we anticipate that machine learning will currently be limited to a certain number of qubits, which limits the amount of training data that can be encoded for learning.
|
http://arxiv.org/abs/2307.00905v1
|
20230703100741
|
Four-band tight-binding model of TiSiCO-family monolayers
|
[
"Chaoxi Cui",
"Yilin Han",
"Ting-Ting Zhang",
"Zhi-Ming Yu",
"Yugui Yao"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
Centre for Quantum Physics, Key Laboratory of Advanced Optoelectronic
Quantum Architecture and Measurement (MOE), School of Physics, Beijing
Institute of Technology, Beijing 100081, China
Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems,
School of Physics, Beijing Institute of Technology, Beijing 100081,
China
Centre for Quantum Physics, Key Laboratory of Advanced Optoelectronic
Quantum Architecture and Measurement (MOE), School of Physics, Beijing
Institute of Technology, Beijing 100081, China
Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, School of Physics, Beijing Institute of Technology, Beijing 100081,
China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
[email protected]
Centre for Quantum Physics, Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing
Institute of Technology, Beijing 100081, China
Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, School of Physics, Beijing Institute of Technology, Beijing 100081, China
Centre for Quantum Physics, Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing
Institute of Technology, Beijing 100081, China
Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, School of Physics, Beijing Institute of Technology, Beijing 100081, China
The TiSiCO-family monolayers have recently been attracting significant attention due to their unique valley-layer coupling (VLC).
In this work, we present a minimal, four-band tight-binding (TB) model to capture the low-energy physics of the TiSiCO-family monolayers X_2YCO_2
(X= Ti, Zr, Hf; Y= Si, Ge) with strong VLC.
These monolayers comprise two X atom layers separated by approximately 4 Å in the out-of-plane direction.
Around each valley (X or X'), the conduction and valence bands are mainly dominated by the A_1{d_z^2(x^2-y^2)} and B_2{d_yz} orbitals of the top X atoms, and the A_1{d_z^2(x^2-y^2)} and B_1{d_xz} orbitals of the bottom X atoms.
Using these four states as a basis, we construct a symmetry-allowed TB model.
Through parameter fitting from first-principles calculations, the four-band TB model not only reproduces the electronic band structure, but also captures the strong VLC, high-order topology, and valley-contrasting linear dichroism of the monolayers.
Furthermore, the TB model reveals that these monolayers may exhibit various intriguing topological phases under electric fields and biaxial strains.
Hence, the TB model established here can serve as the starting point for future research exploring the physics related to VLC and the X_2YCO_2 monolayers.
Four-band tight-binding model of TiSiCO-family monolayers
Yugui Yao
=========================================================
§ INTRODUCTION
Valleytronics materials, which are characterized by the presence of multiple symmetry-connected energy extremal points in the low-energy
bands, have been a focus of research in condensed matter physics <cit.>.
The concept of valleytronics works in both three and two dimensions. However, owing to the flexibility and controllability available in two dimensions, it is the discovery of two-dimensional (2D) valleytronics materials
like graphene and transition metal dichalcogenides (TMDs) that has led to the rapid growth of the field of valleytronics <cit.>. The 2D valleytronics
materials are particularly attractive for both fundamental studies
and the development of application devices <cit.>.
Recently, the TiSiCO-family monolayers X_2YCO_2 (X=Ti, Zr, Hf; Y=Si, Ge) have been proposed as a novel class of 2D valleytronics materials <cit.>.
Similar to the graphene and TMDs, both the conduction and valence bands of these monolayers exhibit two valleys located at two high-symmetry
points of the Brillouin zone (BZ), namely, X and X' (Y) points. However, the two valleys in monolayer X_2YCO_2 are time-reversal T invariant points and are connected by the spatial operators S_4z and C_2,110, which is completely different from that in the graphene and TMDs <cit.>. As a result, the valley polarization in monolayer X_2YCO_2 can be realized by the methods that do not break T symmetry.
Particularly, due to the strong VLC (the conduction and valence electrons in different valleys have strong but opposite layer polarization), a gate electric field is an intuitive and efficient way to generate valley polarization in monolayer X_2YCO_2.
This electric control of valley polarization is highly desirable for the applications.
In addition to static control, dynamical generation of valley polarization also can be realized in these monolayers, as they exhibit valley-contrasting linear
dichroism <cit.>.
Furthermore, it has been predicted that the monolayer X_2YCO_2 is not a normal semiconductor but a second-order topological insulator <cit.>.
Therefore, monolayer X_2YCO_2 will be of broad interest to multiple fields, including valleytronics, 2D materials, optoelectronics, and higher-order topology.
In our previous work, an effective two-band k· p model was developed based on invariant theory <cit.>, where the spin-orbit coupling (SOC) effect is not included due to negligible SOC in the low-energy bands of the monolayer X_2YCO_2. The effective model clearly demonstrates the coupling between valley and layer degrees of freedom, and can be used to describe the optical properties of the monolayer X_2YCO_2. However, it is insufficient to capture the higher-order topology of systems and the physics away from the two valleys.
In this work, we present a minimal lattice model for the monolayer X_2YCO_2 without SOC effect. The TB model is constructed by the d orbitals of X atoms, i.e. the A_1{d_z^2(x^2-y^2)} and B_2{d_yz} orbitals of the top X atoms, and the A_1{d_z^2(x^2-y^2)}
and B_1{d_xz} orbitals of the bottom X atoms. This effective model contains four bands: two valence bands and two conduction bands. All parameters in the model are obtained by fitting the electronic bands from the first-principles calculations.
We demonstrate that our four-band TB model can effectively describe the low-energy physics of the monolayer X_2YCO_2, including strong VLC, optical properties, and higher-order topology. Furthermore, the TB model suggests that the monolayer X_2YCO_2 may undergo multiple phase transitions under external fields.
This paper is organized as follows. In Sec. <ref>, we introduce
the processes that lead to our four-band lattice model. In Sec. <ref>,
the optical and topological properties of the effective lattice model
are studied. We investigate possible phase transitions of the model
under external field in Sec. <ref>. Conclusions are given
in Sec. <ref>.
§ THE MINIMAL LATTICE MODEL
The monolayer X_2YCO_2 belongs to layer group (LG) NO. 59 or space group (SG) No. 115 with D_2d point-group symmetry.
The crystalline structure of the monolayer X_2YCO_2 is shown in Fig. <ref>(a) and the electronic band of the monolayer Ti_2SiCO_2 is plotted in Fig. <ref>(c-d). Figure <ref>(b) shows the Brillouin zone (BZ) with the high-symmetry points being labeled. Here, the position of the X point is (π,0) and that of X' point is (0,π).
Notice that there exists an alternative notation <cit.> where the positions of X and X' points are interchanged, which is adopted in our previous work <cit.>.
The X, Y, C, and O atoms are located at 2g, 1b, 1a and 2g Wyckoff positions, respectively.
Around the Fermi level, the electronic bands of these monolayers mainly consist of certain d orbitals of the two X atoms, while the contribution of the other orbitals, i.e., the remaining d orbitals of the X atoms and the s and p orbitals of all other atoms, is negligible, as shown in Fig. <ref>(c-d) and the appendix (Fig. <ref>). Specifically, the low-energy bands at the X (X') valley are dominated by the d_yz (d_xz) orbitals of the top (bottom) X atoms, and the d_z^2 and d_x^2-y^2 orbitals of both the top and bottom X atoms [see Fig. <ref>(c)].
From this band analysis, one knows that the valley states, including both conduction and valence valley states, have strong layer polarization, and the layer polarizations of the X and X' valleys are opposite, leading to a strong VLC effect. The VLC effect is protected by the S_4z and C_2,110 symmetries.
The band representation of the conduction (valence) band edge at X valley is calculated as X_1 (X_3) <cit.>, from which the band representation of the band edges at X' valley can be inferred.
The site symmetry group of the X atoms (Wyckoff position 2g in SG No. 115) is C_2v. For spinless systems, the d orbitals in C_2v point-group symmetry would split into five non-degenerate energy levels: 2A_1+A_2+B_1+B_2, as listed in Table <ref>. Since the d_z^2 and d_x^2-y^2 orbitals share the same representation A_1 of the C_2v point group,
it is unnecessary to distinguish them in the band analysis. Therefore, for simplification, we only use the d_z^2 and d_yz orbitals of the top X atom and the d_z^2 and d_xz orbitals of the bottom X atom to construct a four-band model. It can be proved that the four-band model is the minimal one to capture the physics of the monolayer X_2YCO_2. First, since there are two X atoms in a unit cell, the band number of the lattice model must be even. Second, the band representations of the SG No. 115 (with T symmetry) from 2g Wyckoff position can be found in the BCS website <cit.>, and are rewritten in Table <ref>. From Table <ref>, one observes that a two-band lattice model based on the d orbitals of X atom must be a semimetal, as the two bands will be degenerate at Γ or M points. However, the monolayer X_2YCO_2 is a semiconductor rather than a semimetal. This contradiction indicates that the lattice model of the monolayer X_2YCO_2 should have (at least) four bands.
To construct the lattice model, we need to determine the matrix representations of the generators of the SG 115. The basis of the TB model here is {d_z^2^1,d_yz^1,d_z^2^2,d_xz^2}, where the superscript 1(2) denotes the top (bottom) X atom, located at (0,b/2,d/2) and (a/2,0,-d/2), respectively. Here, a=b is the lattice constant and d refers to the vertical distance between the two X atoms. The generators of the symmetry operators of the monolayer X_2YCO_2 are S_4z, M_y and 𝒯, and their matrix representations are obtained as:
S_4z=[[ 0 0 -1 0; 0 0 0 1; -1 0 0 0; 0 -1 0 0 ]], M_y=[[ 1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 0 0 1 ]],
and
𝒯=[[ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ]]𝒦,
where 𝒦 is complex conjugation operator. Then, the symmetry-allowed
TB Hamiltonian of the monolayer X_2YCO_2 is established
as
H = ([ H_ top H_ inter; H_ inter^† H_ bottom ]),
with H_ bottom(k_x,k_y)=H_ top(k_y,-k_x),
H_ top=Δσ_z+([ ∑_α=x,yt_1,αcos k_α it^'sin k_y; -it^'sin k_y ∑_α=x,yt_2,αcos k_α ]),
and
H_ inter=([ r_1cosk_x/2cosk_y/2 ir^'cosk_x/2sink_y/2; ir^'cosk_y/2sink_x/2 r_2sink_x/2sink_y/2 ]).
Here, σ_z is the z-component of the Pauli matrix, and all the parameters are real. The Hamiltonian (<ref>) contains the
nearest-neighbor (NN) intra-layer and NN inter-layer hoppings, resulting in 9 symmetry-allowed real parameters. We employ
the gradient descent method <cit.> to determine the parameters by comparing the electronic bands from the first-principles calculations with those from the TB model (<ref>) with appropriate initial parameters. The fitted bands of the monolayer X_2YCO_2 are illustrated in Fig. <ref> and the corresponding parameters are listed in Table <ref>.
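For readers who wish to experiment with the model, the following Python sketch assembles H(k) of Eq. (<ref>) and checks that the spectra at the X and X' points coincide; the parameters below are illustrative placeholders in arbitrary units, not the fitted values of Table <ref>:

import numpy as np

# illustrative (not fitted) parameters of the four-band model
Delta, t1x, t1y, t2x, t2y, tp = 1.0, -0.40, 0.30, 0.50, -0.20, 0.60
r1, r2, rp = 0.30, 0.20, 0.25

def H_top(kx, ky):
    return np.array([[ Delta + t1x*np.cos(kx) + t1y*np.cos(ky),  1j*tp*np.sin(ky)],
                     [-1j*tp*np.sin(ky), -Delta + t2x*np.cos(kx) + t2y*np.cos(ky)]])

def H_inter(kx, ky):
    return np.array([[r1*np.cos(kx/2)*np.cos(ky/2),  1j*rp*np.cos(kx/2)*np.sin(ky/2)],
                     [1j*rp*np.cos(ky/2)*np.sin(kx/2), r2*np.sin(kx/2)*np.sin(ky/2)]])

def H(kx, ky):
    # basis: (d_z2, d_yz) of the top X atom and (d_z2, d_xz) of the bottom X atom
    Ht, Hb, Hi = H_top(kx, ky), H_top(ky, -kx), H_inter(kx, ky)
    return np.block([[Ht, Hi], [Hi.conj().T, Hb]])

E_X  = np.linalg.eigvalsh(H(np.pi, 0.0))   # X  = (pi, 0)
E_Xp = np.linalg.eigvalsh(H(0.0, np.pi))   # X' = (0, pi)
print(np.allclose(E_X, E_Xp))              # True: the two valleys share the same spectrum, as required by S_4z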
As studied in Ref. <cit.>, one of the most intriguing properties of the monolayer X_2YCO_2 is the strong VLC effect. To demonstrate that our TB model can capture the VLC effect, we present the layer polarization of the valley states of the TB model in Fig. <ref>. The layer polarization is defined as <cit.>
P_n(k) = ∫_z>0|ψ_nk|^2dr-∫_z<0|ψ_nk|^2dr,
with ψ_nk representing the eigenstate of n-th Bloch band
and k denoting the wave vector. Here, the z=0 plane is set on the middle Y/C atom layer. The layer polarization P_n(k)
indicates the polarization of ψ_nk between the
top (z>0) and bottom (z<0) layers. From Fig. <ref>, strong valley-contrasted layer polarization can be observed for both conduction and valence bands, reproducing the VLC effect in the monolayer X_2YCO_2.
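In the TB approximation the first two basis orbitals belong to the top X atom and the last two to the bottom one, so P_n(k) reduces to a weight difference of the eigenvector components; a minimal sketch (reusing H(k) from the previous snippet) is:

import numpy as np

def layer_polarization(evec):
    # evec: 4-component eigenvector in the basis (top d_z2, top d_yz, bottom d_z2, bottom d_xz)
    w = np.abs(evec)**2
    return (w[0] + w[1]) - (w[2] + w[3])

# example usage with H(k) from the previous sketch:
# vals, vecs = np.linalg.eigh(H(np.pi, 0.0))
# print([layer_polarization(vecs[:, n]) for n in range(4)])   # strongly layer polarized at the X valley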
§ OPTICAL AND TOPOLOGICAL PROPERTIES
In this section, we show that the four-band TB model can describe the optical and topological properties of the monolayer X_2YCO_2. TMDs are known to exhibit valley-contrasting circular dichroism in optical interband absorption <cit.>. However, due to the difference in symmetry, the X (X') valleys in monolayer X_2YCO_2
exclusively couple to x-linearly (y-linearly) polarized light <cit.> rather than the circularly polarized light. Consequently, the monolayer X_2YCO_2
features valley-contrasting linear dichroism. The k-resolved
linear polarization degree of the optical interband absorption between valence
and conduction bands is characterized by
η(k) =|M_x|^2-|M_y|^2/|M_x|^2+|M_y|^2
where M_i=⟨ψ_c k|∂ H/∂ k_i| ψ_v k⟩ is the coupling strength between valence and conduction band with the optical field linearly polarized in the i-th direction. η(k) indicates the normalized absorption difference between x and y-linearly polarized light. The η(k) calculated from our TB model is shown in Fig. <ref>(b). We find that η(X)=1 and η(X')=-1, indicating an opposite linear dichroism around the X and X' valleys. This is consistent with the results in Ref. <cit.>.
Moreover, since the four high-symmetry lines, Γ-X, Γ-X', M-X, and M-X' have mirror symmetry (M_x or M_y), the electronic states on them must exclusively couple to a linearly polarized light whose polarization direction is either parallel or perpendicular to the mirror.
This property also can be found in our calculations, where η( k)=± 1 for the four high-symmetry lines [see Fig. <ref>(b)].
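The degree of linear polarization η(k) can be evaluated directly from the TB model by finite differences of H(k); in the sketch below (reusing H(k) from the band-structure snippet) the two middle bands are taken as the valence and conduction bands:

import numpy as np

def eta(Hfun, kx, ky, dk=1e-5):
    # degree of linear polarization of the v -> c interband matrix elements at k
    vals, vecs = np.linalg.eigh(Hfun(kx, ky))
    v, c = vecs[:, 1], vecs[:, 2]                      # highest valence, lowest conduction band
    dHx = (Hfun(kx + dk, ky) - Hfun(kx - dk, ky)) / (2 * dk)
    dHy = (Hfun(kx, ky + dk) - Hfun(kx, ky - dk)) / (2 * dk)
    Mx, My = c.conj() @ dHx @ v, c.conj() @ dHy @ v
    return (abs(Mx)**2 - abs(My)**2) / (abs(Mx)**2 + abs(My)**2)

# at the two valleys this evaluates to +1 and -1 (opposite linear dichroism);
# the assignment eta(X) = +1 corresponds to the fitted parameters of the text
# print(eta(H, np.pi, 0.0), eta(H, 0.0, np.pi))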
Our four-band TB model also reproduces the topological properties of the monolayer X_2YCO_2, which is predicted to be a second-order topological insulator <cit.>. The topological properties of our TB model can be directly diagnosed using the theory of topological quantum chemistry (TQC) <cit.>.
As shown in Fig. <ref>, the irreducible representations of the valence bands at all the high-symmetry points are calculated as Γ_1⊕Γ_2+X_3⊕ X_3+M_3⊕ M_4, which are induced by the d_x^2-y^2 and d_z^2 orbitals located at the 1b Wyckoff position [see Fig. <ref>(b)]. However, in the four-band TB model, the 1b Wyckoff position is empty. This indicates that the four-band TB model established here must be nontrivial.
According to the classification of higher-order topology <cit.>, it is a second-order topological insulator (SOTI).
A characteristic of the SOTI is the presence of corner states at specific corners of a SOTI nanodisk. Here, based on the TB model, we calculate the spectrum of a nanodisk with 15×15 unit cells whose edges are along the [110] and [1̄10] directions. The results are plotted in Fig. <ref>(c-d), where four degenerate corner states in the gap of the bulk states can be clearly observed. The
degeneracy of these four corner states is protected by the S_4z
symmetry of the system.
Similar to the low-energy bands in bulk, the four corner states also have strong layer polarization.
Again, due to the S_4z symmetry, the layer polarizations of the corner states on the x and y axes are opposite, consistent with previous work <cit.>.
§ PHASE TRANSITIONS
In addition to reproducing the low-energy physics of monolayer
X_2YCO_2, the four-band TB model is itself physically interesting, and can host many topological phases under external fields. One can expect that these topological phases may be realized in the monolayer X_2YCO_2 under suitable conditions.
Owing to the strong VLC effect, the most convenient way to control the bands of the TB model is to apply a gate electric field normal to the plane of the system, as it produces opposite electrostatic potentials for the top and bottom atoms. Approximately, the effect of the gate
electric field can be incorporated in the TB model (<ref>) by introducing an on-site energy term,
H_E=α E([ 1 0 0 0; 0 1 0 0; 0 0 -1 0; 0 0 0 -1 ]),
where E is the electric field and α is a real parameter depending on the material details, such as the separation of the top and bottom X atoms, the layer polarization of the valley states, and the screening effect. The values of α for the different monolayers X_2YCO_2 are listed in Table <ref>, extracted from the first-principles calculations [see Appendix <ref>]. When E is finite, both the S_4z and C_2,110 symmetries of the system are broken, rendering the two valleys X and X' non-equivalent. As E increases (assuming E>0), the band gap at the X valley decreases while that at the X' valley becomes larger. At a critical value E=E_c, the conduction and valence bands touch at the X valley, forming a semi-Dirac point <cit.>.
Interestingly, the semi-Dirac point exhibits a linear dispersion along the k_x direction but a quadratic dispersion along the k_y direction [see Fig. <ref>(a)], and can be considered as a critical point where two conventional Dirac points merge.
With a continuous increase of the gate field, the semi-Dirac point splits into two conventional Dirac points located on the M-X path, as illustrated in Fig. <ref>(c). Since both Dirac points reside around the X valley, we term this phase a valley-polarized topological Dirac semimetal (V-DSM).
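A simple way to see this numerically is to add the on-site term of Eq. (<ref>) to the TB sketch above and track the gaps at the two valleys as E grows; the value of α below is a placeholder, not a fitted value from Table <ref>:

import numpy as np

# gate-field term of the TB model: opposite on-site shifts for the top and bottom orbitals
def H_E(E, alpha=0.1):
    return alpha * E * np.diag([1.0, 1.0, -1.0, -1.0])

# with H(k) from the band-structure sketch, scan the field and watch the two valley gaps:
# for E in np.linspace(0.0, 20.0, 81):
#     gapX  = np.diff(np.linalg.eigvalsh(H(np.pi, 0.0) + H_E(E)))[1]
#     gapXp = np.diff(np.linalg.eigvalsh(H(0.0, np.pi) + H_E(E)))[1]
# one valley gap closes at a critical field E_c (semi-Dirac point) while the other widens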
The four-band TB model can also be tuned by symmetry-preserving perturbations like biaxial strain, which changes the value of the parameters in the original Hamiltonian (<ref>).
Consider a symmetry-preserving perturbation
H_δ=δ(cosk_x-cosk_y)([ 1 0 0 0; 0 1 0 0; 0 0 -1 0; 0 0 0 -1 ]),
corresponding to the situation in which the parameters t_i,α (i=1,2 and α=x,y) are changed by δ, while the other parameters remain unchanged.
This perturbation changes the band gap at both valleys. In particular, the band gap of the system decreases when δ<0 and closes at a critical value δ=δ_c; in this case there exist two semi-Dirac points residing at the X and X' valleys, respectively.
When δ<δ_c, the two semi-Dirac points become four symmetry-connected conventional Dirac points, and the system becomes a topological Dirac semimetal (DSM), as shown in Fig. <ref>(c). The phase diagram of the TB model under these two perturbations E and δ is plotted in Fig. <ref>(d).
§ CONCLUSION
In this work, we construct a four-band TB model for the TiSiCO-family monolayers (X_2YCO_2) based on the d-orbitals of the two X atoms.
Via the theory of band representation, we show this four-band model is the minimal one that can capture the low-energy physics of the TiSiCO-family monolayers.
Our TB model includes both inter-layer and intra-layer NN hoppings, and the hopping parameters are fitted to first-principle calculations by gradient descent method.
Consequently, our model accurately reproduces the energy dispersion and the layer polarization of the bands around X and X' valleys.
Our model can also describe the valley-contrasted linear dichroism of the monolayer X_2YCO_2.
Furthermore, we demonstrate that the TB model is a SOTI, and exhibits topological corner states in its nanodisk.
These results are consistent with those obtained from first-principles calculations.
We then investigate the possible phase transitions of the TB model under different perturbations. Under a gate field and biaxial strain, the TB model is transformed from a SOTI to two distinct phases: valley-polarized topological Dirac semimetal and conventional topological Dirac semimetal.
Therefore, our TB model not only effectively describes the low-energy properties of monolayer X_2YCO_2, which greatly simplifies further studies of monolayer X_2YCO_2 materials, but can also be used to study the interplay between valley physics and higher-order topology.
The authors thank J. Xun for helpful discussions.
This work was supported by the National Key R&D Program of China (Grant No. 2020YFA0308800), the NSF of China (Grants Nos. 12004035, 12061131002 and 12234003), and the National Natural Science Fund for Excellent Young Scientists Fund Program (Overseas).
§ BAND ANALYSIS OF THE TISICO-FAMILY MONOLAYERS
Figure <ref> shows the electronic band structures of the monolayers X_2YCO_2 without SOC. The projection of the electronic bands
onto atomic orbitals is also presented.
One observes that they have similar features as those discussed in the main text for monolayer Ti_2SiCO_2.
§ GATE FIELD CONTROL OF VALLEY STATES IN TISICO-FAMILY MONOLAYERS
Due to VLC, a gate-field control of the valley polarization can be realized in the monolayer X_2YCO_2.
The electronic bands of the monolayer X_2YCO_2 under a gate field of 0.05 eV/Å are shown in Fig. <ref>, from which the coefficient α in Eq. (<ref>) can be obtained.
|
http://arxiv.org/abs/2307.01575v1
|
20230704090436
|
Continuous-time mean field Markov decision models
|
[
"Nicole Bäuerle",
"Sebastian Höfer"
] |
math.PR
|
[
"math.PR",
"math.OC",
"90C40, 60J27"
] |
Continuous-time mean field Markov decision models
Nicole Bäuerle^*
^*Department of Mathematics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
[email protected]
Sebastian Höfer^*
^*Department of Mathematics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
[email protected]
We consider a finite number of N statistically equal individuals, each moving on a finite set of states according to a continuous-time Markov Decision Process. Transition intensities of the individuals and generated rewards depend not only on the state and action of the individual itself, but also on the states of the other individuals as well as the chosen action. Interactions like this are typical for a wide range of models in e.g. biology, epidemics, finance, social science and queueing systems among others. The aim is to maximize the expected discounted reward of the system, i.e. the individuals have to cooperate as a team. Computationally this is a difficult task when N is large. Thus, we consider the limit for N→∞. In contrast to other papers we do not consider the so-called Master equation. Instead we define a 'limiting' (deterministic) optimization problem from the limiting differential equation for the path trajectories. This has the advantage that we need less assumptions and can apply Pontryagin's maximum principle in order to construct asymptotically optimal strategies. We show how to apply our results using two examples: a machine replacement problem and a problem from epidemics. We also show that optimal feedback policies are not necessarily asymptotically optimal.
August 1, 2023
==================
Key words
Continuous-time Markov Decision Process, Mean Field Problem, Process limits, Pontryagin's maximum principle
§ INTRODUCTION
We consider a finite number of N statistically equal individuals, each moving on a finite set of states according to a continuous-time Markov Decision Process. Transition intensities of the individuals and generated rewards can be controlled and depend not only on the state and action of the individual itself, but also on the states of the other individuals. Interactions like this are typical for a wide range of models in e.g. biology, epidemics, finance, social science and queueing systems among others. The aim is to maximize the expected discounted reward of the system, i.e. the individuals have to cooperate as a team. This can be implemented by a central controller who is able to observe the whole system and assigns actions to the individuals. Though this system itself can be formulated as a continuous-time Markov Decision Process, the established solution procedures are not really practical since the state space of the system is complicated and of high cardinality. Thus, we consider the limit N→∞ when the number of individuals tends to infinity and analyze the connection between the limiting optimization problem which is a deterministic control problem and the N individuals problem.
Investigations like this are well-known under the name Mean-field approximation, because the mean dynamics of the individuals can be approximated by differential equations for a measure-valued state process. This is inspired by statistical mechanics and can be done for different classes of stochastic processes for the individuals. In our paper we restrict our investigation to continuous-time Markov chains (CTMC). Earlier, more practical studies in this spirit with CTMC, but without control are e.g. <cit.>
which consider illustrating examples to discuss how the mean-field method is used in different application areas. The convergence proof there is based on the law of large numbers for centred Poisson processes, see also <cit.>. <cit.> look at so-called reaction networks which are chemical systems involving multiple reactions and
chemical species. They take approximations of multiscale nature into account and show that 'slow' components can be approximated by a deterministic equation. <cit.> formulate some simple conditions under which a CTMC may be approximated by the solution to a differential equation, with quantifiable error probabilities. They give different applications. <cit.> explore models proposed for the analysis of BitTorrent P2P systems and provide the arguments to justify the passage from the stochastic process, under adequate scaling, to a fluid approximation driven by a differential equation. A more recent application is given in <cit.> where
a multi-type analogue of Kingman’s coalescent as a death chain is considered. The aim is to characterize the behaviour of the replicator coalescent as it is started from an initial population that is arbitrarily large. This leads to a differential equation called the replicator equation.
A related topic is fluid models. Fluid models have been introduced in queueing network theory since there is a close connection between the stability of the stochastic network and the corresponding fluid model, <cit.>. They appear under 'fluid scaling' where time in the CTMC for the stochastic queueing network is accelerated by a factor N and the state is compressed by a factor 1/N. Fluid models have also been used to approximate the optimal control in these networks, see e.g. <cit.>. In <cit.> different scales of time are treated for the approximation and some components may be replaced by differential equations. However, there is no mean-field interaction in any of these fluid models.
There are also investigations about controlled mean-field Markov decision processes and their limits in discrete time. An early paper is <cit.> where the mean-field limit for increasing number of individuals is considered in a model where only the central controller is allowed to choose one action. However, in order to get a continuous limit the authors have to interpolate and rescale the original discrete-time processes. This implies the necessity for assumptions on the transition probabilities. The authors show the convergence of the scaled value functions and derive asymptotically optimal strategies.
The two recent papers <cit.> discuss the convergence of value functions and asymptotically optimal policies in discrete time. Thus, the limit is also a controlled process in discrete time.
Another strand of literature considers continuous-time mean-field games on a finite number of states <cit.>. These papers among others consider the construction of asymptotically optimal Nash-equilibria from a limiting equation. The exception is <cit.> where it is shown that any solution of the limiting game can be approximated by ϵ_N-Nash equilibria in the N player game. However, all these papers deal with the convergence of the HJB equations which appear in the N player game to a limiting equation, called the Master equation (<cit.>), which is a deterministic PDE for the value function. This approach needs sufficient regularity of the value functions and many assumptions. <cit.> consider the problem with common noise and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations.
The contribution of our paper is first to establish and investigate the limit of the controlled continuous-time Markov decision processes, seen as a sequence of controlled stochastic processes. In contrast to previous literature our limiting problem is constructed from the differential equations for the limiting state processes and does not appear as a limit of optimality equations per se. This approach is in the spirit of fluid models, however, the limiting procedure is far more natural than in the other situations discussed before, because the scaling in time and space is already intrinsic in the model. Further, the proofs are technically quite different from the discrete-time situation, because in discrete time convergence proofs can be done inductively over the time stages whereas in continuous time the state trajectories as a whole have to be considered. Second, we are also able to construct an asymptotically optimal strategy for the N individuals model. Our model is very general, has only a few, easy to check assumptions and allows for various applications. The advantage of our limiting optimization problem is that we can apply Pontryagin's maximum principle easily which is often more practical than deterministic dynamic programming. Further, we show that an optimal feedback policy in the deterministic problem does not necessarily imply an asymptotically optimal policy for the N individuals problem. Third, we can consider finite and infinite time horizons at the same time. There is essentially no difference. We restrict the presentation mainly to the infinite time horizon.
Our paper is organized as follows: In the next section we introduce our N individuals continuous-time Markov decision process. The aim is to maximize the expected discounted reward of the system. In Section <ref> we introduce a measure-valued simplification which is due to the symmetry properties of the problem and which reduces the cardinality of the state space. The convergence proofs when the number of individuals tends to infinity can be found in Section <ref>. They are essentially based on martingale convergence arguments. In Section <ref> we construct a sequence of asymptotically optimal strategies from the limiting model for the N individuals model. We also show that different implementations may be possible. Finally, in Section <ref> we discuss three applications. The first one is a machine replacement problem with many machines, see e.g. <cit.>. The second one is the spreading of malware which is based on the classical SIR model for spreading infections, <cit.>. The last example shows that one has to be careful with feedback policies.
§ THE MULTI-AGENT CONTINUOUS-TIME MARKOV DECISION PROCESS
We consider a finite number of N statistically equal individuals, each moving on a finite set of states S according to a continuous-time Markov Decision Process. The vector 𝐱_t = (x_t^1,...,x_t^N)∈ S^N describes the state of the system at time t∈ [0,∞), where x_t^k is the state of individual k=1,…,N. The action space of one individual is a compact Borel set A. The action space of the system is accordingly A^N. We denote an action of the system by 𝐚 = (a^1,...,a^N)∈ A^N where a^k is the action chosen by individual k=1,…,N.
Let D(i)⊂ A be the set of actions available for an individual in state i∈ S which we again assume to be compact. Then the set of admissible actions for the system in state 𝐱∈ S^N is given by 𝐃(𝐱):= D(x^1)×…× D(x^N)⊂ A^N. The set of admissible state-action combinations for one individual is denoted by D:= {(i, a) ∈ S× A | a ∈ D(i)}, and for the whole system by 𝐃:= {(𝐱,𝐚)|𝐚∈𝐃(𝐱)}.
For the construction of the system state process we follow the notation of <cit.>. The state process of the system is defined on the measurable space (Ω,ℱ):= ((S^N×_+)^∞,ℬ((S^N×_+)^∞)). We denote an element of Ω by ω=(x_0,t_1,x_1,t_2,...). Now define
X_n :Ω→ S^N, X_n(ω) = x_n, n∈_0,
τ_n :Ω→_+, τ_n(ω) = t_n, n∈,
T_n := ∑_k=1^n τ_k, T_0 := 0.
The controlled state process of the system is then given by
X_t := ∑_n∈_01_{T_n≤ t< T_n+1}𝐗_n, t∈ [0,∞).
The construction of the process can be interpreted as follows: The random variables τ_n describe the sojourn times of the system in state 𝐗_n-1. Based on the sojourn times, T_n describes the time of the n-th jump of the process and 𝐗_n the state of the process on the
interval [T_n,T_n+1). By construction the continuous-time state process (𝐗_t) has piecewise constant càdlàg-paths and the embedded discrete-time process is (𝐗_n).
The system is controlled by policies. W.l.o.g. we restrict here to Markovian stationary policies. Further, we allow for randomized decisions, i.e. each individual can choose a probability distribution on A as its action. Hence a policy for the system is given by a collection of N stochastic kernels π(d𝐚|𝐱) = (π^k(da|𝐱))_k=1,...,N, where
π^k:S^N×ℬ(A) → [0,1], (𝐱,𝒜) ↦π^k(𝒜|𝐱) (kernel for agent k).
π^k(𝒜|𝐱) is the stochastic kernel (it is here considered as a relaxed control) with which agent k chooses an action, given the state 𝐱 of the system. Naturally, it should hold that the kernel is concentrated on admissible actions, i.e. π^k(D(x^k) |𝐱)= 1 for all individuals k=1,...,N.
The action process is thus defined by
π_t := ∑_n∈_01_{T_n< t≤ T_n+1}π(·|𝐗_n), t∈ [0,∞).
In contrast to the state process, the action process has piecewise constant càglàd-paths. This means that a new decision can only be taken after a change of state has already occurred. The general theory on continuous-time Markov decision processes states that the optimal policy can be found among the piecewise constant, deterministic, stationary policies. In particular, varying the action continuously on the interval [T_n,T_n+1) does not increase the value of the problem. Also randomization does not increase the value, but in view of the sections to come, we already allowed for randomization (relaxation) here.
To prepare the description of the transition mechanism in our model, we define the empirical distribution of the individuals over the states, i.e.
μ[𝐱] := 1/N∑_k=1^N δ_x_k.
where δ_x_k is the Dirac measure in point x_k. The transition intensities for one individual are given by a signed kernel
q:S× A×(S)×𝒫(S) →, (i,a,μ,Γ)↦ q(Γ| i,a,μ) = ∑_j∈Γ q({j}| i,a,μ).
Here (S) is the set of all probability distributions on S and 𝒫(S) is the power set of S.
Note that the transition of an individual depends not only on its own state and action, but also on the empirical distribution of all individuals over the states.
We make the following assumptions on q:
(Q1) q({j}|i,a,μ)≥ 0 for all i,j∈ S, j≠ i, a ∈ D(i), μ∈(S).
(Q2) ∑_j q({j}|i,a,μ)=0 for all i∈ S, a∈ D(i), μ∈(S).
(Q3) sup_i,a,j,μ |q({j}|i,a,μ)|=: q_max<∞.
(Q4) μ↦ q({j}|i,a,μ) is continuous w.r.t. weak convergence for all i,j∈ S, a∈ D(i).
(Q5) a ↦ q({j}|i,a,μ) is continuous for all i,j∈ S, μ∈(S).
Note that (Q3) follows from (Q4) and (Q5), but since it is important we list it here. Based on the transition intensities for one individual, the transition intensities of the system are given by
q({(x^1,…,x^k-1,j,x^k+1,… x^N)} |𝐱, 𝐚 ) := q({j}|x^k,a^k,μ[𝐱])
for all 𝐱, 𝐚∈𝐃(𝐱), j∈ S, j≠ x^k and
q({𝐱}| 𝐱, 𝐚) := ∑_k=1^N q({x^k}|x^k,a^k,μ[𝐱]).
All other intensities are =0. The intensity in (<ref>) describes the transition of individual k from state x^k∈ S to state j∈ S, while all other individuals stay in their current state. Since only one individual can change its state at a time, this definition is sufficient to describe the transition mechanism of the system.
Further we set (in a relaxed sense) for a decision rule π^k(da|𝐱)
q({(x^1,…,x^k-1,j,x^k+1,… x^N)} |𝐱, π ) = ∫_A q({j}|x^k,a,μ[𝐱])π^k(da|𝐱).
Note that in a certain sense there is abuse of notation here since we use the letter q both for the individual transition intensity and for the system transition intensity. It should always be clear from the context which one is meant.
The probability measure of the multi-agent process is now given by the following transition kernels
^π(τ_n≤ t, 𝐗_n ∈ B| 𝐗_n-1) =
∫_0^t q(B| 𝐗_n-1,π) e^s · q ({𝐗_n-1}| 𝐗_n-1,π)ds
for all t≥0 and B∈𝒫(S^N). In particular, the sojourn times τ_n are exponentially distributed with parameter -q ({𝐗_n-1}| 𝐗_n-1,π) respectively.
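As an illustration of this construction, the following Python sketch simulates the multi-agent process (𝐗_t) under a stationary (possibly randomized) policy by drawing, at every jump, an exponential sojourn time with the current total intensity and then one individual move according to the intensities above. The function names and signatures (q, policy) are placeholders that have to be supplied for a concrete model; this is a minimal Gillespie-type sketch, not an implementation taken from the cited literature.

```python
import numpy as np

def simulate_system(x0, S, q, policy, T, seed=0):
    """Simulate the N-individuals CTMC on [0, T] (minimal sketch).

    x0     : list with the initial state of each individual
    S      : list of individual states
    q      : q(j, i, a, mu) -> intensity of a move i -> j (j != i) for one
             individual using action a when the empirical distribution is mu
    policy : policy(k, x) -> action sampled for individual k given the system
             state x (a stationary, possibly randomized, policy)
    Returns the jump times and the visited system states.
    """
    rng = np.random.default_rng(seed)
    x, N = list(x0), len(x0)
    t, times, path = 0.0, [0.0], [tuple(x0)]
    while t < T:
        mu = {i: x.count(i) / N for i in S}           # empirical distribution
        a = [policy(k, x) for k in range(N)]          # actions, held until the next jump
        moves, rates = [], []
        for k in range(N):
            for j in S:
                if j != x[k]:
                    lam = q(j, x[k], a[k], mu)
                    if lam > 0:
                        moves.append((k, j)); rates.append(lam)
        total = sum(rates)
        if total == 0:                                # absorbing configuration
            break
        t += rng.exponential(1.0 / total)             # sojourn time
        if t >= T:
            break
        k, j = moves[rng.choice(len(moves), p=np.array(rates) / total)]
        x[k] = j                                      # exactly one individual moves
        times.append(t); path.append(tuple(x))
    return times, path
```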
Returning to the model's control mechanism, keep in mind that the policy of an individual π^k(da|𝐱) is allowed to depend on the state of the whole system, i.e. we assume that each individual has information about the position of all other individuals. Therefore, we can interpret our model as a centralized control problem, where all information is collected and shared by a central controller.
The goal of the central controller is to maximize the social reward of the system. In order to implement this, we introduce the (stationary) reward function for one individual as
r:D×(S)→, (i,a,μ) ↦ r(i,a,μ),
which does not only depend on state and action of the individual, but also on the empirical distribution of the system. We make the following assumptions on the reward function:
(R1) For all (i,a)∈ D the function μ↦μ(i) r(i,a,μ) is continuous w.r.t. weak convergence.
(R2) For all i∈ S and μ∈(S) the function a ↦ r(i,a,μ) is continuous.
Since the set of admissible actions D(i) is compact, (R1) and (R2) imply that the following expression is bounded:
sup_(i,a)∈ D, μ∈(S) |μ(i) r(i,a,μ)|<∞.
The (social) reward of the system is the average of the individuals' rewards
r(𝐱,𝐚):= 1/N∑_k=1^N r(x^k,a^k, μ[𝐱]),
or, in a relaxed sense for a decision rule π^k(da|𝐱)
r(𝐱,π):= 1/N∑_k=1^N ∫_A r(x^k,a, μ[𝐱]) π^k(da|𝐱).
The aim is now to find the social optimum, i.e. to maximize the joint expected discounted reward of the system over an infinite time horizon. For a policy π, a discount rate β>0 and an initial configuration 𝐱∈ S^N define the value function
V_π(𝐱) = _𝐱^π[ ∫_0^∞ e^-β t r(𝐗_t,π_t)dt]
V(𝐱) = sup_π V_π(𝐱).
We are not discussing solution procedures for this optimization problem here since we will simplify it in the next section and present asymptotically optimal solution methods in Section <ref>.
§ THE MEASURE-VALUED CONTINUOUS-TIME MARKOV DECISION PROCESS
As N gets larger, so does the state space S^N, which makes the model increasingly complex and impractical to solve. Therefore we seek some simplifications. An obvious approach, which is common for these kinds of models, is to exploit the symmetry of the system by capturing not the state of every single individual, but the relative or empirical distribution of the individuals across the | S| states.
Thus, let μ_t^N := μ[𝐗_t] and define as new state space the set of all distributions which are empirical measures of N atoms
_N(S) := {μ∈(S)|μ = μ[𝐱], for 𝐱∈ S^N}.
The new state process μ_t^N can then be written as
μ_t^N = ∑_n∈_01_{T_n ≤ t < T_n+1}μ[ 𝐗_n], t∈ [0,∞).
As action space take the | S|-fold Cartesian product (A)^| S| of (A). Hence, an action is given by | S| probability measures α(d𝐚) = (α^i(da))_i∈ S with α^i(D(i)) = 1. Hereby the i-th component indicates the distribution of the individuals' actions in state i∈ S. The set of admissible state-action combinations of the new model is given by D̂ := _N(S) ×(A)^| S |.
For the policies we restrict again to Markovian, stationary policies given by a collection of | S | stochastic kernels π̂(d 𝐚|μ)= (π̂^i(da|μ))_i∈ S, where
π̂^i:(S)×ℬ(A) → [0,1], (μ,𝒜) ↦π̂^i(𝒜|μ) (kernel for state i).
where π̂^i(D(i)|μ)=1.
In what follows we denote μ_n^N := μ[𝐗_n]. Then we can express the action process by setting
π̂_t := ∑_n∈_01_{T_n < t ≤ T_n+1}π̂(·| μ_n^N), t∈ [0,∞).
The transition intensities of the process (μ_t^N)_t≥ 0 are given by
q({μ^i→ j}| μ,α )= Nμ(i) ∫_A q({j}| i,a,μ) α^i(da), μ∈(S), α∈(A)^| S |,
with μ^i→ j:= μ-1/Nδ_i+1/Nδ_j for all i,j∈ S, i≠ j if μ(i)>0. This intensity describes the transition of one arbitrary individual in state i∈ S to state j∈ S, while all other individuals stay in their current state. Note that the intensity follows from the usual calculations for continuous-time Markov chains, in particular from the fact that if X,Y are independent random variables with X∼ Exp(λ), Y∼ Exp(ν), then X∧ Y ∼ Exp(λ+ν). In the situation in (<ref>) we have Nμ(i) individuals in state i. Further we set for all μ∈(S) and α∈(A)^| S|
q({μ}| μ,α ):= -∑_i, μ(i)>0∑_j≠ iq({μ^i→ j}| μ,α ).
All other intensities are zero, since again only one individual can change its state at a time.
The probability distribution of the measure-valued process under a fixed policy π̂ is now given by the following transition kernels
^π̂(τ_n≤ t, μ_n^N ∈ B| μ_n-1^N) =
∫_0^t q(B| μ_n-1^N,π̂) e^s· q ({μ_n-1^N}| μ_n-1^N,π̂)ds
for all t≥0 and B⊂_N(S) measurable, where the random variables (τ_n) are the same as before.
The reward function of the system is derived from the reward for one individual:
r(μ,α):= ∑_i∈ S∫ r(i,a,μ) α^i(da) μ(i).
In view of (<ref>) r(μ,α) is bounded.
The aim in this model is again to maximize the joint expected discounted reward of the system over an infinite time horizon. For a policy π̂, a discount rate β>0 and an initial configuration μ∈_N(S) define the value function
V_π̂^N(μ) = _μ^π̂[ ∫_0^∞ e^-β t r(μ_t^N,π̂_t)dt]
V^N(μ) = sup_π̂ V_π̂^N(μ).
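For a fixed policy, the value V_π̂^N(μ) can be estimated by straightforward Monte Carlo simulation of the measure-valued process. The sketch below is one way to do this; it assumes a deterministic stationary feedback rule policy(i, mu) in place of a general kernel π̂^i(da|μ), integrates the discounted reward exactly between jumps (the reward rate is constant there), and truncates the infinite horizon at a large time T_max. All names and default values are our own placeholder choices.

```python
import numpy as np

def discounted_value_mc(mu0, S, q, policy, r, beta, N, T_max=200.0,
                        n_paths=200, seed=0):
    """Monte Carlo estimate of V^N_pihat(mu0) for the measure-valued process."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_paths):
        cnt = {i: int(round(N * mu0.get(i, 0.0))) for i in S}   # N mu_0(i) atoms
        t, total = 0.0, 0.0
        while t < T_max:
            mu = {i: cnt[i] / N for i in S}
            act = {i: policy(i, mu) for i in S if cnt[i] > 0}
            reward_rate = sum(mu[i] * r(i, act[i], mu) for i in act)   # r(mu, alpha)
            moves, rates = [], []
            for i in act:
                for j in S:
                    if j != i:
                        lam = N * mu[i] * q(j, i, act[i], mu)
                        if lam > 0:
                            moves.append((i, j)); rates.append(lam)
            lam_tot = sum(rates)
            t_next = t + rng.exponential(1.0 / lam_tot) if lam_tot > 0 else T_max
            t_next = min(t_next, T_max)
            # exact discounted integral of the constant reward rate on [t, t_next]
            total += reward_rate * (np.exp(-beta * t) - np.exp(-beta * t_next)) / beta
            if lam_tot == 0 or t_next >= T_max:
                break
            i, j = moves[rng.choice(len(moves), p=np.array(rates) / lam_tot)]
            cnt[i] -= 1; cnt[j] += 1                             # mu -> mu^{i -> j}
            t = t_next
        vals.append(total)
    return float(np.mean(vals))
```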
We can now show that both formulations (<ref>) and (<ref>) are equivalent in the sense that the optimal values are the same. Of course, an optimal policy in the measure-valued setting can directly be implemented in the original problem. The advantage of the measure-valued formulation is the reduction of the cardinality of the state space. Suppose for example that S={0,1}, i.e. all individuals are either in state 0 or state 1. Then |S^N|=2^N in the original formulation whereas |_N(S)|=N+1 in the second formulation.
It holds that V(𝐱)=V^N(μ) for μ=μ[𝐱] for all 𝐱∈ S^N.
First of all observe that the reward function r in (<ref>) in the multi-agent problem is symmetric, i.e. r(𝐱,𝐚)=r(s(𝐱),s(𝐚)) for any permutation s(·) of the vectors. Moreover, the individual transition intensities q(·|i,a,μ[𝐗_t]) depend only on the own state of the individual and on μ[𝐗_t]. Thus, the optimal policy in the multi-agent problem at time t only depends on μ[𝐗_t]. Now for a decision rule π for the multi-agent problem define for all states i∈ S:
π̂^i(da|μ) := 1/Nμ(i)∑_k=1^N π^k(da|𝐱) 1_{x^k=i}
where μ=μ[𝐱].
On the right-hand side we consider all agents in state i and take a convex combination of their action distributions as the action distribution in state i.
If π depends only on μ[𝐱], then this is also true for π̂.
Choosing π̂ in the measure-valued MDP yields the reward (again μ=μ[𝐱])
r(μ,π̂)= ∑_i∈ S∫ r(i,a,μ) 1/Nμ(i)∑_k=1^N π^k(da|𝐱) 1_{x^k=i}μ(i)
= 1/N∑_k=1^N ∑_i∈ S1_{x^k=i}∫ r(i,a,μ)π^k(da|𝐱)=r(𝐱,π).
Thus, the reward in both formulations is the same.
Finally the transition intensity in the multi-agent model that one individual changes its state from i to j is given by (again μ=μ[𝐱])
∑_k=1^N 1_{x^k=i}∫ q({j}| i,a,μ) π^k(da|𝐱 )
= Nμ(i) ∫_A q({j}| i,a,μ) 1/Nμ(i)∑_k=1^N π^k(da|𝐱) 1_{x^k=i}
=
Nμ(i) ∫_A q({j}| i,a,μ) π̂^i(da|μ) = q({μ^i→ j}| μ,α ).
Thus, the empirical measure process of the multi-agent problem is statistically equal to the measure-valued MDP process and they produce the same expected reward under measure-dependent policies which implies the result. A formal proof has to be done by induction like in <cit.> Thm. 3.3.
The problem we have introduced is a classical continuous-time Markov Decision Process and can be solved with the established theory accordingly. Thus, we obtain:
There exists a continuous function v:_N(S)→ satisfying
β v(μ) = sup_α∈(A)^|S|{ r(μ,α) + ∫ v(ν) q(dν|μ,α) }
for all μ∈_N(S) and there exists a maximizer π̂(·|μ) of the r.h.s. such that v=V^N and π̂ determines the optimal policy by (<ref>).
Follows from Theorem 4.6, Lemma 4.4 in <cit.> or Theorem 3.1.2 in <cit.>.
Theorem <ref> implies a solution method for problem (<ref>). It can e.g. be solved by value or policy iteration. However, as already discussed, even in this simplified setting, the computation may be inefficient if N is large, since this leads to a large state space.
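For small examples the fixed point equation of the theorem can be solved numerically by uniformization: adding Λ v(μ) to both sides for a constant Λ that dominates the total jump intensity (here Λ = N q_max suffices by (Q2) and (Q3)) and dividing by β+Λ turns the equation into a contraction that can be iterated. The sketch below enumerates the empirical measures with N atoms and, as a simplification of the relaxed controls, only pure action profiles on a finite action grid; it is meant as an illustration of the recursion rather than an efficient solver, and the function names are our own.

```python
import itertools
import numpy as np

def hjb_value_iteration(S, N, D, q, r, beta, q_max, tol=1e-8):
    """Uniformization-based value iteration for the measure-valued CTMDP.

    S : list of individual states, N : number of individuals,
    D : dict i -> finite list standing in for the admissible action set D(i),
    q, r : q(j, i, a, mu) and r(i, a, mu), with mu a dict i -> fraction,
    q_max : the uniform bound on the individual intensities from (Q3).
    """
    def count_vectors(n, k):
        if k == 1:
            yield (n,)
        else:
            for m in range(n + 1):
                for rest in count_vectors(n - m, k - 1):
                    yield (m,) + rest

    grid = list(count_vectors(N, len(S)))
    index = {c: k for k, c in enumerate(grid)}
    Lam = N * q_max                          # dominates the total exit rate
    v = np.zeros(len(grid))
    while True:
        v_new = np.empty_like(v)
        for k, c in enumerate(grid):
            mu = {i: c[p] / N for p, i in enumerate(S)}
            occupied = [i for i in S if mu[i] > 0]
            best = -np.inf
            for prof in itertools.product(*[D[i] for i in occupied]):
                a = dict(zip(occupied, prof))
                rew = sum(mu[i] * r(i, a[i], mu) for i in occupied)
                val, exit_rate = 0.0, 0.0
                for p_i, i in enumerate(S):
                    if c[p_i] == 0:
                        continue
                    for p_j, j in enumerate(S):
                        if j == i:
                            continue
                        lam = N * mu[i] * q(j, i, a[i], mu)
                        if lam > 0:
                            nxt = list(c); nxt[p_i] -= 1; nxt[p_j] += 1
                            val += lam * v[index[tuple(nxt)]]
                            exit_rate += lam
                val += (Lam - exit_rate) * v[k]          # fictitious self-jump
                best = max(best, (rew + val) / (beta + Lam))
            v_new[k] = best
        if np.max(np.abs(v_new - v)) < tol:
            return grid, v_new
        v = v_new
```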
§ CONVERGENCE OF THE STATE PROCESS
In this section we discuss the behaviour of the system when the number of individuals tends to infinity. In this case we obtain a deterministic limit control model which serves as an asymptotic upper bound for our optimization problem with N individuals. Moreover, an optimal control of the limit model can be used to establish a sequence of asymptotically optimal policies for the N individual model.
In what follows we consider (μ_t^N) as a stochastic element of D__N(S)[0,∞), the space of càdlàg paths with values in _N(S) equipped with the Skorokhod J_1-topology and metric d_J_1. On _N(S) we choose the total variation metric.
Further, we consider (π̂_t^i) as a stochastic element of ℛ:= {ρ:_+→(A)} endowed with the Young topology (cf. <cit.>). It is possible to show that ℛ is compact and metrizable. Measurability and convergence in ℛ can be characterized as follows:
a) ρ:_+→(A) is measurable if and only if ρ is a transition probability from _+ into A.
b) Let ρ^n,ρ∈ℛ. ρ^n→ρ for n→∞ if and only if
∫_0^∞∫_A ψ(t,a) ρ_t^n(da)dt →∫_0^∞∫_A ψ(t,a) ρ_t(da)dt
for all measurable functions ψ:_+× A→ such that a↦ψ(t,a) is continuous for all t≥ 0 and ∫_0^∞sup_a |ψ(t,a)|dt <∞.
In a first step we define for N∈, a fixed policy π̂^N and arbitrary j∈ S, the one-dimensional process
M_t^N(j) :=μ_t^N(j)-μ_0^N(j)-∫_0^t ∑_ν∈_N(S) (ν(j)-μ_s^N(j)) q ({ν}|μ_s^N,π̂_s)ds.
Then (M_t^N(j)) are martingales w.r.t. the filtration ℱ_t^N = σ(μ_s^N,s≤ t). This follows from the Dynkin formula, see e.g. <cit.>, Proposition 14.13. Next we can express the process (M_t^N(j)) a bit more explicitly. Note that the difference ν(j)-μ_s^N(j) can either be -1/N if an individual changes from state j to a state k≠ j or it could be 1/N if an individual changes from state i≠ j to state j. Since by (Q2)
∑_k≠ j∫ q ({k}|j,a,μ_s^N) π̂^N,j_s(da) = - ∫ q ({j}|j,a,μ_s^N) π̂_s^N,j(da)
we obtain by inserting the intensity (<ref>) and by using (<ref>)
M_t^N(j)= μ_t^N(j)-μ_0^N(j)-∫_0^t ∑_k≠ j -1/N N μ_s^N(j) ∫ q ({k}| j,a,μ_s^N)π̂^N,j_s(da)ds
-∫_0^t ∑_i≠ j1/N N μ_s^N(i)∫ q ({j}|i,a,μ_s^N) π̂_s^N,i(da)ds
= μ_t^N(j)-μ_0^N(j)-∫_0^t ∑_i∈ Sμ_s^N(i)∫ q ({j}|i,a,μ_s^N) π̂_s^N,i(da)ds.
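Before turning to the formal argument, the decomposition above can also be checked numerically: simulating a simple uncontrolled two-state system and evaluating M_T^N(0) along the path (the compensator is piecewise linear between jumps) shows fluctuations of order 1/√N. The rates, the terminal time and the number of replications in the following sketch are placeholder values chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
lam01, lam10, T = 1.0, 2.0, 5.0      # individual rates 0 -> 1 and 1 -> 0, horizon

def terminal_martingale(N):
    """Return M_T^N(0) for a path started with all N individuals in state 0."""
    n0, t, comp = N, 0.0, 0.0         # individuals in state 0, time, compensator
    while True:
        r01, r10 = lam01 * n0, lam10 * (N - n0)
        total = r01 + r10
        tau = rng.exponential(1.0 / total)
        dt = min(tau, T - t)
        # drift of mu_t(0) is constant between jumps: -lam01*mu(0) + lam10*mu(1)
        comp += dt * (-lam01 * n0 / N + lam10 * (N - n0) / N)
        t += dt
        if t >= T:
            break
        n0 += -1 if rng.random() < r01 / total else 1
    return n0 / N - 1.0 - comp        # mu_T(0) - mu_0(0) - compensator

for N in (100, 1000, 10000):
    samples = [terminal_martingale(N) for _ in range(300)]
    print(N, f"std of M_T^N(0): {np.std(samples):.4f}")
```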
With this representation we can prove that the sequence of stochastic processes (M^N(j)) converges weakly (denoted by ⇒) in the Skorokhod J_1-topology to the zero process.
We have for all j∈ S that
(M_t^N(j))_t≥ 0⇒ 0, N→∞
First we show that M_t^N(j) is bounded for fixed t:
| M_t^N(j)| = |μ_t^N(j)-μ_0^N(j)-∫_0^t ∑_i∈ Sμ_s^N(i)∫ q ({j}|i,a,μ_s^N) π̂_s^N,i(da)ds|
≤|μ_t^N(j)-μ_0^N(j)|+∫_0^t ∑_i∈ Sμ_s^N(i)∫| q ({j}|i,a,μ_s^N)|π̂_s^N,i(da)ds
≤ 1+q_max· t <∞
Therefore (M_t^N(j))_t≥ 0 are square-integrable martingales. Now we take advantage of the fact that there are only jumps of height 1/N in our model, since no two individuals change their state simultaneously. With the quadratic variation of the process we obtain
𝔼[(M_t^N(j))^2] = 𝔼[⟨ M_t^N(j)⟩] ≤1/N^2𝔼[ # jumps in [0,t]]
≤1/N^2· N· q_max· t = 1/N· q_max· t N→∞⟶ 0.
Doob's L^p-inequality provides on [0,t]
𝔼[(sup_s∈ [0,t]M_s^N(j))^2] ≤ 4·𝔼[(M_t^N(j))^2] N→∞⟶ 0.
Thus for the sequence (sup_s∈ [0,t] M_s^N(j))_N∈ it holds that
sup_s∈ [0,t] M_s^N(j) L^2⟶ 0.
Now we can find a suitable probability space (Ω,ℱ,), such that for -almost all ω∈Ω the sequence of functions ((M_s^N(j)(ω))_s∈ [0,t])_N∈ converges uniformly to the zero-function.
The finite-dimensional distributions with arbitrary time-points t_1,..,t_k∈ [0,t] then obviously fulfill
(M_t_1^N(j),...,M_t_n^N(j)) a.s.⟶ (0,...,0)
and therefore in particular
(M_t_1^N(j),...,M_t_n^N(j)) ⇒ (0,...,0).
Here ⇒ is the usual weak convergence of random vectors in ^n.
To apply Theorem VI.16 in <cit.> we check Aldous' condition. Let (δ_N) be a sequence of positive numbers with δ_N → 0 and (ρ_N) a sequence of stopping times w.r.t. (ℱ_t^N) with values in [0,t]. Then we have
[(M_ρ_N^N(j))^2] ≤𝔼[(sup_s∈ [0,t]M_s^N(j))^2] ≤ 4·𝔼[(M_t^N(j))^2] N→∞⟶ 0.
Further, for N sufficiently large it holds that
[(M_ρ_N+δ_N^N(j))^2] ≤𝔼[(sup_s∈ [0,2t]M_s^N(j))^2] ≤ 4·𝔼[(M_2t^N(j))^2] N→∞⟶ 0.
Therefore M_ρ_N^N(j) and M_ρ_N+δ_N^N(j) converge in L^2 to 0 (and thus their difference). Hence, the conditions of Theorem VI.16 in <cit.> are fulfilled, and the sequence M_t^N(j) converges weakly on [0,∞) towards 0 in the sense of the Skorokhod J_1-metric.
Next we show that an arbitrary state-action process is relatively compact which implies the existence of converging subsequences.
A sequence of arbitrary state-action processes (μ^N,π̂^N)_N is relatively compact. Thus, there exists a subsequence (N_k) which converges weakly
(μ^N_k,π̂^N_k) ⇒ (μ^*,π̂^*), k→∞.
Moreover, the limit μ^* satisfies
a) (μ^*_t) has a.s. continuous paths,
b) and for each component j we have
μ^*_t(j) = μ^*_0(j) + ∫_0^t ∑_i∈ Sμ_s^*(i) ∫ q({j}|i,a,μ^*_s) π̂^*,i_s(da)ds.
We start by showing the relative compactness of a sequence (μ^N)_N.
We use Theorem 2.7 in <cit.>.
The sequence (μ^N)_N has paths in D_(S)[0,∞), where (S) is complete and separable with respect to the total variation distance.
In what follows let S_T^N be the set of (ℱ_t^N)-stopping times τ with τ≤ T a.s.
For every ε>0 and rational t≥0 choose the compact set Γ_t,ε≡(S). Then we obtain by construction of the model
(μ_t^N ∈Γ_t,ε) = 1.
Moreover, for every T>0 it holds that
lim_δ→ 0lim sup_N→∞sup_τ∈ S_T^N[min{1,||μ_τ^N - μ_τ+δ^N||_TV}]
≤ lim_δ→ 0lim sup_N→∞sup_τ∈ S_T^N[||μ_τ^N - μ_τ+δ^N||_TV]
≤ lim_δ→ 0lim sup_N→∞sup_τ∈ S_T^N[# state changes in [τ,τ+δ] ] ·1/N
≤ lim_δ→ 0lim sup_N→∞sup_τ∈ S_T^N N· q_max·δ·1/N =0.
The second inequality holds because ||μ_s^N-μ_t^N||_TV = 1/N, provided that in [s,t] only one state change occurs, i.e. one individual changes its state. Theorem 2.7 in <cit.> now states that (μ^N)_N is relatively compact.
Since ℛ is compact, so is ℛ^| S| and we obtain directly the relative compactness of (π̂^N)_N. The relative compactness of the sequence of state-action-processes (μ^N,π̂^N)_N then follows by Proposition 3.2.4 in <cit.>. Thus, a converging subsequence exists. To ease the notation we will still denote it by (N).
To prove the continuity of the limit state process define for arbitrary μ∈ D_(S)[0,∞)
J(μ,u) = sup_0≤ t≤ u||μ_t-μ_t-||_TV.
J(μ) = ∫_0^∞ e^-uJ(μ,u)du.
For the sequence of state processes (μ^N)_N we get
lim_N→∞ J(μ^N) = lim_N→∞∫_0^∞ e^-usup_0≤ t≤ u||μ_t^N-μ^N_t-||_TV du ≤lim_N→∞1/N =0.
We exploit the fact that the jumps of the state process with N individuals have height at most 1/N. Theorem 3.10.2 a) in <cit.> then implies the a.s. continuity of the limit state process (μ_t^∗)_t≥0.
In particular, due to the Skorokhod representation theorem we find a probability space such that the convergence μ^N⇒μ^* holds almost surely in J_1 and is uniform on compact sets such as [0,t], since μ^* is a.s. continuous (see p. 383 in <cit.>). Thus, component-wise for almost all ω in the probability space above we obtain:
lim_N→∞ sup_0≤ s≤ t||μ^N_s(ω)-μ^∗_s(ω) ||_TV= 0
for every t∈ [0,∞).
Finally we have to take the limit N→∞ in (<ref>). By the previous Lemma <ref> we know that the martingale on the left-hand side converges to zero and that μ^N_t(ω) →μ_t^*. Now consider the integral on the right-hand side:
| ∫_0^t∑_i∈ Sμ_s^N(i)∫ q({j}|i,a,μ^N_s) π̂^N,i_s(da)ds - ∫_0^t∑_i∈ Sμ_s^*(i)∫ q({j}|i,a,μ^*_s) π̂^*,i_s(da)ds|
≤ | ∫_0^t∑_i∈ Sμ_s^N(i)∫ q({j}|i,a,μ^N_s) π̂^N,i_s(da)ds - ∫_0^t∑_i∈ Sμ_s^*(i)∫ q({j}|i,a,μ^*_s) π̂^N,i_s(da)ds|
+ | ∫_0^t∑_i∈ Sμ_s^*(i)∫ q({j}|i,a,μ^*_s) π̂^N,i_s(da)ds - ∫_0^t∑_i∈ Sμ_s^*(i)∫ q({j}|i,a,μ^*_s) π̂^*,i_s(da)ds|.
The second expression tends to 0 for N→∞ due to the definition of the Young topology and the fact that a↦ q({j}|i,a,μ^*_s) is continuous by assumption. The first expression can be bounded by
∫_0^t∑_i∈ S∫| μ_s^N(i) q({j}|i,a,μ^N_s) - μ_s^*(i) q({j}|i,a,μ^*_s)| π̂^N,i_s(da)ds
≤ ∫_0^t∑_i∈ Ssup_a∈ D(i)| μ_s^N(i) q({j}|i,a,μ^N_s) - μ_s^*(i) q({j}|i,a,μ^*_s)| ds
which also tends to zero due to dominated convergence, (Q4),(Q5) and Lemma <ref>.
Now putting things together, equation (<ref>) implies that the limit satisfies the stated differential equation.
§ THE DETERMINISTIC LIMIT MODEL
Consider the following deterministic optimization problem:
(F) sup_π̂∫_0^∞ e^-β t r(μ_t,π̂_t) dt,
s.t. μ_0∈(S), π̂_t^i ∈(A), π̂_t^i(D(i))=1,
μ_t(j) = μ_0(j) + ∫_0^t ∑_i∈ Sμ_s(i) ∫ q({j}|i,a,μ_s) π̂_s^i(da)ds, ∀ t≥ 0, j∈ S.
Note that the theory of continuous-time Markov processes implies that μ_t is automatically a distribution. Hence one of the | S| differential equations in (F) may be skipped. Also note that when the transition intensity and the reward are linear in the action, relaxation of the control is unnecessary.
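For concrete models, the state equation in (F) can be integrated numerically once a control is fixed. The following sketch uses an explicit Euler scheme and a deterministic (non-relaxed) open-loop control, which by the remark above is no restriction when intensities and rewards are linear in the action; the function names and the step size are our own placeholder choices.

```python
import numpy as np

def solve_limit_ode(mu0, S, q, control, T, dt=1e-3):
    """Integrate the state equation of problem (F) by an explicit Euler scheme.

    mu0     : dict i -> mu_0(i), an initial distribution on S
    q       : q(j, i, a, mu) individual intensities
    control : control(t, i) -> action applied in state i at time t
              (a deterministic open-loop control)
    Returns the time grid and the trajectory of mu_t.
    """
    times = np.arange(0.0, T + dt, dt)
    mu = {i: float(mu0.get(i, 0.0)) for i in S}
    traj = [dict(mu)]
    for t in times[:-1]:
        a = {i: control(t, i) for i in S}
        # d/dt mu_t(j) = sum_{i != j} mu(i) q(j|i) - mu(j) sum_{k != j} q(k|j)
        dmu = {j: sum(mu[i] * q(j, i, a[i], mu) for i in S if i != j)
                  - mu[j] * sum(q(k, j, a[j], mu) for k in S if k != j)
               for j in S}
        for j in S:
            mu[j] = max(mu[j] + dt * dmu[j], 0.0)
        traj.append(dict(mu))
    return times, traj
```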
We denote the maximal value of this problem by V^F(μ_0). We show next, that this value provides an asymptotic upper bound to the value of problem (<ref>).
For all (μ^N_0) ⊂_N(S), μ_0∈(S) with μ_0^N ⇒μ_0 and for all policies (π̂_t^N) we have
lim sup_N→∞ V^N_π̂^N(μ_0^N) ≤ V^F(μ_0).
According to Theorem <ref> we can choose a subsequence (N_k) of corresponding state and action processes such that
(μ^N_k,π̂^N_k) ⇒ (μ^*,π̂^*), k→∞.
For convenience we still denote this sequence by (N). We show that
lim_N→∞ V^N_π̂^N(μ_0^N) = lim_N→∞[ ∫_0^∞ e^-β t r(μ_t^N,π̂_t^N)dt]
= [ ∫_0^∞ e^-β t r(μ_t^*,π̂_t^*)dt] ≤ V^F(μ_0).
The last inequality is true due to the fact that by Theorem <ref> the limit process (μ^*,π̂^*) satisfies the constraints of problem (F).
Let us show the second equality. We obtain by bounded convergence (r is bounded)
lim_N→∞[ ∫_0^∞ e^-β t r(μ_t^N,π̂_t^N)dt]= [ lim_N→∞∫_0^∞ e^-β t r(μ_t^N,π̂_t^N)dt].
Further we have
| ∫_0^∞ e^-β t∑_i∈ S∫_A r(i,a,μ_t^N) π̂_t^N,i(da) μ_t^N(i)dt - ∫_0^∞ e^-β t∑_i∈ S∫_A r(i,a,μ_t^*) π̂_t^*,i(da) μ_t^*(i)dt|
≤ | ∫_0^∞ e^-β t∑_i∈ S∫_A r(i,a,μ_t^N) π̂_t^N,i(da) μ_t^N(i)dt - ∫_0^∞ e^-β t∑_i∈ S∫_A r(i,a,μ_t^*) π̂_t^N,i(da) μ_t^*(i)dt|
+ | ∫_0^∞ e^-β t∑_i∈ S∫_A r(i,a,μ_t^*) π̂_t^N,i(da) μ_t^*(i)dt-∫_0^∞ e^-β t∑_i∈ S∫_A r(i,a,μ_t^*) π̂_t^*,i(da) μ_t^*(i)dt|.
The second expression tends to zero for N→∞ due to the definition of the Young topology and the fact that a↦ r(i,a,μ) is continuous by (R2). The first expression can be bounded from above by
∫_0^∞ e^-β t∑_i∈ S∫_A | r(i,a,μ_t^N) μ_t^N(i) - r(i,a,μ_t^*)μ_t^*(i) | π̂_t^N,i(da) dt
≤ ∫_0^∞ e^-β t∑_i∈ Ssup_a∈ D(i)| r(i,a,μ_t^N) μ_t^N(i) - r(i,a,μ_t^*)μ_t^*(i) | dt
which also tends to zero for N→∞ due to (R1), (R2), Lemma <ref> and dominated convergence. Thus, the statement follows.
On the other hand we are now able to construct a strategy which is asymptotically optimal in the sense that the upper bound in the previous theorem is attained in the limit. Suppose that (μ^*,π̂^*) is an optimal state-action trajectory for problem (F). Then we can consider for the N individual problem the strategy
π̂_t^N,i:= π̂^*,i_t
which applies at time t the kernel π̂^*,i_t irrespective of the state μ_t^N the process is in. More precisely, the considered strategy is deterministic and not a feedback policy.
Suppose π̂^* is an optimal strategy for (F) and let (μ^N_0) ⊂_N(S) be such that μ_0^N ⇒μ_0∈(S). Then if we use this strategy for problem (<ref>) for any N we obtain
lim sup_N→∞ V_π̂^*^N(μ_0^N) =V^F(μ_0).
Thus, we call π̂^* asymptotically optimal.
First note that π̂^* is an admissible policy for any N. Further let (μ_t^N) be the corresponding state process when N individuals are present. Let (N_k) be a subsequence such that
μ^N_k⇒μ^*, k→∞
holds (Theorem <ref>). Using the same arguments as in the last proof we obtain
lim_N→∞[ ∫_0^∞ e^-β t r(μ_t^N,π̂_t^*)dt]= [ ∫_0^∞ e^-β t r(μ_t^*,π̂_t^*)dt] = V^F(μ_0).
Together with the previous theorem, the statement is shown.
Remarks:
a) If the differential equation for (μ_t) in (F) has a unique solution under π̂^*, then we have convergence μ^N⇒μ^* for N→∞ and not only convergence of subsequences.
b) Note that the construction of asymptotically optimal policies which we present here, works in the same way when we consider control problems with finite time horizon. I.e. instead of (<ref>) we consider
sup_π̂_𝐱^π̂[ ∫_0^T e^-β t r(μ^N_t,π̂_t)dt+g(μ^N_T)]
with possibly a terminal reward g(·) for the final state.
In this case (F) is given with a finite time horizon
sup_π̂∫_0^T e^-β t r(μ_t,π̂_t) dt + g(μ_T)
s.t. μ_0∈(S), π̂_t^i ∈(A), π̂_t^i(D(i))=1,
μ_t(j) = μ_0(j) + ∫_0^t ∑_i∈ Sμ_s(i) ∫ q({j}|i,a,μ_s) π̂_s^i(da)ds, ∀ t∈[0,T], j∈ S,
Theorem <ref> holds accordingly.
c) Suppose we obtain for problem (F) an optimal feedback rule π̂_t (·)= π̂(·|μ_t). If μ↦π̂(·|μ) is continuous, this feedback rule is also asymptotically optimal for problem (<ref>). The proof can be done in the same way as before.
d) Natural extensions of our model that we have not included in the presentation are resource constraints. For example the total sum of fractions of a certain action may be limited, i.e. we restrict the set (A)^|S| by requiring that ∑_i∈ Sπ̂_t^i({a^0})≤ c < |S| for a certain action a^0∈ A. As long as the constraint yields a compact subset of (A)^|S| our analysis also covers this case.
Moreover, a direct implementation of policy π̂^∗ in the problem (<ref>) might make it necessary to update the policy continuously. This can be avoided by using the following policy instead. We assume here that t↦π̂^*_t is piecewise continuous. Thus, let (t_n)_n∈ be the discontinuity time points of π̂^∗ and define the set
{ T_n^N, n∈}∪{ t_n, n∈} =: {T̃_1^N< T̃_2^N <…}
where T_n^N describes the time of the n-th jump of the N individuals process. The r.v. (T̃_n) is the ordered sequence of the time points in this set. Then
π_t^N,∗ := ∑_n=0^∞π̂_T̃_n^∗1_[T̃_n^N, T̃_n+1^N)(t).
The idea of the action process π_t^N,∗ is to adapt it to π̂^∗ only when an individual changes its state or when π̂^∗ has a jump, and to keep it constant otherwise.
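A possible post-hoc representation of this step control is sketched below: merging the jump times of the N individuals process with the discontinuity points of π̂^* and freezing the open-loop control between consecutive event times reproduces the last display. The function is an illustration of the bookkeeping only; in an actual simulation the jump times would of course be generated online, and the names used here are our own.

```python
import numpy as np

def applied_control(pi_star, jump_times, breakpoints, T):
    """Represent the step control pi^{N,*}: freeze pi_star between consecutive
    event times, where the events are the jump times of the N individuals
    process, the discontinuity points of pi_star, and the horizon T."""
    events = np.unique(np.concatenate(([0.0], np.asarray(jump_times, dtype=float),
                                       np.asarray(breakpoints, dtype=float), [T])))
    events = events[events <= T]
    # on [events[n], events[n+1]) the applied control is pi_star(events[n])
    return [(float(events[n]), float(events[n + 1]), pi_star(float(events[n])))
            for n in range(len(events) - 1)]
```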
It can be shown that this sequence of policies is also asymptotically optimal.
Suppose π̂^* is a piecewise continuous optimal strategy for (F) and let (μ^N_0) ⊂_N(S) be such that μ_0^N ⇒μ_0∈(S). Then if we use the strategy ( π_t^N,∗) of (<ref>) for problem (<ref>) for any N we obtain
lim sup_N→∞ V_π̂^N,∗^N(μ_0^N) =V^F(μ_0).
In light of the proof of Theorem <ref> it is enough to show that π^N,∗⇒π^*. Indeed, the convergence can be shown -a.s. Now (π^N,∗) converges in the J_1-topology to π^* on [0,∞) if and only if the restriction (π^N,∗)|_[0,T] to [0,T] converges in the finite J_1-topology to the restriction π^*|_[0,T] for all T which are continuity points of the limit function (see <cit.> sec.16, Lem.1). Since π̂^* is piecewise continuous we can consider the convergence on each compact interval of the partition separately. Indeed we have if t∈ [T̃_n^N, T̃_n+1^N]
||π_t^N,∗ - π̂_t^∗||_TV≤sup_s∈ [T̃_n^N, T̃_n+1^N]||π̂_s^∗ - π̂_t^∗||_TV.
Since t↦π̂^*_t is continuous on this interval and since all |T̃_n+1^N-T̃_n^N| converge to zero for N→∞ uniformly (the jump intensity increases with N) we have that the right hand side converges to zero for N→∞ uniformly in t which implies the statement.
§ APPLICATIONS
In this section we discuss two applications of the previously derived theorems and one example which shows that state processes under feedback policies do not necessarily have to converge. More precisely we construct in two applications asymptotically optimal strategies for stochastic N individuals systems from the deterministic limit problem (F). The advantage of our problem (F) in contrast to the master equation is that it can be solved with the help of Pontryagin's maximum principle which gives necessary conditions for an optimal control and is in many cases easier to apply than dynamic programming. For examples see <cit.> and for the theory see e.g. <cit.>.
§.§ Machine replacement
The following application is a simplified version of the deterministic control problem in <cit.>. A mean-field application can be found in <cit.>. Suppose a company has N statistically equal machines. Each machine can either be in state 0='working' or in state 1='broken', thus S={0,1}. Two actions are available: 0='do nothing' or 1='repair', thus A={0,1}. A working machine does not need repair, so D(0)={0}. The transition rates are as follows: A working machine breaks down with fixed rate δ>0. A broken machine which gets repaired changes to the state 'working' with rate ρ>0. Thus, we can summarize the transition rates of one machine by
q({1} | 0, 0, μ_t^N) = δ, q({0} | 1, a_t, μ_t^N) = ρδ_{a_t=1}.
The diagonal elements of the intensity matrix are given by
q({0} | 0, 0, μ_t^N) = -δ, q({1} | 1, a_t, μ_t^N) = -ρδ_{a_t=1},
and all other intensities are zero. Obviously (Q1)-(Q5) are satisfied. The initial state of the system is μ_0^N=(1,0), i.e. all machines are working in the beginning. Each working machine produces a reward rate g>0 whereas we have to pay a fixed cost of C>0 when we have to call the service for repair, i.e.
r(i,a,μ_t^N)= g δ_{i=0}-C δ_{a=1}δ_{i=1}1/1-μ_t^N(0).
Hence we obtain an interaction of the individuals in the reward. Note that (R1), (R2) are satisfied.
This yields the reward rate for the system
r(μ_t^N,π̂_t) = g μ_t^N(0)-Cπ̂^1_t({1}|μ_t^N).
Thus, problem (F) in this setting is given by (we denote the limit by (μ_t(0),μ_t(1))=:(μ_t,1-μ_t) and let α_t:= π̂_t^1({0})):
(F) sup_(α_t)∫_0^T g·μ_t-C·(1-α_t)dt,
s.t. t∈[0,T]
μ_t = 1 + ∫_0^t ρ(1-μ_s)(1-α_s) -δμ_sds.
We briefly explain how to solve this problem using Pontryagin's maximum principle. The Hamiltonian function to (F) is given by
H(μ_t,α_t,p_t,t) = gμ_t -C(1-α_t)+p_t (ρ(1-μ_t)(1-α_t) -δμ_t)
= (1-α_t)(ρ p_t(1-μ_t)-C) +gμ_t-δ p_tμ_t
where (p_t) is the adjoint function. Pontryagin's maximum principle yields the following sufficient conditions for optimality (<cit.>):
The control (α_t^*) with the associated trajectory (μ_t^*)
is optimal for (F) if there exists a continuous and piecewise continuously differentiable function (p_t) such that for all t>0:
(i) α_t^* maximizes α↦ H(μ_t,α,p_t,t) for α∈[0,1],
(ii) ṗ_t = -g+p_t(δ+ρ(1-α_t)) at those points where p_t is differentiable,
(iii) p(T)=0.
Inspecting the Hamiltonian, it is immediately clear from (i) that the optimal control is essentially 'bang-bang'. For a numerical illustration we solved (F) for the parameters C=1, g=2, δ=1, ρ=2 and T=4. Here it is optimal to do nothing until time point t^*=ln2. Then it is optimal to repair the fraction 1-α^*=1/2 of the broken machines, which keeps the fraction of working machines at 1/2. Finally, ln2 time units before the end, we again do nothing and wait until the end of the time horizon. A numerical illustration of the optimal trajectory μ_t(0) of the deterministic problem together with simulated paths under this policy for different numbers of N can be found in Figure <ref>, left. A number of different simulations for N=1000 are shown in Figure <ref>, right. The simulated paths are quite close to the deterministic trajectory.
The optimal value in the deterministic model is V^F(1,0) = 9/2-3/2ln(2) ≈ 3.4603. If we simulate ten times the trajectory of the state process for N = 1000 machines while following the asymptotically optimal policy and take the average of the respective values, we obtain a mean of 3.43612 which is slightly less than the value for (F), cp. Theorem <ref>.
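The comparison between the deterministic trajectory and the stochastic paths can be reproduced qualitatively with a few lines of code. The sketch below simulates the fraction of working machines under the open-loop policy described above, using a Gillespie-type event loop that respects the policy breakpoints via memorylessness, and measures the deviation from the deterministic trajectory. The random seed, the comparison grid and the printed diagnostic are our own choices; the value of the objective is not recomputed here.

```python
import numpy as np

rng = np.random.default_rng(1)
delta, rho, T, N = 1.0, 2.0, 4.0, 1000
t_on, t_off = np.log(2.0), T - np.log(2.0)     # breakpoints of the open-loop policy

def repair_fraction(t):
    """Fraction 1 - alpha_t of the broken machines under repair."""
    return 0.5 if t_on <= t <= t_off else 0.0

def simulate(N):
    working, t = N, 0.0
    ts, fracs = [0.0], [1.0]
    while t < T:
        broken = N - working
        rate_break = delta * working
        rate_repair = rho * broken * repair_fraction(t)
        total = rate_break + rate_repair
        nxt = min([b for b in (t_on, t_off, T) if b > t], default=T)
        if total == 0:
            t = nxt
            continue
        tau = rng.exponential(1.0 / total)
        # rates are piecewise constant; if the proposed jump crosses a policy
        # breakpoint, restart from the breakpoint (memorylessness)
        if t + tau > nxt:
            t = nxt
            continue
        t += tau
        if rng.random() < rate_break / total:
            working -= 1
        else:
            working += 1
        ts.append(t); fracs.append(working / N)
    return np.array(ts), np.array(fracs)

ts, fracs = simulate(N)
grid = np.linspace(0.0, T, 400)                 # deterministic trajectory mu_t(0)
mu = np.where(grid < t_on, np.exp(-grid),
      np.where(grid <= t_off, 0.5, 0.5 * np.exp(-(grid - t_off))))
print("max deviation from the deterministic path:",
      np.max(np.abs(np.interp(grid, ts, fracs) - mu)))
```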
§.§ Spreading malware
This example is based on the deterministic control model considered in
<cit.>, see also <cit.> and treats the propagation of a virus in a mobile wireless network. It is based on the classical SIR model by Kermack–McKendrick, <cit.>. Suppose there are N devices in the network. A device can be in one of the following states: Susceptible (S), Infective (I), Dead (D) or Recovered (R). A device is in the susceptible state if it is not contaminated yet, but prone to infection. A device is infective if it is contaminated by the virus. It is dead if the virus has destroyed the software and recovered if the device already has a security patch which makes it immune to the virus. The states D and R are absorbing. The joint process μ_t^N=(S_t^N,I_t^N,D_t^N,R_t^N) is a controlled continuous-time Markov chain where X_t^N represents the fraction of devices in state X∈{S,I,D,R}. The control is a strategy of the virus which chooses the rate a(t)∈ [0,a̅] at which infected devices are destroyed. In this model we have S_t^N+I_t^N+D_t^N+R_t^N=1 and S_t^N,I_t^N,D_t^N,R_t^N≥ 0. The transition rates of one device are as follows: A susceptible device gets infected with rate β I_t^N with β >0. The rate is proportional to the number of infected devices and we thus have an interaction of one individual with the empirical distribution of the others. It recovers with rate ρ>0, which is the rate at which the security patch is distributed. An infected device gets killed by the virus with rate a(t)∈ [0,a̅] chosen by the attacker and recovers at rate γ>0. The rates are shown in the following figure:
The intensities of one device at time t are summarized by
q({I} | S, ·, μ_t^N) = β I_t^N, q({R} | S, ·, μ_t^N) = ρ,
q({D} | I, a_t, μ_t^N) = a_t, q({R} | I, ·, μ_t^N) = γ.
Thus, the diagonal elements of the intensity matrix are given by
q({S} | S, ·, μ_t^N) = -β I_t^N-ρ, q({I} | I, a_t, μ_t^N) = -a_t-γ,
q({D} | D, ·, μ_t^N) = q({R} | R, ·, μ_t^N) = 0
and all other intensities are zero. Note that (Q1)-(Q5) are satisfied and that since the intensities are linear in a, there is no need for a relaxed control. The initial state of the network is μ_0^N=(S_0^N,I_0^N,D_0^N,R_0^N)=(1-I_0,I_0,0,0) with 0<I_0<1. The aim of the virus is to produce as much damage as possible over the time interval [0,T], evaluated by
[ D_T^N + 1/T∫_0^T (I^N_t)^2 dt]
which is given when we choose r(i,a,μ)=1/T(μ(2))^2 (the second component of μ squared) and an appropriate terminal reward. (R1) and (R2) are satisfied. Thus, problem (F) in this setting is given by (we denote the limit by μ_t=(S_t,I_t,D_t,R_t))
(F) sup_(a_t) D_T + 1/T∫_0^T I^2_t dt,
s.t. a_t ∈ [0,a̅], t∈[0,T]
S_t = 1-I_0 + ∫_0^t -β I_sS_s-ρ S_sds,
I_t = I_0 + ∫_0^t β I_sS_s-γ I_s -a_s I_sds,
D_t = ∫_0^t a_s I_sds.
A solution of this deterministic control problem can be found in <cit.>. It is shown there that a critical time point t_1∈[0,T] exists such that a_t=0 on t∈ [0,t_1] and a_t=a̅ on t∈(t_1,T]. Thus, the attacker is not destroying devices from the beginning because this lowers the number of devices which can get infected. Instead, she first waits to get more infected devices before setting the kill rate to a maximum.
A numerical illustration can be found in Figure <ref>. There we can see the trajectories of the optimal state distribution in (F) and simulated paths for N=1000 devices for β=0.6, ρ=γ=0.2, a̅=1, T=10. The optimal time point for setting a_t to the maximum is here 4.9. The simulated paths are almost indistinguishable from the deterministic trajectories.
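Since the optimal attack is of bang-bang type with a single switching time, the deterministic problem can be explored by integrating the controlled ODE and scanning t_1 on a grid, as in the sketch below. The initial infection level I0 = 0.2 is an assumption (the text only requires 0 < I_0 < 1), so the optimal switching time found here need not coincide with the value 4.9 reported above; the infection rate is called beta_r to avoid a clash with the discount rate β used earlier.

```python
import numpy as np

beta_r, rho, gamma, abar, T, I0 = 0.6, 0.2, 0.2, 1.0, 10.0, 0.2

def objective(t1, dt=1e-3):
    """Explicit Euler integration of (F) with a_t = 0 on [0,t1) and abar on [t1,T]."""
    S, I, D = 1.0 - I0, I0, 0.0
    integral_I2 = 0.0
    for t in np.arange(0.0, T, dt):
        a = abar if t >= t1 else 0.0
        dS = -beta_r * I * S - rho * S
        dI = beta_r * I * S - gamma * I - a * I
        dD = a * I
        integral_I2 += I ** 2 * dt
        S, I, D = S + dt * dS, I + dt * dI, D + dt * dD
    return D + integral_I2 / T          # damage D_T + (1/T) int_0^T I_t^2 dt

t1_grid = np.linspace(0.0, T, 101)
values = [objective(t1) for t1 in t1_grid]
best = t1_grid[int(np.argmax(values))]
print(f"best switching time on the grid: {best:.2f}, value: {max(values):.4f}")
```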
§.§ Resource competition
This example shows that feedback policies in the deterministic problem are not necessarily asymptotically optimal when implemented in the N individual problem. This may happen when discontinuities in the feedback function are present. The example is an adaptation of the queueing network considered in <cit.> to our setting. Suppose the state space is given by S={1,2,3,4,5,6,7,8}. Individuals starting in state 1 change to state 2, then 3 and are finally absorbed in state 4. Individuals starting in state 5 change to state 6, then 7 and are finally absorbed in state 8. The intensity for leaving states 1 and 5 is μ_1=μ_5=1, the full intensity for leaving states 2 and 6 is μ_2=μ_6=6 and finally the full intensity for leaving states 3 and 7 is μ_3=μ_7=1.5. The action space is A={0,1} where actions have to be taken in states 2,3,6 and 7 and determine the activation of the transition intensity. Action a=0 means that the intensity is deactivated and a=1 that it is activated. There is a resource constraint such that the sum of the activation probabilities in states 2 and 7 as well as the sum of the activation probabilities in states 3 and 6 are constrained by 1 (see remark d) above). When we denote the randomized control by π̂_t^2= a_t, π̂_t^7= 1-a_t, π̂_t^6= b_t, π̂_t^3= 1-b_t, a_t,b_t∈[0,1] then
the intensities are given by
q({3} | 2,a_t, μ_t^N) = a_t μ_2, q({4} | 3, 1-b_t, μ_t^N) = (1-b_t) μ_3,
q({7} | 6,b_t, μ_t^N) = b_tμ_6, q({8} | 7, 1-a_t, μ_t^N) = (1-a_t)μ_7.
An illustration of this model can be seen in Figure <ref>.
The initial state distribution is given by μ_0=(5/14,1/14,1/14,0,5/14,1/14,1/14,0) where we assume for the simulation that we have N=1400 individuals.
Now suppose further that individuals in the absorbing states 4 and 8 produce no cost whereas individuals in states 3 and 7 are the most expensive as soon as there are at least 0.01% of the population present. This optimization criterion leads to a priority rule where individuals in state 3 receive priority (and thus full capacity) over those in state 6 (as long as there are at least 0.01% present) and individuals in state 7 receive priority (and thus full capacity) over those in state 2 (as long as there are at least 0.01% present). In the deterministic problem the priority rule can be implemented such that once the numbers of individuals in states 3 and 7 fall to the threshold of 0.01% of the population it is possible to keep this level. This is not possible in the N individuals problem. The priority switch leads to blocking the individuals in the other line, see Figure <ref>. The blue line shows the state trajectories in the deterministic model. The red line is a realization of the system for N=1400 individuals where we use the deterministic open-loop control of Theorem <ref>. We see that the state processes converge. Finally, the green line is a realization of the N=1400 individuals model under the priority rule. We can see that here the state processes do not converge.
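For completeness, a sketch of the N=1400 individuals simulation under the threshold priority rule (the green-line scenario) is given below. The initial counts follow μ_0 above; the time horizon, the random seed and the printed summary are our own choices, and the comparison with the deterministic open-loop control shown in the figure is omitted here.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, thr = 1400, 12.0, 1e-4                 # threshold 0.01% of the population
mu_rates = {1: 1.0, 2: 6.0, 3: 1.5, 5: 1.0, 6: 6.0, 7: 1.5}   # full intensities
succ = {1: 2, 2: 3, 3: 4, 5: 6, 6: 7, 7: 8}                   # successor states

counts = {s: 0 for s in range(1, 9)}
counts.update({1: 500, 2: 100, 3: 100, 5: 500, 6: 100, 7: 100})   # mu_0 * N

def activation(counts):
    """Priority rule: a = pihat^2, b = pihat^6 (activation probabilities)."""
    a = 0.0 if counts[7] / N >= thr else 1.0   # state 7 blocks state 2
    b = 0.0 if counts[3] / N >= thr else 1.0   # state 3 blocks state 6
    return a, b

t = 0.0
while t < T:
    a, b = activation(counts)
    act = {1: 1.0, 5: 1.0, 2: a, 6: b, 3: 1.0 - b, 7: 1.0 - a}
    rates = {s: counts[s] * mu_rates[s] * act[s] for s in succ}
    total = sum(rates.values())
    if total == 0:                             # everyone absorbed
        break
    t += rng.exponential(1.0 / total)
    if t >= T:
        break
    s = rng.choice(list(rates), p=np.array(list(rates.values())) / total)
    counts[s] -= 1
    counts[succ[s]] += 1

print("final fractions:", {s: counts[s] / N for s in range(1, 9)})
```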
§ APPENDIX
Let X be a separable metric space, Y be compact metric and f:X × Y→ continuous. Then x_n→ x for n→∞ implies
lim_n→∞sup_y∈ Y |f(x_n,y)-f(x,y)|=0.
For a proof see e.g. Lemma B.12, <cit.>.
An Euler-Bernoulli-Type Beam Model of the Vocal Folds for Describing Curved and Incomplete Glottal Closure Patterns
Mohamed A. Serry, Gabriel A. Alzamendi, Matías Zañartu, Sean D. Peterson
===================================================================================================================
Incomplete glottal closure is a laryngeal configuration wherein the glottis is not fully obstructed prior to phonation. It has been linked to inefficient voice production and voice disorders. Various incomplete glottal closure patterns can arise and the mechanisms driving them are not well understood. In this work, we introduce an Euler-Bernoulli composite beam vocal fold (VF) model that produces qualitatively similar incomplete glottal closure patterns as those observed in experimental and high-fidelity numerical studies, thus offering insights into the potential underlying physical mechanisms. Refined physiological insights are pursued by incorporating the beam model into a VF posturing model that embeds the five intrinsic laryngeal muscles. Analysis of the combined model shows that co-activating the lateral cricoarytenoid (LCA) and interarytenoid (IA) muscles without activating the thyroarytenoid (TA) muscle results in a bowed (convex) VF geometry with closure at the posterior margin only; this is primarily attributed to the reactive moments at the anterior VF margin. This bowed pattern can also arise during VF compression (due to extrinsic laryngeal muscle activation for example), wherein the internal moment induced passively by the TA muscle tissue is the predominant mechanism. On the other hand, activating the TA muscle without incorporating other adductory muscles results in anterior and mid-membranous glottal closure, a concave VF geometry, and a posterior glottal opening driven by internal moments induced by TA muscle activation. In the case of initial full glottal closure, the posterior cricoarytenoid (PCA) muscle activation cancels the adductory effects of the LCA and IA muscles, resulting in a concave VF geometry and posterior glottal opening. Furthermore, certain maneuvers involving co-activation of all adductory muscles result in an hourglass glottal shape due to a reactive moment at the anterior VF margin and moderate internal moment induced by TA muscle activation. These findings have implications regarding potential laryngeal maneuvers in patients with voice disorders involving imbalances or excessive tension in the laryngeal muscles such as muscle tension dysphonia.
§ INTRODUCTION
The configuration of the vocal folds (VFs), a cornerstone of voice production, is determined by the particular combination of activated intrinsic and extrinsic laryngeal muscles. Nominally, the VFs are completely adducted prior to the onset of phonation, and their interaction with the air flow driven by the lungs results in vibrations and consequent acoustic waves, which forms the basis of voiced speech. In some scenarios, complete glottal closure is not attained, which can result in inefficient voice production <cit.>, and, in some cases, stress concentrations in the VFs that may lead to VF trauma <cit.>. Hence, incomplete glottal closure is often linked to disorders that are associated with inefficiencies in, or damage to, the vocal mechanism, including Parkinson's disease <cit.>[Parkinson’s disease is a relatively prevalent disorder, affecting the human central, peripheral, and enteric nervous systems <cit.>.] and muscle tension dysphonia (MTD) <cit.>[MTD <cit.>, also known as non-phonotraumatic hyperfunction <cit.>, is a class of voice disorders associated with misuse of the vocal mechanisms without the presence of organic changes in the vocal organs, leading to low speech quality and vocal fatigue, with a wide range of symptoms and patterns, including excessive/unbalanced activation of intrinsic and extrinsic laryngeal muscles <cit.>, supraglottal compression <cit.>, and abnormal fundamental frequency <cit.>.].
Incomplete glottal closure[In this study we refer to the glottal configuration of the VFs at rest immediately prior to phonation initiation, identifying any gaps between the folds as incomplete glottal closure. In clinical settings, incomplete glottal closure typically refers to gaps between the folds when the VFs are at their maximum glottal closure phase during phonation <cit.>. Our definition herein isolates laryngeal factors, which are the focus of this study, by dismissing the dynamics of VF vibrations.] comes in various patterns <cit.> as shown schematically in Figure <ref>, including bowed shape: a glottal pattern wherein the left and right VF geometries are convex with a gap at the mid-membranous portion; posterior glottal opening: full glottal closure is achieved in the anterior and mid-membranous regions only, leaving the posterior margin open; and hourglass glottal configuration: a pattern with anterior and posterior gaps and potential VF contact in the mid-membranous region. There exist other incomplete glottal closure patterns, sharing similarities with those mentioned above, such as spindle-shaped glottis, and anterior opening (see <cit.>). These latter patterns are not addressed in this work.
There exist several experimental and clinical studies in the literature that attempt to elucidate, at least in part, some of the laryngeal mechanisms associated with curved and incomplete glottal closure patterns. Based upon inspection of cadaver larynges, <cit.> posited that a posterior glottal opening is associated with excessive activation of the posterior cricoarytenoid (PCA) muscle. <cit.> conducted experimental investigations of excised canine models and found that activating the thyroarytenoid (TA) muscle, while keeping other adductory laryngeal muscles inactive, leads to anterior and mid-membranous glottal closure, whereas the posterior glottis stays open, thus resulting in a closure pattern similar to a posterior glottal opening. On the other hand, they found that when the TA muscle is relaxed and other adductory muscles (LCA/IA) are activated, closure is achieved only at the posterior margins of the VFs with mid-membranous opening <cit.>, thus leading to a bowed configuration as seen in Figure <ref>. Moreover, complete glottal closure of excised canine larynges was attained via co-activation of all adductory muscles. More recent clinical investigations using refined experimental setups (see, e.g., <cit.>) further confirm these observations. <cit.> conducted a parametric study of the effects of intrinsic laryngeal muscle activation, modulated by graded stimulation, on the pre-phonatory posture of a canine model. In addition to confirming the findings of <cit.>, <cit.> found that when keeping the TA muscle activation at a constant level and increasing activation of the cricothyroid muscle (CT), the glottal area increases and the medial bulging caused by the TA muscle activation is reduced. In addition, they observed that when keeping the LCA and IA muscle activation at constant levels and increasing CT muscle activation, the glottis starts to open posteriorly and the glottal area increases. Interestingly, the authors found that with certain muscular executions involving co-activation of the LCA, IA, and TA muscles, the glottis exhibits an hourglass shape (see <cit.>).
Besides the aforementioned clinical and experimental works, there exist numerical studies that shed some light onto curved and incomplete glottal closure patterns. <cit.> developed one of the early three-dimensional VF posturing models, where adductory and abductory muscles are incorporated, showing that full activation of the LCA muscle induces nonuniform curvature of the medial surface. <cit.> studied the influence of incomplete glottal closure patterns (with linear and curved VF geometries) on VF vibrations using a multi-mass model, showing that some resting incomplete glottal closure configurations may induce localized VF impact, which they hypothesized to be a potential underlying mechanism inducing VF trauma, especially in females. However, the authors did not study the laryngeal maneuvers that induce these resting glottal shapes. <cit.> conducted numerical simulations using high-fidelity numerical models, showing that posterior glottal opening occurs with the sole activation of the TA muscle, mid-membranous opening when the LCA and IA muscles are co-activated (without incorporating the TA muscle), and full glottal closure when all adductors are co-activated, in agreement with the aforementioned clinical observations. In a more recent study, <cit.> proposed a detailed physiologically accurate finite-element posturing model, based on MRI scan images of a canine larynx. Even though the study does not study the glottal geometry, it provides useful insights into how synergistic activation of laryngeal muscles exhibits complex interaction with laryngeal variables (e.g., VF strain, rotation and translation of arytenoid cartilages, and glottal area). Recently, research interest has also been directed towards investigating how activating laryngeal muscles alters the VF medial surfaces <cit.>.
Despite these valuable efforts, a clear picture of the physical mechanisms inducing different glottal patterns remains elusive. It is challenging to isolate and control the factors underlying posturing mechanics experimentally. Moreover, high-fidelity numerical models, despite their accuracy in replicating physiological laryngeal postures, do not provide clear intuitive understanding of the mechanics of posturing, and typically suffer from high computational costs. We hypothesize that the non-homogeneous structure of the VFs, which comprise overlapping tissue layers with different mechanical and geometrical properties <cit.>, underlies, in part, the different glottal shapes displayed in Figure <ref>. As such, we propose an Euler-Bernoulli composite beam model of the VFs to elucidate some of the mechanisms underlying glottal patterns prior to phonation. Beam models have been utilized previously to explore VF vibrations and phonation fundamental frequency <cit.>. We opt for this relatively simple modeling framework to facilitate exploration of the mechanisms underlying the resulting glottal shapes. To gain refined physiological insights into how intrinsic laryngeal muscles may influence glottal geometry, the proposed model is integrated with the muscle-controlled posturing model of <cit.>.
The organization of this work is as follows: a detailed derivation of the composite beam VF model is introduced in Section <ref>; analysis is conducted in Section <ref>; numerical simulations of the integrated beam and posturing model are presented in Section <ref>; Section <ref> presents discussion of the results; and the study is concluded in Section <ref>.
§ MODEL DEVELOPMENT
Herein, we propose a static Euler-Bernoulli-type composite beam model (see, for example, <cit.>) for the VFs, with the different VF layers represented by strata in the beam. For simplicity we assume symmetry with respect to the medial plane; hence, we consider only one (the left) VF. A schematic representation of the composite beam model is shown in Figure <ref>.
The VF beam model consists of three layers: (1) the mucosa with depth d_muc and cross-sectional area A_muc; (2) the vocal ligament with depth d_lig and cross-sectional area A_lig; and (3) the thyroarytenoid (vocalis) muscle with depth d_ta and cross-sectional area A_ta. For the sake of compact presentation, we define the index set
ℐ={muc,lig,ta},
where muc, lig, and ta refer to the mucosa, ligament, and TA muscle tissue, respectively. We assume that each layer has a uniform rectangular cross-section and the layer thicknesses (in the inferior-superior direction) are equal and denoted by b; thus, layer depth can be computed as d_i = A_i/ b for i ∈ℐ.
Our modelling framework assumes that VF deformation consists of (a) potentially large longitudinal stretching/compression with uniform strain, and (b) modest bending due to the induced moments inside the VF.
Let L_0 denote the resting VF length and L denote the VF length after longitudinal deformation due to the associated nominal uniform strain ε̅; that is,
L=(1+ε̅)L_0.
We assume the nominal strain is known a priori[Such as from the two-dimensional posturing model of <cit.>, which incorporates the mechanics of the arytenoid cartilages and cricothyroid joints, and relates them to VF strain, see the discussion in Section <ref>.].
Let x∈ [0,L] denote the position along the deformed VF configuration (after applying strain ε̅) relative to the anterior VF margin, and r denote the depth position along the axis perpendicular to the VF axis relative to the base of the TA muscle (see Figure <ref>). Consider a plane VF cross-section at position x, and let y_muc denote the relative position along the r-axis with respect to the geometrical center of the mucosa (i.e., y_muc=0 corresponds to the geometrical center of the mucosal cross-section). Similarly, let y_lig and y_ta be analogous coordinates for the ligament and TA muscle, respectively (see Figure <ref>). Note that the range of y_i is [-d_i/2,d_i/2] for i∈ℐ.
Let w(x) denote the transverse deflection of the beam (in the r-direction). Moreover, let u_i(x,y_i), i∈ℐ, denote the longitudinal displacement of the i^th VF layer, where longitudinal displacements are with respect to the deformed VF configuration under ε̅. In addition, let u̅_i(x)=u_i(x,y_i=0), i∈ℐ, denote the longitudinal displacement at the center of the i^th layer. Under Euler-Bernoulli beam theory (see, for example, <cit.>), the longitudinal displacement functions can be written as
u_i=u̅_i-y_iw', i∈ℐ,
where the prime symbol denotes differentiation with respect to x. Continuity of displacement fields necessitates that
u_muc(x,-d_muc/2)= u_lig(x,d_lig/2) and u_lig(x,-d_lig/2)= u_ta(x,d_ta/2) for all x∈ [0,L],
which yields the conditions
u̅_lig =u̅_muc+1/2(d_lig+d_muc)w',
u̅_ta =u̅_muc+1/2(d_ta+2d_lig+d_muc)w'.
Given longitudinal displacement in the i^th layer with respect to the deformed configuration under ε̅, the total strain in that layer is given by[Consider an infinitesimal line element dx_0 that experiences a composition of two deformations: the first is longitudinal deformation with associated uniform normal strain ε_0, and the second is due to a longitudinal displacement field u (with respect to the configuration after applying the strain ε_0). The length of the line element after applying strain ε_0 is dx_1=(1+ε_0)dx_0 (dx_0=dx_1/(1+ε_0)) and the length after applying the displacement field is d x_2=(1+du/dx_1)dx_1. Therefore, the total strain due to the combination of the two deformations is ε=(dx_2-dx_0)/dx_0=ε_0+(1+ε_0)du/dx_1.]
ε_i=ε̅+(1+ε̅)u'_i, i∈ℐ.
Substituting Equation (<ref>) into Equation (<ref>) results in
ε_i=ε̅+ (1+ε̅)(u̅'_i-y_iw”), i∈ℐ.
The stress field is estimated from strain and, in the case of the TA muscle layer, from the TA muscle activation 𝚊_ta, a non-dimensional parameter ranging between 0 and 1, with 0 indicating a completely flaccid muscle and 1 maximum contraction. Herein, we utilize local linearization about the nominal strain ε̅. That is, the stress functions in the VF layers, σ_i, i∈ℐ, are given by the approximate relations
σ_i= σ_i,0+E_i(ε_i-ε̅), i∈ℐ,
where
σ_j,0=σ̅_j(ε̅), E_j=dσ̅_j(ε̅)/dε, j∈{muc,lig},
σ_ta,0=σ̅_ta(ε̅,𝚊_ta), E_ta=dσ̅_ta(ε̅,𝚊_ta)/dε,
and σ̅_i, i∈ℐ denotes the nonlinear stress function associated with the i^th layer.
VF tissues exhibit a highly nonlinear hysteretic viscoelastic behaviour <cit.>. The literature is rich in studies attempting to develop VF constitutive models that capture, at least in part, the complex mechanical behaviors of the VF tissues (see the review study of <cit.>). For example, <cit.> proposed a one-dimensional modified Kelvin model for the VF tissues and laryngeal muscles with nonlinear active and passive stresses to account for tissue viscoelasticity and muscle activation, and implemented this constitutive modelling framework in simulations of laryngeal postures <cit.>. <cit.> proposed a constitutive model for the VF cover tissues, which consists of a hyperelastic equilibrium network in parallel with an inelastic,
time-dependent network, and integrated it with an ideal string model to gain insights into the influence of cover tissue mechanical behaviour on phonation fundamental frequency. In a study based on measurements collected from porcine VFs, <cit.> observed that the collagen fibrils, which are a major constituent of the VFs, are rope-shaped, and incorporated the geometric characteristics of the fibrils in a hyperelastic mechanical model. In an attempt to capture the anisotropic properties of the VF lamina propria, <cit.> proposed a structurally-based constitutive model that links the microstructural characteristics of the lamina propria to its macromechanical properties; the proposed model has shown good agreement with biaxial tensile testing measurements.
Herein, and for simplicity, we assume the constitutive stress-strain relations associated with the nonlinear stresses σ̅_i to be elastic (functions of strain only) and of exponential type (see <cit.>) with symmetry about zero strain. In particular,
σ̅_j=sign(ε) m_j(e^|n_jε|-1), j∈{muc,lig},
and, in the case of the TA muscle, we include stress induced by muscle activation, resulting in
σ̅_ta=sign(ε) m_ta(e^|n_taε|-1)+𝚊_taσ_a,max,
where m_i, n_i, i∈ℐ, are parameters of the constitutive relations, and σ_a,max is the maximum active stress in the TA muscle. Symmetric stress-strain relations are employed herein to account for compressive forces developed in the VF, which have often been dismissed in previous studies of VF biomechanics. The numerical values of the constitutive relation parameters adopted in this study are listed in Table <ref>.
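To make the constitutive relations and their local linearization concrete, the short Python sketch below evaluates the symmetric exponential stress laws and obtains σ_i,0 and the tangent modulus E_i by central differencing about a nominal strain. The parameter values (m_i, n_i, σ_a,max) are illustrative placeholders, not the values listed in the tables of this paper.

import numpy as np

# Placeholder constitutive parameters (m_i [Pa], n_i [-]); not the tabulated values.
PARAMS = {"muc": (1.0e3, 4.0), "lig": (2.0e3, 5.0), "ta": (1.5e3, 4.5)}
SIGMA_A_MAX = 1.0e5  # assumed maximum active TA stress [Pa], placeholder

def passive_stress(eps, m, n):
    # Symmetric exponential law: sign(eps) * m * (exp(|n * eps|) - 1)
    return np.sign(eps) * m * (np.exp(np.abs(n * eps)) - 1.0)

def layer_stress(eps, layer, a_ta=0.0):
    m, n = PARAMS[layer]
    sigma = passive_stress(eps, m, n)
    if layer == "ta":
        sigma += a_ta * SIGMA_A_MAX  # active contribution from TA activation
    return sigma

def linearize(eps_bar, layer, a_ta=0.0, d_eps=1.0e-6):
    # Nominal stress sigma_{i,0} and tangent modulus E_i about eps_bar (central difference).
    s0 = layer_stress(eps_bar, layer, a_ta)
    E = (layer_stress(eps_bar + d_eps, layer, a_ta)
         - layer_stress(eps_bar - d_eps, layer, a_ta)) / (2.0 * d_eps)
    return s0, E

# Example: linearize each layer about a 10 % elongation with moderate TA activation.
for lay in ("muc", "lig", "ta"):
    print(lay, linearize(0.10, lay, a_ta=0.5))

Note that the active TA stress enters σ_ta,0 but, being strain-independent, does not affect the tangent modulus E_ta.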
The normal forces in the VF layers are computed as
N_i=b∫_-d_i/2^d_i/2σ_idy_i, i∈ℐ.
Substituting Equations (<ref>) and (<ref>) into this expression yields
N_i=F_i,0+(1+ε̅)E_iA_iu̅'_i, i∈ℐ,
where
F_i,0=A_iσ_i,0, i∈ℐ,
denote the nominal normal forces generated by each layer.
The total internal normal force is then
N=∑_i∈ℐN_i.
The moment about the center of the i^th layer due to the stress developed in that layer is given by
M_i=-b∫_-d_i/2^d_i/2 y_iσ_idy_i, i∈ℐ.
Substituting Equations (<ref>) and (<ref>) into this formula gives
M_i=(1+ε̅)E_iI_iw”, i∈ℐ,
where
I_i=b∫_-d_i/2^d_i/2 y_i^2 dy_i, i∈ℐ,
denotes the area moment of inertia of the i^th layer.
Let the positions of the geometric centers of the VF layers along the r-axis be denoted r_i, i∈ℐ; that is,
r_muc =d_ta+d_lig+d_muc/2,
r_lig =d_ta+d_lig/2,
r_ta =d_ta/2,
see Figure <ref>. Take a cross-section at longitudinal position x and consider an arbitrary point on the cross-section located at a vertical position r= r_c (see Figure <ref>).
The moment at r_c, denoted M_c, is given by
M_c= ∑_i∈ℐ[ M_i+(r_c-r_i)N_i]
= (1+ε̅)(∑_i∈ℐE_iI_i)w”
+ (1+ε̅)∑_i∈ℐ(r_c-r_i)A_iE_iu̅'_i
+ ∑_i∈ℐ(r_c-r_i)F_i,0.
Consider an element of infinitesimal longitudinal length dx with left edge at position x (see Figure <ref>), and let V(x) and q(x) denote the shear force and distributed load per unit length, respectively. The force and moment balances on the infinitesimal element yield
N' =0,
V'-q =0,
M'+V =0,
where second order and higher terms are omitted.
From Equation (<ref>) we deduce that the total normal force N is constant through the VF length. By the assumption that the VF undergoes compression/elongation with associated strain ε̅ (see Equation (<ref>)), the force N should be equal to the force that results in that strain, which is the sum of nominal forces F_i,0, i∈ℐ, (in Equation (<ref>)). That is,
N=∑_i∈ℐF_i,0.
Therefore, by substituting Equations (<ref>) and (<ref>) into Equation (<ref>), we have
∑_i∈ℐ(1+ε̅)E_iA_iu̅'_i=0,
implying
∑_i∈ℐE_iA_iu̅'_i=0.
As we are interested in transverse deflection, we aim to obtain a balance equation solely in terms of w. For convenience, we define
l_lig = 1/2(d_lig+d_muc)
l_ta = 1/2(d_ta+2d_lig+d_muc),
l_muc =(l_ligE_ligA_lig+l_ta E_taA_ta)/(E_mucA_muc+E_ligA_lig+E_taA_ta),
and
α_muc =-l_muc,
α_lig =l_lig-l_muc,
α_ta =l_ta-l_muc.
From the continuity condition in Equation (<ref>) and the zero force condition in Equation (<ref>), it can be deduced that the displacement functions u̅_i, i∈ℐ, satisfy the relations
u̅'_i=α_iw”, i∈ℐ.
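The deduction behind this relation can be made explicit; the following brief sketch of the intermediate algebra uses only quantities already defined. Differentiating the continuity relations above gives u̅'_lig =u̅'_muc+l_lig w” and u̅'_ta =u̅'_muc+l_ta w”. Substituting these into the zero-force condition ∑_i∈ℐE_iA_iu̅'_i=0 yields
(E_mucA_muc+E_ligA_lig+E_taA_ta) u̅'_muc+(l_lig E_ligA_lig+l_ta E_taA_ta) w”=0,
so that u̅'_muc=-l_muc w”=α_muc w”, and consequently u̅'_lig=(l_lig-l_muc) w”=α_lig w” and u̅'_ta=(l_ta-l_muc) w”=α_ta w”.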
Substituting Equation (<ref>) into Equation (<ref>), we obtain
M_c=μ_c w”+M_c,0,
where
μ_c=(1+ε̅)∑_i∈ℐ[ E_iI_i+(r_c-r_i)A_iE_iα_i]
is the composite bending stiffness and
M_c,0=∑_i∈ℐ(r_c-r_i)F_i,0
is the nominal moment at r=r_c due to the nominal normal forces. Note that the bending stiffness is strain-dependent, which can be of importance in posturing scenarios with large VF strains.
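As a concreteness check, the following Python sketch assembles the composite quantities defined above (l_i, α_i, I_i, μ_c, and M_c,0) from layer depths, tangent moduli, and nominal forces. The numerical inputs are placeholders for illustration only, not the values listed in the tables of this paper.

import numpy as np

# Placeholder geometry [m], tangent moduli [Pa], and nominal forces [N].
b = 0.011
d = {"muc": 0.0015, "lig": 0.0015, "ta": 0.007}
E = {"muc": 2.0e4, "lig": 4.0e4, "ta": 3.0e4}
F0 = {"muc": 0.02, "lig": 0.03, "ta": -0.10}
eps_bar = 0.05

# Geometric-centre positions r_i measured from the base of the TA muscle.
r = {"ta": d["ta"] / 2,
     "lig": d["ta"] + d["lig"] / 2,
     "muc": d["ta"] + d["lig"] + d["muc"] / 2}
r_c = r["lig"]  # reference point at the centre of the ligament

# Kinematic coupling coefficients alpha_i.
l_lig = 0.5 * (d["lig"] + d["muc"])
l_ta = 0.5 * (d["ta"] + 2 * d["lig"] + d["muc"])
EA = {k: E[k] * b * d[k] for k in d}
l_muc = (l_lig * EA["lig"] + l_ta * EA["ta"]) / sum(EA.values())
alpha = {"muc": -l_muc, "lig": l_lig - l_muc, "ta": l_ta - l_muc}

# Area moments of inertia, composite bending stiffness, and nominal moment.
I = {k: b * d[k] ** 3 / 12 for k in d}
mu_c = (1 + eps_bar) * sum(E[k] * I[k] + (r_c - r[k]) * EA[k] * alpha[k] for k in d)
M_c0 = sum((r_c - r[k]) * F0[k] for k in d)
print(mu_c, M_c0)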
Combining Equations (<ref>) and (<ref>) results in
M”_c+q=0,
which, in terms of the deflection w (obtained by substituting in Equation (<ref>)) is
μ_c w''''+q=0.
The distributed load q is due to VF contact, which is assumed to be proportional to the transverse overlap beyond the medial plane. That is,
q=K_col(w-x tan(θ_G)) 𝐇(w-x tan(θ_G)),
where
K_col is a stiffness coefficient associated with VF contact, θ_G is the clockwise angle between the medial plane and the deformed VF configuration under strain ε̅ (see Figure <ref>), and 𝐇 is the Heaviside function.
In this work, we assume zero transverse deflection at the anterior and posterior ends of the VF. That is,
w(0)=w(L)=0.
Moreover, we assume zero moment at the posterior VF margin,
M_c(L)=0.
Furthermore, we assume a reactive moment at the anterior VF margin that is proportional to the rotational displacement with respect to the VF angle at rest, θ_0. The total angle at the anterior margin between the medial plane and the VF is approximately given by θ_G-w'(0). Consequently, the moment boundary condition at the anterior margin is given by
M_c(0)=-K_r(θ_G-w'(0)-θ_0),
where K_r is a rotational stiffness coefficient. Like the nominal strain ε̅, it is assumed that the angle θ_G is known a priori. Finally, we assume that r_c corresponds to the geometrical center of the ligament, that is r_c=r_lig. This assumption, in addition to the boundary condition given in Equation (<ref>), implies that the total normal force N is positioned at the geometrical center of the ligament. This can be deduced from the fact that M_c(L)=(r_c-r_N)N, where r_N denotes the r-position of the total normal force N (that is, the force centroid).
§ ANALYTICAL INSIGHTS FROM A SPECIAL CASE
To gain simple yet useful insights into how internal moments inside the VF beam model affect its curvature, we consider the scenario of zero contact forces (i.e., q(x)=0) and assume | w'(0) |≪|θ_G|, which reduces the boundary condition given in Equation (<ref>) to
M_c(0)= -K_r(θ_G-θ_0).
Equation (<ref>), with boundary conditions given in Equations (<ref>), (<ref>), and (<ref>), and the definition of M_c in Equation (<ref>), can be solved analytically. The curvature of the VF beam model, w”, is given explicitly by
w”=-M_c,0/μ_c+(1-x/L) K_r(θ_0-θ_G)/μ_c.
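As a brief sketch of the intermediate step: with q=0 the moment balance reduces to M”_c=0, so the moment varies linearly between its boundary values, M_c(x)=K_r(θ_0-θ_G)(1-x/L). Substituting this into M_c=μ_c w”+M_c,0 and solving for w” gives the expression above.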
Note that positive w” implies a convex VF geometry, whereas negative curvature implies a concave geometry. Recalling the definition of M_c,0 given in Equation (<ref>) and implementing the assumption that r_c=r_lig result in
-M_c,0 =-(d_ta+d_lig)/2 F_ta,0+(d_muc+d_lig)/2 F_muc,0
=M̃_ta+M̃_muc,
where
M̃_ta=-(d_ta+d_lig)/2 F_ta,0,
and
M̃_muc=(d_muc+d_lig)/2 F_muc,0.
By additionally defining
M̃_r=K_r(θ_0-θ_G),
Equation (<ref>) can be rewritten as
w”=1/μ_c[M̃_ta+M̃_muc+(1-x/L)M̃_r].
In the following discussion, we assume that bending stiffness μ_c is always positive (μ_c according to Equation (<ref>) changes with the elongation or compression of the VF beam model). First, let us analyze abstractly the effects of the moment terms M̃_ta, M̃_muc, and M̃_r on the VF curvature. We can observe from Equation (<ref>) that the effect of the reactive moment M̃_r on the VF curvature decays linearly with a maximum effect (in magnitude) at x=0 and zero effect at x=L. In contrast, M̃_ta and M̃_muc have spatially invariant (i.e., constant) effects on w”. The curvature is positively correlated with M̃_r and M̃_ta+M̃_muc. That is, M̃_r>0 and M̃_ta+M̃_muc>0 implies positive curvature (i.e., convex VF geometry), whereas M̃_r<0 and M̃_ta+M̃_muc<0 implies negative curvature (i.e., concave VF geometry). Considering the fact that the effect of the anterior reactive moment M̃_r decays linearly along the VF length and the nominal moments induced by the VF layers, M̃_ta and M̃_muc, are spatially invariant, there can arise an interesting scenario for which the curvature changes sign along the VF length. In particular, when
M̃_r>-(M̃_ta+M̃_muc) > 0,
w” is positive on [0,x_cr), where
x_cr=L(1+(M̃_ta+M̃_muc)/M̃_r),
and negative for x∈ (x_cr,L], a change from convexity to concavity. The conditions on the internal moments and resulting VF shapes from this analysis are summarized in Table <ref>.
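The analytical result above is easy to probe numerically. The brief Python sketch below evaluates w'' for an illustrative set of moments (placeholder values, chosen to satisfy the sign condition above) and locates the sign change x_cr.

import numpy as np

def curvature(x, L, mu_c, M_ta, M_muc, M_r):
    # Curvature w''(x) of the contact-free special case.
    return (M_ta + M_muc + (1.0 - x / L) * M_r) / mu_c

# Placeholder values with M_r > -(M_ta + M_muc) > 0 (hourglass regime).
L, mu_c = 0.015, 5.0e-4
M_ta, M_muc, M_r = -2.0e-3, 0.5e-3, 4.0e-3

x = np.linspace(0.0, L, 201)
w_pp = curvature(x, L, mu_c, M_ta, M_muc, M_r)
x_cr = L * (1.0 + (M_ta + M_muc) / M_r)
print("w'' at the ends:", w_pp[0], w_pp[-1])
print("convex on [0, %.4f), concave on (%.4f, L]" % (x_cr, x_cr))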
We note that a convex VF geometry is a defining characteristic of the bowed VF pattern. Moreover, the concave VF geometry can be associated with posterior glottal opening. Furthermore, transition along the VF length from convex to concave resembles the hourglass glottal pattern (see Figure <ref>). This demonstrates that the beam model has the capacity to produce the experimentally-observed glottal configurations shown in Figure <ref>.
Now, let us relate the findings listed in Table <ref> to physiological posturing scenarios. The term M̃_r, as seen from its definition in Equation (<ref>), is related to VF adduction and abduction, wherein M̃_r>0 corresponds to VF adduction (θ_G< θ_0) and M̃_r<0 corresponds to VF abduction (θ_G> θ_0). The term M̃_muc+M̃_ta, which is defined according to Equations (<ref>) and (<ref>), is determined by the reactive moments developed in the VF layers, especially the mucosa and TA muscle, during VF tensioning or compression.
Note that, based on the area measurements (Table <ref>) and the assumption of uniform thickness b, the moment arm of the TA muscle, (d_ta+d_lig)/2, is larger than that of the mucosa, (d_muc+d_lig)/2. In the case of VF compression, the compressive forces in the TA muscle are typically larger in magnitude than those in the mucosa (i.e., F_ta,0≪ F_muc,0<0); hence, the moment induced by the TA muscle, M̃_ta, is positive and predominant, making M̃_ta+M̃_muc>0. On the other hand, when the VF is tensioned due to activating the TA muscle, the force F_ta,0 is positive and predominant and, consequently, the term M̃_ta is negative (see Equation (<ref>)) and predominant. This scenario results in M̃_ta+M̃_muc<0. These relations between the moment terms and corresponding laryngeal posturing scenarios are summarized in Table <ref>.
The combined findings presented in Tables <ref> and <ref> can be summarized by the following observations: the bowed shape with convex VF geometry can be due to
(a) positive reactive moment at the anterior margin (M̃_r>0) during VF adduction, and/or
(b) internal moments during VF compression, wherein M̃_ta+M̃_muc>0.
Moreover, the concave VF shape arising in the case of posterior glottal opening can be due to
(a) negative reactive moment at the anterior margin (M̃_r<0) during VF abduction, and/or
(b) sufficiently large activation of the TA muscle, wherein M̃_ta+M̃_muc<0.
The hourglass shape may necessitate a coordinated laryngeal maneuver that involves sufficient TA activation (M̃_ta+M̃_muc<0) and VF adduction (M̃_r>0) such that M̃_r+M̃_ta+M̃_muc>0.
§ SIMULATIONS OF THE COMBINED BEAM AND POSTURING MODEL
In this section we further investigate VF curvature and incomplete glottal closure by combining our beam model with the VF posturing model introduced by <cit.>. In particular, we adopt the implementation of <cit.>. The posture model relates activation of the five intrinsic muscles to the prephonatory glottal configuration, and in particular, to the rotational and linear displacements of the cricothyroid joints and the arytenoid cartilages, where the VFs and intrinsic muscles are modelled as spring-like elements. From the aforementioned displacements, the VF nominal strain ε̅ and glottal angle θ_G are estimated. Similar to the muscle activation parameter 𝚊_ta embedded in the VF beam model, the posture model relies on five normalized muscle activation parameters, a_ta, a_ct, a_lca, a_ia, and a_pca, which correspond to the TA, cricothyroid (CT), lateral cricoarytenoid (LCA), interarytenoid (IA), and PCA muscles, respectively. In this study, we assume that the muscle activation parameter 𝚊_ta embedded in the VF beam model is identical in value to the muscle activation parameter a_ta in the posturing model (𝚊_ta=a_ta).
It is important to mention that the constitutive relations embedded in the VF beam model are different from those in the posture model implementation adopted from <cit.>. The focus of the current study is to replicate the VF static configurations, wherein we employ experimental stress-strain data based on human and canine samples <cit.> to prescribe the mechanical behaviors of the tissues. The posture model of <cit.> instead focuses on replicating physiologically accurate posturing and phonation outputs; this required ad hoc tuning of some posturing model parameters. Prior experimental and numerical studies typically suffer from significant variability in the reported numerical values of biomechanical parameters (see, e.g., <cit.>), and in some cases numerical values are missing altogether, which motivates the ad hoc tuning approach adopted by <cit.>.
The posturing model in <cit.> is dynamic due to inertial and viscous effects. In this study, and as we are interested in static posturing scenarios, the posture model is run until the VF strain and glottal angle reach steady-state and these values are input into the composite beam model. Once the VF strain and glottal angle parameters are fed into the beam model, Equation (<ref>), supplemented with Equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), is solved numerically. The aforementioned equations and boundary conditions are discretized by means of finite difference. For the simulations θ_0 is set as the glottal angle from the posturing simulations when all laryngeal muscles are inactive. Numerical values for the remaining VF beam model parameters are listed in Table <ref>.
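To illustrate one way the boundary-value problem can be discretized, the sketch below solves the beam equation with contact by fixed-point iteration: given the current deflection, the contact load and anterior reactive moment are evaluated, the moment distribution is integrated from M''_c = -q, and the deflection is recovered from w'' = (M_c - M_c,0)/μ_c with w(0) = w(L) = 0. This is a minimal sketch under placeholder parameter values, not the finite-difference implementation used for the reported simulations.

import numpy as np

def cumtrapz0(f, x):
    # Cumulative trapezoidal integral of f over x, starting at zero.
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return out

def solve_beam(L, mu_c, M_c0, K_r, K_col, theta_G, theta_0, n=401, iters=80):
    x = np.linspace(0.0, L, n)
    w = np.zeros(n)
    for _ in range(iters):
        pen = w - x * np.tan(theta_G)          # overlap beyond the medial plane
        q = K_col * pen * (pen > 0.0)          # contact load (Heaviside)
        wp0 = (w[1] - w[0]) / (x[1] - x[0])    # w'(0) from the current iterate
        M0 = -K_r * (theta_G - wp0 - theta_0)  # anterior reactive moment
        S2 = cumtrapz0(cumtrapz0(q, x), x)     # double integral of q
        Mp = -S2                               # particular solution of M'' = -q
        slope = (0.0 - Mp[-1] - M0) / L        # enforce M(L) = 0
        M = Mp + M0 + slope * x
        g = (M - M_c0) / mu_c                  # curvature w''
        wp = cumtrapz0(g, x)
        w_part = cumtrapz0(wp, x)
        w_new = w_part - (w_part[-1] / L) * x  # enforce w(0) = w(L) = 0
        w = 0.5 * w + 0.5 * w_new              # under-relaxed fixed-point update
    return x, w

# Placeholder inputs, illustrative only.
x, w = solve_beam(L=0.015, mu_c=5.0e-4, M_c0=1.0e-3, K_r=1.0e-2,
                  K_col=1.0e5, theta_G=np.deg2rad(2.0), theta_0=np.deg2rad(10.0))
print(w.min(), w.max())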
To clearly illustrate the glottal geometries resulting from the simulations, a coordinate system (x_1,x_2) with origin at the anterior margin of the VFs is established. The x_1-axis is aligned along the medial plane pointing in the posterior direction and the x_2-axis is perpendicular to the medial plane pointing to the right, relative to the human body frame (see Figure <ref>). In all figures presented in this section the VF configurations are plotted with respect to this coordinate system; model symmetry is utilized to produce the opposing VF shape.
This section explores several laryngeal maneuvers and how they influence the VF geometry[The end points (anterior and posterior margins) of the VFs resulting from the proposed beam model are identical to those established by the posturing model of <cit.>. Accounting for internal bending moments results in deviation of the VF shape from the linear medial surface prescription (with angle θ_G) of <cit.>.]. In particular, and motivated by previous clinical, experimental, and numerical findings <cit.>, we consider laryngeal maneuvers associated with adductory (TA, LCA, and IA) and abductory (PCA), muscles as they have been found to play major roles in inducing curved glottal geometries. The CT muscle has been found to play a major role in regulating phonation fundamental frequency by stretching the VFs, but not in posturing and is thus excluded from this study. Herein, we compare simulation results with findings from previous clinical, experimental, and high-fidelity numerical studies to verify the proposed VF beam model. Moreover, we attempt to elucidate potential mechanisms underlying the curved VF geometries observed clinically by analyzing the beam model details (see Figure <ref>).
First, we investigate the effects of increasing co-activation of the LCA and IA muscles, which are responsible for adducting the VFs <cit.>, while the remaining intrinsic muscles are inactive. Figure <ref> presents the glottal shapes and the induced moments corresponding to simulations wherein LCA and IA muscle activation levels are increased simultaneously. Figure <ref>(left) shows that co-activation of the LCA and IA muscles leads to posterior glottal closure with a remaining mid-membranous gap. This convex VF shape matches previous clinical and numerical findings <cit.>. Figure <ref>(right) shows that the VF convexity is due to the predominance of the reactive moment at the anterior VF margin (M̃_r is positive and relatively large), which arises from VF adduction (θ_G<θ_0), in agreement with the theoretical predictions in Section <ref>.
Figure <ref> exhibits the glottal shapes and induced moments corresponding to simulations wherein TA muscle activation levels are increased, while all other intrinsic muscles are inactive. Figure <ref>(left) shows that isolated activation of the TA muscle leads to anterior and mid-membranous glottal closure with remaining posterior opening, while also shortening the folds. The resulting concave VF shapes are in agreement with previous experimental and numerical investigations <cit.>. Figure <ref>(right) shows that the concavity is primarily determined by the internal moments induced by the TA muscle activation (M̃_ta is negative and relatively large in magnitude), which is in alignment with the analysis in Section <ref>.
To explore the mechanics of the hourglass glottal shape, we examine the glottal shape associated with increasing activation of the TA muscle while the LCA and IA are kept at constant non-zero activation levels. Simulating such maneuvers is encouraged by the findings from the theoretical analysis in Section <ref> and the experimental observations in <cit.>. Figure <ref> displays the glottal shapes and induced moments associated with slightly increasing activation of the TA muscle, while the LCA and IA are kept at constant levels (a_lca=a_ia=0.6). Figure <ref>(left) shows that in the case of zero TA activation, the glottal shape is bowed with slight, but not full, posterior adduction. As TA activation is increased, a medial bulge is observed whereas anteriorly the glottal geometry is still convex, resulting in an overall hourglass shape. Figure <ref>(right) displays how the internal moments M̃_ta and M̃_muc and the reactive moment at the anterior margin M̃_r satisfy the condition of Equation (<ref>). This aligns with the analysis in Section <ref> and suggests that the hourglass glottal shape necessitates involvement of reactive moments at the anterior VF margin (associated with VF adduction) and internal moments induced inside the VF layers (primarily the TA muscle). In addition, this finding is in good agreement with observations in <cit.>, which showed that an hourglass shape is induced by coactivating all the adductory muscles.
In aggregate, Figures <ref>-<ref> indicate that full glottal closure cannot be produced when either the TA muscle or the LCA and IA muscles are excluded; therefore, in the next set of simulations, we investigate the effects of co-activating all of the adductory muscles. Figure <ref> shows that (almost) full glottal closure can be attained when all adductors are co-activated (a_ia=a_lca=0.45, a_ta=0.7), which is in alignment with previous experimental and numerical investigations <cit.>.
Finally, we explore the effects of the PCA muscle, a primary VF abductor, on the glottal geometry, considering simulations motivated by the clinical observations highlighted in <cit.>. Figure <ref> displays glottal patterns associated with increasing PCA activation, where adductory muscles are kept at activation levels associated with near full closure. The figure shows that increasing PCA activation leads to posterior opening, while the VFs sustain concave shapes similar to those observed when the TA alone is activated (see Figure <ref>). This suggests that PCA activation tends to neutralize the posterior adductory effects of the LCA and IA muscles, which supports the clinical observations highlighted in <cit.>.
§ DISCUSSION
The results of Sections <ref> and <ref> highlight potential mechanisms underlying different patterns of incomplete glottal closure. In particular, results indicate that bowed VF shapes result, in part, from low or null activation of the TA muscle in combination with co-activation of the LCA and IA muscles.
The predominant mechanism in this case is the anterior reactive moment that resists bringing the VFs together during adduction. This pattern can also arise in the case of low TA muscle activation and VF compression, as suggested by the analysis in Section <ref>. In this case, the internal moment induced by the TA muscle tissue (M̃_ta is positive and predominantly large) is the driving factor. This scenario (bowing due to VF compression) can potentially take place when extrinsic laryngeal muscles are excessively activated, especially those associated with VF compression, such as the thyrohyoid muscle <cit.>.
In addition, our analysis suggests that posterior glottal opening with combined VF concavity results from high activation of the TA muscle and low or null activation of the LCA and IA muscles. Our model suggests that the driving mechanism here is the internal moment induced by the TA muscle activation M̃_ta, which is negative and predominantly large in magnitude in this case. A similar glottal pattern also occurs when all adductory muscles are activated in addition to the activation of the PCA muscle. This supports the hypothesis of <cit.>, regarding the excessive activation of the PCA muscle in patients with MTD. Finally, our analysis suggests that the hourglass glottal shape may emerge from laryngeal maneuvers that involve, for example, moderate co-activation of all adductory muscles, where both anterior reactive moment and internal moment due to TA muscle activation are at play and opposing each other.
The implications above concerning potential connections between incomplete and curved glottal closure patterns and particular muscular executions may help speech therapists to uncover the underlying laryngeal mechanisms associated with some voice disorders. As highlighted in the introduction, incomplete glottal closure can be linked to voice disorders that are characterized by excessive, imbalanced, or deficient activity of the intrinsic and extrinsic muscles, such as MTD and Parkinson's disease. Our analysis in the current work suggests two potential mechanisms underlying bowed VFs in some patients with voice disorders: (1) the TA muscle is not properly activated (possibly due to muscle activation imbalance), and (2) excessive activation of extrinsic neck muscles, leading to VF compression. Moreover, our analysis posits that patients with abnormal posterior glottal opening and concave VF geometry either (1) insufficiently activate the LCA and IA muscles, whereas the TA muscle is activated sufficiently (in comparison to normal posturing scenarios), or (2) suffer from excessive activation of all adductory and abductory muscles, where the PCA muscle activation mitigates the effects of the LCA and IA muscles, in agreement with the postulation in <cit.> concerning patients with MTD. In summary, speech clinicians and therapists may consider the aforementioned candidate underlying mechanisms of curved and incomplete glottal closure patterns when examining patients with voice disorders involving muscular inefficiencies/deficiencies.
A number of simplifying assumptions are embedded in the presented model of this study, including (1) negligible shear deformation, (2) negligible elastic forces from the connective tissues attached to the TA muscle, (3) negligible motions in the superior-inferior direction, (4) neglecting potential bending effects from the vocal ligament by setting r_c=r_lig, (5) zero moments at the posterior ends of the VFs, (6) small transverse VF deflections, and (7) one-way coupling between the VF beam model and the posturing model, where any contact forces emerging due to the VF curvature do not alter the mechanics of the laryngeal cartilages.
Assumption (1) is a consequence of the adopted Euler-Bernoulli framework. Note that with the uniform thickness assumption, and considering the model dimensions given in Tables <ref> and <ref>, the total VF depth, ∑_i∈ℐd_i, is approximately 10 mm whereas the resting VF length is 15 mm; hence, the depth and length dimensions are quite comparable. For such cases (thick beams), Timoshenko beam theory <cit.> is typically adopted to account for shear stresses[ It is worth noting that the two theories (Euler-Bernoulli and Timoshenko) do coincide for a uniform homogeneous simply-supported linear beam with specified moments at the end points and zero distributive load, regardless of the beam thickness or mechanical properties (in Section <ref>, we studied a similar simply-supported case with zero distributive load). These two theories, when compared, tend to produce qualitatively, but not necessarily quantitatively, similar predictions (see, e.g., <cit.>).]. As the goal of the current work is to construct a simple analytically-tractable model that predicts qualitatively the curved glottal configurations observed clinically, we adhered to the Euler-Bernoulli framework, leaving derivations of more complex models to future work. We posit that assumption (2) is reasonable as the elastic forces from the connective tissues are passive, mostly only restricting the extent to which the VF deflects. Moreover, assumption (3) is suitable as the majority of the VF motion during posturing occurs medially and/or laterally (see, e.g., the findings of <cit.>). Regarding assumption (4), the ligament is stiffer than other VF layers and it geometrically forms the intermediate VF layer, making it the `chassis' of the VF layered structure; hence setting r_c=r_lig, which indicates that the total normal force in the VF is positioned at the center of the ligament (see the end of Section <ref>), is sensible. Assumptions (5)-(7), in addition to other assumptions such as the rectangular geometries of the VF layers, are introduced to primarily simplify our analysis; hence, further investigation is needed to verify the validity of such assumptions in different posturing scenarios, and refine them when needed.
Despite these simplifying assumptions, our modelling framework is capable of predicting some of the glottal patterns observed in previous clinical and high-fidelity numerical studies, which is encouraging. Still, the speculations and potential explanations provided in this work need further extensive investigation into the biomechanics of VF posturing in both healthy subjects and patients with imbalances or deficiencies in the laryngeal muscles.
§ CONCLUSION
In this study, we introduced a simple one-dimensional Euler-Bernoulli-type composite beam model of the vocal folds to understand the mechanisms underlying glottal configuration and incomplete glottal closure. The model, despite its simplicity, was capable of predicting several clinically observed glottal configurations. Our analysis highlighted how the different patterns of incomplete glottal closure can arise naturally due to the layered VF structure and the associated induced moments. We coupled the proposed beam model with the posturing model of <cit.> to gain physiologically relevant insights into the role of laryngeal muscle activation. Our analysis showed that a bowed VF shape can arise due to activation of the LCA and IA muscles without incorporating the TA muscle during adduction, or due to VF compression. On the other hand, isolated activation of the TA muscle results in medial bulging and posterior glottal opening. Posterior opening can also occur due to activating all adductors in addition to activating the PCA muscle. Moreover, our analysis suggested that an hourglass glottal shape can arise from specific laryngeal maneuvers involving the adductory laryngeal muscles. These results provided potential explanations and conjectures regarding the posturing mechanics of patients with voice disorders such as MTD.
In future efforts we aim to refine our modelling framework, where two-way coupling between the VF beam model presented herein and the posturing model of <cit.> is incorporated, to account for potential effects that curved VF geometries may exert on the mechanics of laryngeal cartilages. Moreover, we intend to incorporate the beam model with numerical phonation models, to study how curved and partially closed glottal geometries may influence tissue-flow-acoustic interactions, voice quality, and vocal function during phonation.
§ ACKNOWLEDGMENTS
Research reported in this work was supported by the NIDCD of the NIH under Award No. P50DC015446, and ANID BASAL FB0008. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
§ DECLARATION OF COMPETING INTEREST
Matías Zañartu has a financial interest in Lanek SPA, a company focused on developing and commercializing biomedical devices and technologies. His interests were reviewed and are managed by the Universidad Técnica Federico Santa María in accordance with its conflict-of-interest policies.
§ AUTHOR CONTRIBUTIONS
Mohamed Serry: Conceptualization, Methodology, Formal analysis, Software, Visualization, Writing - original draft preparation. Gabriel Alzamendi: Software, Validation, Writing - reviewing and editing. Matías Zañartu: Validation, Writing - reviewing and editing, funding acquisition. Sean Peterson: Supervision, Validation, Writing - reviewing and editing, funding acquisition.
apalike
[Altman et al., 2005]AltmanAtkinsonLazarus05
Altman, K. W., Atkinson, C., and Lazarus, C. (2005).
Current and emerging concepts in muscle tension dysphonia: a 30-month
review.
Journal of Voice, 19(2):261–267.
[Alzamendi et al.,
2020]AlzamendiManriquezHadwinDengPetersonErathMehtaHillmanZanartu20
Alzamendi, G. A., Manríquez, R., Hadwin, P. J., Deng, J. J., Peterson,
S. D., Erath, B. D., Mehta, D. D., Hillman, R. E., and Zañartu, M.
(2020).
Bayesian estimation of vocal function measures using laryngeal
high-speed videoendoscopy and glottal airflow estimates: An in vivo case
study.
The Journal of the Acoustical Society of America,
147(5):EL434–EL439.
[Alzamendi et al., 2022]AlzamendiPetersonErathHillmanZanartu21
Alzamendi, G. A., Peterson, S. D., Erath, B. D., Hillman, R. E., and
Zañartu, M. (2022).
Triangular body-cover model of the vocal folds with coordinated
activation of the five intrinsic laryngeal muscles.
The Journal of the Acoustical Society of America,
151(1):17–30.
[Bauchau and Craig, 2009]BauchauCraig09
Bauchau, O. A. and Craig, J. I. (2009).
Structural analysis: with applications to aerospace structures,
volume 163.
Springer Science & Business Media.
[Beck and da Silva Jr, 2011]BeckDaSilva11
Beck, A. T. and da Silva Jr, C. R. (2011).
Timoshenko versus Euler beam theory: Pitfalls of a deterministic
approach.
Structural Safety, 33(1):19–25.
[Braak and Braak, 2000]BraakBraak00
Braak, H. and Braak, E. (2000).
Pathoanatomy of Parkinson’s disease.
Journal of neurology, 247(2):II3–II10.
[Chan and Titze, 1999]ChanTitze99
Chan, R. W. and Titze, I. R. (1999).
Viscoelastic shear properties of human vocal fold mucosa: Measurement
methodology and empirical results.
The Journal of the Acoustical Society of America,
106(4):2008–2021.
[Chhetri and Neubauer, 2015]ChhetriNeubauer15
Chhetri, D. K. and Neubauer, J. (2015).
Differential roles for the thyroarytenoid and lateral cricoarytenoid
muscles in phonation.
The Laryngoscope, 125(12):2772–2777.
[Chhetri et al., 2012]ChhetriNeubauerBerry12
Chhetri, D. K., Neubauer, J., and Berry, D. A. (2012).
Neuromuscular control of fundamental frequency and glottal posture at
phonation onset.
The Journal of the Acoustical Society of America,
131(2):1401–1412.
[Choi et al., 1993a]ChoiBerkeYeKreiman93
Choi, H.-S., Berke, G. S., Ye, M., and Kreiman, J. (1993a).
Function of the posterior cricoarytenoid muscle in phonation: in vivo
laryngeal model.
Otolaryngology—Head and Neck Surgery, 109(6):1043–1051.
[Choi et al., 1993b]ChoiYeBerkeKreiman93
Choi, H.-S., Ye, M., Berke, G. S., and Kreiman, J. (1993b).
Function of the thyroarytenoid muscle in a canine laryngeal model.
Annals of Otology, Rhinology & Laryngology, 102(10):769–776.
[Dejonckere and Kob, 2009]DejonckereKob09
Dejonckere, P. H. and Kob, M. (2009).
Pathogenesis of vocal fold nodules: new insights from a modelling
approach.
Folia Phoniatrica et Logopaedica, 61(3):171–179.
[Geng et al., 2020]GengPhamXueZheng20
Geng, B., Pham, N., Xue, Q., and Zheng, X. (2020).
A three-dimensional vocal fold posturing model based on muscle
mechanics and magnetic resonance imaging of a canine larynx.
The Journal of the Acoustical Society of America,
147(4):2597–2608.
[Hanson et al., 1984]HansonGerrattWard84
Hanson, D. G., Gerratt, B. R., and Ward, P. H. (1984).
Cinegraphic observations of laryngeal function in Parkinson's
disease.
The Laryngoscope, 94(3):348–353.
[Hillman et al., 2020]HillmanSteppVanStanZanartuMehta20
Hillman, R. E., Stepp, C. E., Van Stan, J. H., Zañartu, M., and Mehta,
D. D. (2020).
An updated theoretical framework for vocal hyperfunction.
American Journal of Speech-Language Pathology, pages 1–7.
[Hocevar-Boltezar et al., 1998]HocevarBoltezarJankoZargi98
Hocevar-Boltezar, I., Janko, M., and Zargi, M. (1998).
Role of surface EMG in diagnostics and treatment of muscle tension
dysphonia.
Acta oto-laryngologica, 118(5):739–743.
[Hong et al., 1997]HongYeKimKevorkianBerke97
Hong, K. H., Ye, M., Kim, Y. M., Kevorkian, K. F., and Berke, G. S. (1997).
The role of strap muscles in phonation: in vivo canine laryngeal
model.
Journal of Voice, 32.
[Hunter and Titze, 2007]HunterTitze07
Hunter, E. J. and Titze, I. R. (2007).
Refinements in modeling the passive properties of laryngeal soft
tissue.
Journal of Applied Physiology, 103(1):206–219.
[Hunter et al., 2004]HunterTitzeAlipour04
Hunter, E. J., Titze, I. R., and Alipour, F. (2004).
A three-dimensional model of vocal fold abduction/adduction.
The Journal of the Acoustical Society of America,
115(4):1747–1759.
[Min et al., 1995]MinTitzeAlipour95
Min, Y. B., Titze, I. R., and Alipour-Haghighi, F. (1995).
Stress-strain response of the human vocal ligament.
Annals of Otology, Rhinology & Laryngology, 104(7):563–569.
[Miri, 2014]Miri14
Miri, A. K. (2014).
Mechanical characterization of vocal fold tissue: a review study.
Journal of Voice, 28(6):657–667.
[Miri et al., 2013]MiriHerisTripathyWisemanMongeau13
Miri, A. K., Heris, H. K., Tripathy, U., Wiseman, P. W., and Mongeau, L.
(2013).
Microstructural characterization of vocal folds toward a
strain-energy model of collagen remodeling.
Acta biomaterialia, 9(8):7957–7967.
[Morrison and Rammage, 1993]MorrisonRammage93
Morrison, M. D. and Rammage, L. A. (1993).
Muscle misuse voice disorders: description and classification.
Acta oto-laryngologica, 113(3):428–434.
[Nguyen et al., 2009]NguyenKennyTranLivesey09
Nguyen, D. D., Kenny, D. T., Tran, N. D., and Livesey, J. R. (2009).
Muscle tension dysphonia in vietnamese female teachers.
Journal of Voice, 23(2):195–208.
[Palaparthi et al., 2019]PalaparthiSmithTitze19
Palaparthi, A., Smith, S., and Titze, I. R. (2019).
Mapping thyroarytenoid and cricothyroid activations to postural and
acoustic features in a fiber-gel model of the vocal folds.
Applied Sciences, 9(21):4671.
[Pillutla et al., 2022]PillutlaReddySchlegelZhangChhetri22
Pillutla, P., Reddy, N. K., Schlegel, P., Zhang, Z., and Chhetri, D. K. (2022).
Control of pre-phonatory glottal shape by intrinsic laryngeal
muscles.
The Laryngoscope.
[Rajaei et al., 2014]RajaeiBarzegarMojiriNilforoush14
Rajaei, A., Barzegar Bafrooei, E., Mojiri, F., and Nilforoush, M. H. (2014).
The occurrence of laryngeal penetration and aspiration in patients
with glottal closure insufficiency.
International Scholarly Research Notices, 2014.
[Roy, 2008]Roy08
Roy, N. (2008).
Assessment and treatment of musculoskeletal tension in
hyperfunctional voice disorders.
International Journal of Speech-Language Pathology,
10(4):195–209.
[Södersten et al., 1995]SoderstenHertegardHammarberg95
Södersten, M., Hertegård, S., and Hammarberg, B. (1995).
Glottal closure, transglottal airflow, and voice quality in healthy
middle-aged women.
Journal of Voice, 9(2):182–197.
[Timoshenko, 1921]Timoshenko21
Timoshenko, P. S. (1921).
LXVI. On the correction for shear of the differential equation for
transverse vibrations of prismatic bars.
The London, Edinburgh, and Dublin Philosophical Magazine and
Journal of Science, 41(245):744–746.
[Titze and Alipour, 2006]TitzeAlipour06
Titze, I. and Alipour, F. (2006).
The Myoelastic-Aerodynamic Theory of Phonation.
[Titze and Hunter, 2004]TitzeHunter04
Titze, I. R. and Hunter, E. J. (2004).
Normal vibration frequencies of the vocal ligament.
The Journal of the Acoustical Society of America,
115(5):2264–2269.
[Titze and Hunter, 2007]TitzeHunter07
Titze, I. R. and Hunter, E. J. (2007).
A two-dimensional biomechanical model of vocal fold posturing.
The Journal of the Acoustical Society of America,
121(4):2254–2260.
[Yin and Zhang, 2014]YinZhang14
Yin, J. and Zhang, Z. (2014).
Interaction between the thyroarytenoid and lateral cricoarytenoid
muscles in the control of vocal fold adduction and eigenfrequencies.
Journal of biomechanical engineering, 136(11):111006.
[Yin and Zhang, 2016]YinZhang16
Yin, J. and Zhang, Z. (2016).
Laryngeal muscular control of vocal fold posturing: Numerical
modeling and experimental validation.
The Journal of the Acoustical Society of America,
140(3):EL280–EL284.
[Zañartu et al., 2014]ZanartuGalindoErathPetersonWodickaHillman14
Zañartu, M., Galindo, G. E., Erath, B. D., Peterson, S. D., Wodicka, G. R.,
and Hillman, R. E. (2014).
Modeling the effects of a posterior glottal opening on vocal fold
dynamics with implications for vocal hyperfunction.
The Journal of the Acoustical Society of America,
136(6):3262–3271.
[Zhang et al., 2006]ZhangSiegmundChan06
Zhang, K., Siegmund, T., and Chan, R. W. (2006).
A constitutive model of the human vocal fold cover for fundamental
frequency regulation.
The Journal of the Acoustical Society of America,
119(2):1050–1062.
[Zhang et al., 2007]ZhangSiegmundChan07
Zhang, K., Siegmund, T., and Chan, R. W. (2007).
A two-layer composite model of the vocal fold lamina propria for
fundamental frequency regulation.
The Journal of the Acoustical Society of America,
122(2):1090–1101.
[Zhang, 2019]Zhang19
Zhang, Z. (2019).
Structural constitutive modeling of the anisotropic mechanical
properties of human vocal fold lamina propria.
The Journal of the Acoustical Society of America,
145(6):EL476–EL482.
| http://arxiv.org/abs/2307.00935v1 | 20230703112101 | Examining NHD vs QHD in the GCM THOR with non-grey radiative transfer for the hot Jupiter regime | ["Pascal A. Noti", "Elspeth K. H. Lee", "Russell Deitrick", "Mark Hammond"] | astro-ph.EP | ["astro-ph.EP", "astro-ph.IM"] |
Global circulation models (GCMs) play an important role in contemporary investigations of exoplanet atmospheres. Different GCMs evolve various sets of dynamical equations, which can result in different atmospheric properties between models. In this study, we investigate the effect of different dynamical equation sets on the atmospheres of hot Jupiter exoplanets. We compare GCM simulations using the quasi-primitive dynamical equations (QHD) and the deep Navier-Stokes equations (NHD) in the GCM THOR. We utilise a two-stream non-grey "picket-fence" scheme to increase the realism of the radiative transfer calculations. We perform GCM simulations covering a wide grid of system parameters drawn from the population of exoplanets. Our results show significant differences between simulations with the NHD and QHD equation sets at lower gravity, higher rotation rates or higher irradiation temperatures. The chosen parameter range shows the relevance of selecting a dynamical equation set appropriate to the system and planetary properties. Our results show that the climate states of hot Jupiters are very diverse, and that exceptions to prograde superrotation can often occur. Overall, our study shows how different climate states can arise solely from different selections of the Navier-Stokes equations and approximations. We show the divergent behaviour of approximations developed for Earth GCMs when applied to non-Earth-like planets.
planets and satellites: atmospheres – planets and satellites: gaseous planets – methods: numerical – radiative transfer
§ INTRODUCTION
Numerical weather and climate predictions provide useful information for our daily lives, naval and aviation safety, national policy, strategy development and for research in atmospheric science. Running numerical simulations can be computationally expensive; therefore, approximations of the Navier-Stokes equations <cit.> have been proposed for global-scale simulations. <cit.> proposed the basis of the hydrostatic primitive equations (HPEs). <cit.> derived a variation of Bjerknes's primitive equations to perform the first attempt at a numerical weather forecast by hand. <cit.> produced the first numerical weather model on ENIAC in 1950. Already at the dawn of numerical forecasting, <cit.> identified those approximations as an important obstacle to overcome.
The limits of the HPEs are still assessed to this day; e.g. the energy conservation in global circulation models (GCMs) for Earth <cit.>, for short-period waves at small scales <cit.>, as well as for global simulations of exoplanetary atmospheres <cit.>.
While numerical models utilizing the primitive equations have been relatively successfully applied to Earth's atmosphere, the applicability of the primitive equation set has been questioned for exoplanet atmospheres. For example, <cit.> discovered important differences in the zonal advection between simulations using the "primitive" equations and the "full" Navier-Stokes equations (according to the nomenclature of <cit.>). Those differences in the zonal advection lead, for example, to significant differences in the atmospheric redistribution of heat in simulations of the warm and tidally-locked small Neptune GJ 1214b. For hot Jupiters, <cit.> report changes of 15 to 20 % in the peak zonal winds between simulations with the non-hydrostatic, deep-atmosphere (NHD) and quasi-hydrostatic, deep-atmosphere (QHD) equation sets.
Atmospheric simulations are of interest for the interpretation of exoplanet observations; the era of JWST will bring us several phase curve observations of exoplanet atmospheres, ranging from hot giants to temperate terrestrials, at higher resolutions than ever before. Continuous and long-duration observations, combined with a larger spectral resolution, collecting area, and a wider spectral coverage ranging from 0.6 μ m to 20 μ m, will bring a quantum leap forward in the study of exoplanets and their habitability <cit.>. At the same time, <cit.> highlight the importance of multidimensionality in interpreting observations. Therefore, simulations of the dynamics and the 3D structure of exoplanetary atmospheres are essential tools for helping to understand and interpret the new observational data from JWST. Moreover, phase curve data of hot Jupiters in the optical and infrared wavelength regimes, from the Transiting Exoplanet Survey Satellite <cit.>, the CHaracterising ExOPlanet Satellite <cit.>, the Atmospheric Remote-sensing Infrared Exoplanet Large survey <cit.>, and the high-altitude balloon mission EXoplanet Climate Infrared TElescope <cit.>, can also benefit from the findings of 3D simulations of exoplanetary atmospheres. Since 3D simulations of exoplanetary atmospheres are necessary tools for the understanding of exoplanets, identifying significant differences between simulations with different dynamical equations is important.
<cit.> and <cit.> reviewed the shallow, deep, hydrostatic, quasi-hydrostatic and non-hydrostatic equations in GCMs. For a complete overview on the NHD and QHD equation sets, see <cit.>. Other conventions of dynamical equation sets can also be used e.g. <cit.>.
Simulations with HPEs can represent gravity waves and nearly-geostrophic motions <cit.>. For representing nearly-geostrophic or `balanced' motion, much attention has been devoted to deriving approximations <cit.>. Several approximations can be found in the HPEs: the `hydrostatic' assumption, the `shallow atmosphere', the `spherical geopotential approximation' and the `traditional approximation' <cit.>.
The traditional approximation was first introduced to study the oceanic and atmospheric dynamics of Earth, where the Coriolis terms that are negligible in Earth's shallow atmosphere are dropped <cit.>. In the momentum equation, several terms go to zero <cit.>: for the longitudinal wind u the terms 2Ωωcosϕ (traditional approximation) and -uω/r (shallow approximation), for the latitudinal wind v the term -vω/r (shallow approximation), and for the vertical wind ω the terms 2Ω ucosϕ (traditional approximation) and (u^2 + v^2)/r (shallow approximation). In astrophysics, the traditional approximation of rotation (TAR) might describe the dynamics of gravito-inertial waves on stars <cit.> well, but it is problematic for some exoplanets, such as the warm and tidally-locked small Neptune GJ 1214b, as <cit.> showed. The discussion of the cosϕ terms has been in contention for many years <cit.>. Studies by <cit.>, using linearized and adiabatic analysis, showed those cosϕ terms are minor given the parameters of Earth if the ratio of planetary rotation frequency to buoyancy frequency is very small (≪ 1). <cit.> regarded the terms as unsettling, because the buoyancy frequency differs across the globe and diabatic processes drive the global circulation. Furthermore, they find that the cosϕ terms are problematic if the buoyancy frequency increases through climate change. <cit.> and <cit.> showed the importance of the cosϕ terms near the equator. Moreover, the cosϕ terms become relevant for mesoscale motion <cit.>. Exoplanets simulated with models adopting the traditional approximation vary widely in their climate regimes. Therefore, we could assume that the traditional approximation might not be valid for many exoplanets.
Models with non-hydrostatic equations (NHEs) for global simulations are used for three reasons <cit.>: firstly, models with HPEs cannot resolve flows effectively at high resolution, so <cit.> suggested applying a single equation set for all scales. Secondly, <cit.> saw that semi-implicit methods treat acoustic waves efficiently and that more accurate NHEs should be developed. Thirdly, <cit.> judged the mathematical derivations of HPEs as less mature compared to NHEs, which are designed for classical compressible fluid dynamics. Even outside their original discipline, meteorology, some approximations already perform less well on Earth; for the dynamics of deep oceans, the cosϕ terms become more important <cit.> because of the larger ratio of the planetary rotation frequency to the buoyancy frequency. The larger ratio is due to the buoyancy frequency being smaller in the ocean by one order of magnitude <cit.>.
To better understand the observational data, <cit.>, <cit.>, <cit.> and <cit.> implemented GCMs for Jupiter, Saturn, Mars and Venus. Since the first discovery <cit.>, several hundred exoplanets have been observed. Exoplanets and their central stars vary widely in their parameters, which makes modelling challenging <cit.>. Hot Jupiters are of prime interest, since they represent easier targets for observation due to their large radius and their stronger thermal emission. <cit.>, <cit.> and <cit.> adapted some of the first GCMs to hot Jupiters.
Several groups have used GCMs or radiative-hydrodynamic (RHD) models to study atmospheres of (ultra) hot Jupiters and warm Neptunes <cit.>. Several physical processes have been added to GCMs. Regarding radiative transfer (RT), GCMs for hot Jupiter studies contain Newtonian relaxation <cit.> and multi-band grey or non-grey schemes in various adaptations <cit.>. Such simplified RT schemes run efficiently in GCMs. The computational efficiency enables easier benchmarking between GCMs <cit.> and parameter exploration <cit.> for investigations of dynamical regimes. <cit.>, <cit.> and <cit.> coupled detailed real-gas, correlated-k RT schemes to GCMs, which leads to more computationally expensive operations. Studies such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.> perform GCM simulations including real-gas RT schemes. In <cit.>, they compared semi-grey, non-grey picket-fence and correlated-k RT schemes and suggested using the picket-fence scheme as a simple and computationally efficient, yet realistic, solution.
Regarding the validity, <cit.> raises doubts about the primitive equations in relatively thick atmospheres. In such thick atmospheres, the ratio of scale height to planetary radius becomes sufficiently large that the traditional approximation becomes inappropriate. Similarly, <cit.> and <cit.> analysed the limits of the primitive equations for Earth, the traditional approximation in particular. In the past decade, a few models with the full or deep Navier-Stokes equations have been developed for exoplanets: the 3D radiation-hydrodynamics model of <cit.>, the dynamical core of THOR <cit.>, and the modified UM ENDGame of <cit.>. However, only a few studies <cit.> have investigated differences between simulations with different dynamical equations for exoplanets. While two of these studies used two-stream, double-grey RT, only <cit.> applied a detailed real-gas, correlated-k RT scheme for the comparison of the dynamical equations. They suggested studying differences emerging from different dynamical equations by implementing a full radiative transfer solution as used in <cit.>.
In this study, we investigate the differing effects of simplified Navier-Stokes equations in a GCM. We use the THOR GCM because of its computational efficiency, and update the RT using the picket-fence scheme of <cit.>. THOR allows us to simulate atmospheres with different dynamical equations, as shown by <cit.> for the NHD and QHD equation sets. We likewise focus on the NHD and QHD equation sets in our investigation.
To investigate the differences between the NHD and QHD equation sets, we analyse their effects across a parameter grid appropriate for the hot exoplanet regime. We alter the gravity, rotation period and irradiation temperature at the top of the atmosphere separately to see the differences between the equation sets and their dependence on those parameters.
§ THOR MODEL
<cit.> developed the open-source GCM THOR to study exoplanet atmosphere dynamics. Further model developments were published by <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. THOR simulates global atmospheres on a 3D icosahedral grid with a given horizontal resolution (customisable via the g_levels setting). Consequently, the singularities and resolution crowding at the poles of latitude-longitude grids do not occur.
§.§ Hydrodynamics
THOR evolves the general non-hydrostatic Euler equations <cit.>. The integration scheme is horizontally explicit and vertically implicit. <cit.> and <cit.> added a dry convective adjustment and a `sponge layer', a form of drag for numerical stability similar to most contemporary GCMs. Furthermore, the model offers hydrostatic shallow (HSS), quasi-hydrostatic deep (QHD), and non-hydrostatic deep (NHD) equation sets <cit.>. In summary, the treatment of the vertical momentum differs between the QHD and NHD equation sets.
The NHD and QHD sets differ mainly in three terms: Dv_r/Dt, the Lagrangian derivative of the vertical velocity; ℱ_r, the hyperdiffusive flux; and 𝒜_r, the vertical component of the advection term. The terms Dv_r/Dt and ℱ_r are set to zero in the QHD case, and 𝒜_r = ∇(ρv⃗⊗v⃗) becomes
𝒜^QH_r = ρv⃗_h·v⃗_h/r,
where ρ is the density of the air, v⃗_h the horizontal velocity vector and r the radial distance from the center of the planet.
For a more complete review on the NHD and QHD equation sets, see <cit.>.
§.§ Picket-fence RT scheme
A two-stream, double-grey RT scheme has been available in THOR since the update by <cit.>.
However, to increase the realism of the RT scheme, we use the non-grey "picket-fence" scheme <cit.>, translated from <cit.>, which builds on the approaches of <cit.> and <cit.>. The picket-fence approach of <cit.> propagates the radiation in 5 bands (3 visible, 2 infrared) through the atmospheric layers. The scheme uses two representative opacities: the molecular and atomic line opacity, and the general continuum opacity. The values of these opacities are derived from the Rosseland mean opacity computed through fitting functions <cit.>.
Ignoring the effects of multiple scattering, the net flux, F_net,i [Wm^-2], at each level i is given by the outgoing longwave flux, F_IR↑,i, minus the downward longwave flux, F_IR↓,i, and the downward shortwave flux, F_V↓, i,
F_net,i = F_IR↑,i - F_IR↓,i - F_V↓, i.
Assuming hydrostatic equilibrium, the partial optical depth, Δτ_i, <cit.> is given by
Δτ_i,b=κ_R,i,b(p_i,T_i) Δ h_i ρ_i,
where κ_R,i,b [m^2 kg^-1] is the opacity at level i in band b, Δ h_i the height difference between levels and ρ_i the density; together they determine the partial optical depth. We implemented a Bézier interpolation to obtain p_i and T_i at the altitude levels from the pressure and temperature at the model layers. We account for the atmosphere above the model grid using a ghost layer with optical depth
Δτ_ghost=κ_R,top(p_top,T_top) p_top/g.
where p_top [Pa] is the pressure at the top of the model and g [ms^-2] the gravity. The Rosseland mean opacity is calculated <cit.> as
1/κ_R≡∫_0^∞1/κ_λd B_λ/d T dλ/∫_0^∞d B_λ/d T dλ,
where κ_λ [m^2g^-1] is the wavelength-dependent opacity and dB_λ/dT the temperature derivative of the Planck function. To quantify the non-greyness of the atmosphere, κ_P,i,b is computed for each level and for each V and IR band through the relation
κ_P,i,b≡γ_bκ_R,i,b(p_i,T_i),
where γ_b is the opacity ratio coefficient <cit.> and κ_R,i,b(p_i,T_i) [m^2 kg^-1] the Rosseland mean opacity for band b. Adding the opacity ratio coefficient to Equations <ref> and <ref>, they become
Δτ_i,b=γ_bκ_R,i,b(p_i,T_i) Δ h_i ρ_i,
Δτ_ghost=γ_bκ_R,top,b(p_top,T_top) p_top/g,
where γ_b = 1 corresponds to a grey atmosphere and γ_b >1 to a non-grey atmosphere in band b <cit.>. Applying the formal definition in Equation <ref>, the Rosseland mean opacity is computed from the fitting functions and tables in <cit.>.
The γ_b, β, and the Bond albedo, A_B, depend on the effective temperature, T_ eff [K]. Therefore, T_ eff is computed in advance according to <cit.> for each column as
T_ eff = [T_int^4 + (1 - A_B)μ_⋆ T_irr^4]^1/4,
where T_int [K] is the internal temperature, μ_⋆ = cosϕcosθ the cosine of the angle from the substellar point, A_B the Bond albedo and T_irr the irradiation temperature at the substellar point. Equation <ref> simplifies to T_ eff = T_int for nightside profiles. We use the fit of <cit.> for the Bond albedo, A_B, which depends on the gravity, g, and on T_ eff.
The RT scheme operates for each column as follows:
* Computation of the Bond albedo according to <cit.>, with T_ eff assuming μ_⋆ = 1/√(3).
* Computation of all γ_b and β with T_ eff calculated according to Equation <ref> for each column and according to the fitting coefficient tables in <cit.>
and definitions in <cit.>.
* Compute the IR band Rosseland mean opacity, κ_R(p_i,T_i), in each layer from the fits and tables of <cit.>.
* Compute the V band opacities in each layer using the γ_b and κ_R relationships as in the Equation <ref> .
* Compute the IR band opacities in each layer using the γ_b and κ_R relationships as in the Equations <ref>.
* Compute the optical depth as in the Equation <ref>.
* Compute the two-stream calculations for each V and IR band.
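To make the sequence above concrete, the following minimal Python sketch runs steps 1-6 for a single column. The functions bond_albedo, gamma_coefficients and rosseland_opacity are illustrative placeholders (returning constants) that stand in for the fits and tables of <cit.>; all names and numbers are assumptions for illustration only and do not reproduce the actual THOR implementation.

import numpy as np

# Placeholder stand-ins for the tabulated fits; constant values are used
# purely for illustration and are not the fits referenced in the text.
def bond_albedo(T_eff, g):
    return 0.1

def gamma_coefficients(T_eff):
    # one opacity ratio per band: 3 V bands followed by 2 IR bands
    return np.array([1.5, 1.5, 1.5, 1.2, 0.8])

def rosseland_opacity(p, T):
    return np.full_like(p, 1.0e-2)          # [m^2 kg^-1]

def setup_column(p_lay, T_lay, dh, rho, T_int, T_irr, mu_star, g):
    """Steps 1-6 of the per-column opacity setup (index 0 = top of the column)."""
    # Step 1: Bond albedo with T_eff evaluated at mu_star = 1/sqrt(3)
    # (the (1 - A_B) factor is omitted here since A_B is not yet known)
    A_B = bond_albedo((T_int**4 + T_irr**4 / np.sqrt(3.0))**0.25, g)
    # Step 2: gamma_b ratios from the column effective temperature
    T_eff = (T_int**4 + (1.0 - A_B) * mu_star * T_irr**4)**0.25
    gamma_b = gamma_coefficients(T_eff)
    # Step 3: Rosseland mean opacity in each layer
    kappa_R = rosseland_opacity(p_lay, T_lay)
    # Steps 4-5: V and IR band opacities, kappa_b = gamma_b * kappa_R
    kappa_band = gamma_b[:, None] * kappa_R[None, :]
    # Step 6: partial optical depth per layer and band, accumulated from the top
    dtau = kappa_band * (dh * rho)[None, :]
    tau = np.cumsum(dtau, axis=1)
    return A_B, T_eff, kappa_band, tau

# Illustrative call for a 10-layer column (all numbers are placeholders):
nlay = 10
A_B, T_eff, kappa_band, tau = setup_column(
    p_lay=np.logspace(3, 7, nlay), T_lay=np.full(nlay, 1500.0),
    dh=np.full(nlay, 2.0e5), rho=np.logspace(-3, 0, nlay),
    T_int=500.0, T_irr=2000.0, mu_star=0.7, g=10.0)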
§.§.§ Shortwave radiation
The stellar flux at the top of the atmosphere, F_0 [Wm^-2], is given by the irradiation temperature, T_irr [K], <cit.> as
F_0 = σ T_irr^4 = (R_⋆/a)^2σ T_⋆^4,
where σ [Wm^-2K^-4] is the Stefan-Boltzmann constant, R_⋆ [m] the stellar radius, a [m] the semi-major axis and T_⋆ [K] the effective temperature of the star.
The downward shortwave flux at each layer i is summed over the short-wave bands with the optical depth to layer i, τ_i,b
F_V↓, i = (1 - A_B)F_0μ_⋆∑_b=1^N_bβ_V,bexp(- τ_i,b/μ_⋆),
where N_b stands for the number of V bands (3 in this study), and β_V,b the fraction of stellar flux in band b (1/3 per band in this study).
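As a minimal illustration of the equation above, the short Python function below evaluates the downward shortwave flux at one level from the band optical depths; the function name and the numerical values are illustrative and are not taken from THOR or from this paper.

import numpy as np

def shortwave_down(tau_b, beta_b, F0, mu_star, A_B):
    """Downward shortwave flux [W m^-2] at one level, summed over the V bands.
    tau_b  : optical depth from the top to this level in each V band
    beta_b : fraction of stellar flux in each V band (1/3 per band here)
    """
    if mu_star <= 0.0:                      # nightside column: no direct beam
        return 0.0
    return (1.0 - A_B) * F0 * mu_star * np.sum(beta_b * np.exp(-tau_b / mu_star))

# Example with illustrative numbers: F0 = sigma * T_irr^4 for T_irr = 2000 K
sigma = 5.670374419e-8
F0 = sigma * 2000.0**4
print(shortwave_down(np.array([0.1, 0.5, 2.0]), np.full(3, 1.0 / 3.0), F0, 0.7, 0.1))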
§.§.§ Longwave radiation
We implement a two-stream solution using the short-characteristic method with linear interpolants introduced by <cit.>. The downward intensity at level i, the downward intensity of the ghost layer, the upward intensity at level i and the upward intensity at the bottom, I_IR, g,i [Wm^-2sr^-1], in each IR band and for a Gaussian quadrature point g are given by
I_↓,IR, g,i = (ϵ_0i -1)I_↓,IR, g,i+1 + α_i^-B_i+1,IR +β_i^-B_i,IR,
I_↓,IR, g,ghost = [1 -exp(- τ_IR,top / μ_g)] B_top-1,
I_↑,IR, g,i = (ϵ_0i-1)I_↑,IR, g,i-1 + β_i^+ B_i,IR + γ_i^+ B_i-1,IR,
I_↑,IR, g,bottom = B_int + I_↓,IR, g,bottom,
where
ϵ_0i = 1 - exp(-Δτ_IR,i / μ_g),
ϵ_1i = Δτ_IR,i / μ_g - 1 + exp(-Δτ_IR,i / μ_g) = Δτ_IR,i / μ_g - ϵ_0i,
with the coefficients for linear interpolation
α_i^- = ϵ_0i - ϵ_1i / Δτ_IR,i,
β_i^- = ϵ_1i / Δτ_IR,i,
γ_i^- = 0,
α_i^+ = 0,
β_i^+ = ϵ_1i / Δτ_IR,i,
γ_i^+ = ϵ_0i - ϵ_1i / Δτ_IR,i,
and for optical depth lower than 10^-6 the coefficients are set to
α_i^- = 0.5·ϵ_0i (B_IR,i+1 + B_IR,i) /B_IR,i+1,
β_i^- = 0,
γ_i^- = 0,
α_i^+ = 0,
β_i^+ = 0.5 ·ϵ_0i (B_IR,i + B_IR,i-1) /B_IR,i,
γ_i^+ = 0,
which reduces to the isothermal approximation to avoid numerical instability. Here μ_g is the cosine of the emission angle for Gaussian quadrature point g, and B_IR,i [Wm^-2sr^-1] is the wavelength-integrated blackbody intensity defined as
B_IR,i = β_IR B_i = β_IRσ T_i^4/π,
where β_IR,b is the fraction of flux in band b. This forces the RT scheme to return to the isothermal approximation at low optical depths where numerical stability would be an issue.
The upward and downward longwave fluxes F_IR,i [Wm^-2] are given by
F_IR↓,i = 2π∑_b^N_IR∑_g^N_g w_g μ_g I_↓,IR, g,i
F_IR↑,i = 2π∑_b^N_IR∑_g^N_g w_g μ_g I_↑,IR, g, i,
where N_IR is the number of IR bands (here 2), N_g the number of Gauss quadrature points (here 2) and w_g the quadrature weight.
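A compact sketch of the short-characteristics sweep for one IR band and one Gaussian quadrature point is given below. The layer transmission is written explicitly as exp(-Δτ/μ_g), the low-optical-depth (isothermal) branch is omitted for brevity, and the ghost-layer and boundary treatments are simplified; this is an illustration of the scheme described above, not the THOR source code.

import numpy as np

def longwave_sweep(dtau, B, B_ghost, B_int, mu_g, w_g):
    """One IR band, one quadrature point; level index 0 = top, N-1 = bottom.
    dtau[i] is the optical depth of the layer between levels i and i+1,
    B the wavelength-integrated blackbody intensity at the levels."""
    N = B.size
    trans = np.exp(-dtau / mu_g)               # layer transmission exp(-dtau/mu_g)
    e0 = 1.0 - trans
    e1 = dtau / mu_g - e0

    I_dn = np.zeros(N)
    I_up = np.zeros(N)

    I_dn[0] = B_ghost                          # emission of the ghost layer above the grid
    for i in range(N - 1):                     # downward sweep, top -> bottom
        src = e0[i] - e1[i] / dtau[i]          # coefficient of the source (upper) level
        dst = e1[i] / dtau[i]                  # coefficient of the destination (lower) level
        I_dn[i + 1] = trans[i] * I_dn[i] + src * B[i] + dst * B[i + 1]

    I_up[N - 1] = B_int + I_dn[N - 1]          # lower boundary: internal contribution
    for i in range(N - 2, -1, -1):             # upward sweep, bottom -> top
        src = e0[i] - e1[i] / dtau[i]
        dst = e1[i] / dtau[i]
        I_up[i] = trans[i] * I_up[i + 1] + dst * B[i] + src * B[i + 1]

    # contribution of this quadrature point to the band fluxes
    return 2.0 * np.pi * w_g * mu_g * I_up, 2.0 * np.pi * w_g * mu_g * I_dn

# Illustrative call for a 5-level column with placeholder values:
F_up, F_dn = longwave_sweep(dtau=np.array([0.5, 1.0, 2.0, 4.0]),
                            B=np.full(5, 1.0e4), B_ghost=0.0, B_int=5.0e3,
                            mu_g=0.5, w_g=0.5)

The weighted contributions returned here are then summed over the Gaussian quadrature points and IR bands to obtain F_IR↑,i and F_IR↓,i as in the equations above.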
§.§ Altitude setup
Strong temperature gradients pose a problem in simulations with low vertical resolution. Rather than increasing the vertical resolution, which would increase the numerical cost, we alter the relative thickness of the atmospheric layers. Where the temperature gradient remains relatively constant (e.g. in the deeper atmosphere), a larger thickness can be tolerated. Therefore, we define a function which increases the vertical resolution around a chosen relative height, h_rel,
h_lev(i) = z(i)h_top,
h_lay(i) = [h_lev(i) + h_lev(i+1)]/2,
where i is the height index, h_lev the altitude at the levels (interfaces), h_lay the altitude at the layers, h_top the chosen top altitude of the model, and z(i) the relative height defined by
y(i) = a(i-c)^3 + b(i-d)^2,
z(i) = y(i)+y(0)/y(N_lev-1)+y(0),
where c and d are parameterized as
c = h_rel(N_lev-1)/2 + b/3a + (N_lev-1)/4,
d = 1/2,
where a and b are parameters which can be chosen.
In this study, we set h_rel=0.7, a=1 and b=6 for our simulations. Figure <ref> illustrates the level heights in the new setting compared to the standard setting. The new scheme aims to produce slightly smoother T-p profiles where temperature gradients are large, i.e. at pressures p < 10^5 Pa.
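A small Python transcription of the grid-stretching function is given below. Two points are interpretations rather than verbatim reproductions of the text: 'b/3a' is read as b/(3a), and the normalisation of z(i) is applied such that z runs from 0 at the bottom level to 1 at the top level (with the quoted parameters, y(0) is negative, so the offset enters with the sign that achieves this).

import numpy as np

def stretched_altitude_grid(n_lev, h_top, h_rel=0.7, a=1.0, b=6.0):
    """Level and layer altitudes with enhanced resolution around h_rel * h_top."""
    i = np.arange(n_lev)
    # interpretation of the printed expression for c, with b/3a read as b/(3a)
    c = h_rel * (n_lev - 1) / 2.0 + b / (3.0 * a) + (n_lev - 1) / 4.0
    d = 0.5
    y = a * (i - c) ** 3 + b * (i - d) ** 2
    z = (y - y[0]) / (y[n_lev - 1] - y[0])     # normalised so z(0) = 0 and z(N_lev-1) = 1
    h_lev = z * h_top                          # altitudes of the levels (interfaces)
    h_lay = 0.5 * (h_lev[:-1] + h_lev[1:])     # altitudes of the layer midpoints
    return h_lev, h_lay

# Illustrative use with a hypothetical 40-level column and an 8'000 km model top:
h_lev, h_lay = stretched_altitude_grid(40, 8.0e6)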
§.§ Initial condition setup
We assume an initial T-p profile given by the picket-fence analytical solution at the substellar point. We implemented the suggestion of <cit.> and aim for a hot adiabatic profile in the deep atmosphere of hot Jupiters, since a hotter T-p profile cools down towards a realistic adiabatic gradient more quickly than a colder profile warms up. The internal temperature, T_int [K], was calculated in advance using the expression of <cit.>. A pressure grid with 1'000 grid points is generated by
p(x) = p_ref e^-20x/10^3,
where p_ref is the reference pressure.
The optical depth at layer i is defined as
τ_i=τ_i+1 + κ( p_i+1,T_i+1) (p_i - p_i+1)/g.
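For illustration, the two relations above can be written as the following short Python sketch; the callable kappa stands in for the Rosseland mean opacity fit and, like all numbers in the example call, is an assumption for the example only.

import numpy as np

def initial_pressure_grid(p_ref, n=1000):
    """Pressure grid p(x) = p_ref * exp(-20 x / 10^3) for x = 0 ... n-1
    (index 0 = bottom, pressure decreasing with index)."""
    return p_ref * np.exp(-20.0 * np.arange(n) / 1.0e3)

def optical_depth_profile(p, T, g, kappa):
    """Downward integration tau_i = tau_{i+1} + kappa(p_{i+1}, T_{i+1}) (p_i - p_{i+1}) / g,
    starting from tau = 0 at the top of the grid."""
    n = p.size
    tau = np.zeros(n)
    for i in range(n - 2, -1, -1):
        tau[i] = tau[i + 1] + kappa(p[i + 1], T[i + 1]) * (p[i] - p[i + 1]) / g
    return tau

# Illustrative use with a constant placeholder opacity of 10^-2 m^2 kg^-1:
p = initial_pressure_grid(1.0e8)
tau = optical_depth_profile(p, np.full(p.size, 1500.0), 10.0, lambda pi, Ti: 1.0e-2)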
The scheme of the initial conditions operates as follows:
* Computation of the Bond albedo according to <cit.>, with T_ eff assuming μ_⋆ = 1/√(3).
* Computation of all γ_b, γ_p and β with T_ eff calculated according to Equation <ref> for each column and according to the fitting coefficient tables in <cit.>
and definitions in <cit.>.
* Compute the IR band Rosseland mean opacity, κ_R(p_i,T_i), in each layer from the fits and tables of <cit.>.
* Compute the temperature from the top to the bottom of the atmosphere with a first guess followed by a convergence loop.
* Compute the adiabatic correction of the initial T-p profile according to <cit.>.
* Compute an initial altitude grid in addition to the T-p profile with the hydrostatic equation in the bottom up approach.
* Interpolate the temperature with both altitude grids and the initial temperature structure.
* Compute the T-p profile with the hydrostatic equation and the reference pressure from bottom up.
§ TEST CASES
To investigate the differences between the NHD and QHD equation sets, we run simulations across a parameter grid. Within the JWST mission, WASP 43b will be among the first exoplanets to be observed with the MIRI/LRS instrument <cit.>, and many more exoplanets will follow in the coming years. Therefore, we used WASP 43b as a reference planet and altered only the rotation rate Ω, g and T_ eff. The T_ eff in Equation <ref> was changed such that T_irr reaches our targeted values. Additionally, we analyse the effects arising from altering Ω, g and T_ eff with regard to the differing terms Dv_r/Dt, ℱ_r and 𝒜_r in the NHD and QHD cases. Owing to limited computational resources, we performed simulations across 9 parameter sets. Figure <ref> illustrates the grid values, with Ω, g and T_ eff altered one at a time. Table <ref> lists the other parameters of the simulations. For the divergence-damping and hyperdiffusion coefficients, we follow the suggestions of <cit.>. The simulations are computed over 5'100 days, and we take the mean of the last 10 outputs, covering 100 days. Each pair of NHD-QHD simulations shares the same altitude grid. To compare the 18 simulations, the outputs are interpolated and extrapolated to pressures ranging from 10^8 Pa to 10^3 Pa. For the first 100 days, D_div and D_hyp,v were increased by a factor of 10 to damp waves caused by initial instabilities.
In our results, we compare and contrast the NHD and QHD T-p profiles, maps of temperature and horizontal wind velocity at 10^4 Pa, the mean zonal wind, vertical and horizontal momentum-pressure profiles, the Outgoing Longwave Radiation (OLR), the OLR phase curve, and radiative and zonal wind timescales. Additionally, we generate further composites for the NHD and QHD equation sets, presented in the supplementary file: temperature, horizontal and vertical wind at 10^4 Pa, the streamfunction Ψ, the tidally-locked streamfunction Ψ ', the components of the Helmholtz decomposition, the vertical and horizontal density acceleration, and the sign of vtan(Φ)/10w - 1 for quality assessment <cit.>. The vertical and horizontal (zonal) density acceleration is computed as in <cit.>. In the discussion, we classify the results into climate states based on the simulations with the NHD equation set and relate the results to the literature. Furthermore, we computed characteristic (large-scale flow) quantities and scales including the scale height H, the Rossby number Ro, the Rossby deformation radius L_D, the Rhines scale and the Brunt-Väisälä frequency N, and we relate these characteristic values to the climate states in the discussion.
Sections <ref>, <ref>, <ref>, <ref>, <ref> and <ref> of the appendix describe how the tidally-locked coordinates and wind, the streamfunction Ψ, the tidally-locked streamfunction Ψ ', the Helmholtz decomposition, the OLR phase curve, the radiative and zonal timescales, and the large-scale flow quantities and scales are calculated.
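The exact definitions of the large-scale flow quantities are given in the appendix and are not reproduced in this excerpt; as a rough illustration, the snippet below evaluates common textbook forms of these diagnostics, which may differ in detail (e.g. in the choice of length scale or latitude) from those used for Table <ref>. All numerical values in the example call are placeholders.

import numpy as np

def flow_diagnostics(R_gas, c_p, T, g, U, omega, R_p):
    """Common textbook forms of the large-scale flow diagnostics (assumed forms,
    not necessarily identical to the definitions in the appendix)."""
    H = R_gas * T / g                              # pressure scale height [m]
    N = g / np.sqrt(c_p * T)                       # Brunt-Vaisala frequency, isothermal limit [1/s]
    Ro = U / (2.0 * omega * R_p)                   # Rossby number with L ~ R_p
    L_D = np.sqrt(N * H * R_p / (2.0 * omega))     # equatorial deformation radius, beta = 2*omega/R_p
    L_RH = np.sqrt(U * R_p / (2.0 * omega))        # Rhines scale with beta = 2*omega/R_p
    return H, N, Ro, L_D / R_p, L_RH / R_p

# Placeholder call with WASP 43b-like round numbers:
print(flow_diagnostics(R_gas=3700.0, c_p=13000.0, T=1400.0, g=10.0,
                       U=1000.0, omega=1.0e-5, R_p=7.0e7))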
§ RESULTS
§.§ Altering Rotation Rate
Figure <ref> shows T-p profiles (vertical temperature-pressure profiles) for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω. Looking at the differences between the NHD and QHD equation sets at the slow rotation rate, the regions around the eastern terminator and the antistellar point reach much lower temperatures in the NHD case at pressures < 50'000 Pa. In contrast, the areas around the poles and the western terminator are warmer in the NHD case. At the fast rotation rate, the temperature differences between the NHD and QHD cases double in many regions. The temperatures at the antistellar point, the eastern terminator and the western terminator differ by more than 1'000 K, 800 K and 450 K at pressures < 10^5 Pa. In general, the differences in temperature diminish at higher pressures. In the lower atmosphere, the high rotation rate produces larger temperature differences, whereas at the low rotation rate the temperature differences almost vanish in the deep atmosphere.
Figure <ref> shows the temperature and horizontal wind at 10^4 Pa for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω. The NHD case shows a hotspot shift to the east at low Ω; increasing Ω leads to smaller eastward hotspot shifts. The QHD case shows the opposite effect, with a larger eastward shift at higher Ω. Regarding the horizontal wind, we see strong divergence at the substellar point at low Ω in the NHD case, and higher Ω causes more deflection by Coriolis forces. Furthermore, jets have evolved at high latitudes on the eastern hemisphere, while a retrograde equatorial jet occurs on the western hemisphere. The QHD case has evolved a large jet spanning from pole to pole at low and high Ω, but, interestingly, a different wind field at moderate Ω. The wind field at moderate Ω looks similar to the NHD case, but varies at different pressures. The wind field differing from the NHD case leads to different advection at low and high Ω; therefore, the NHD case has lower temperatures on the nightside and higher temperatures at the poles than the QHD case.
Figure <ref> shows the zonal mean wind for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω. We see a 3 prograde jet system at all Ω in the NHD case and at some Ω in the QHD case. The QHD case seems to be in transition to a 2 prograde jet system with superrotation at low Ω. We ignore the very top layers because they might be affected by extrapolation and boundary conditions in some simulations. The QHD case has much higher horizontal wind speeds, which increase with Ω, except at moderate Ω. There is a deep retrograde jet at low Ω in both cases, but it is more pronounced in the NHD case. In the NHD case, the height of the westerlies decreases the faster the rotation becomes at pressures p<10^6 Pa (in the upper atmosphere), as observed in <cit.>.
Figure <ref> shows the zonal momenta [kg/m^3 m/s] along vertical profiles at each grid point for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω (without the deep atmosphere). Throughout all profiles and simulation cases, the range of the momenta gets smaller with higher altitude, mainly due to the decreasing density. The QHD case would follow the same trend at pressures p<10^6 Pa if the simulation at the moderate rotation rate did not resemble the NHD case. In the NHD case at the poles, the zonal momenta change from a divergent to a more zonal field (see the divergent component of the Helmholtz decomposition in the supplementary file). The balance between eastward acceleration and vertical advection of westward momentum <cit.> favours westward winds above the major westerly jet at lower latitudes in the upper atmosphere at higher rotation rates.
The QHD simulations show two regime changes at pressures p<10^5 Pa with increasing rotation rate: at high rotation rates, high positive momenta dominate at pressures p<10^7 Pa and the flow pattern differs qualitatively from the NHD simulations. Interestingly, at the moderate rotation rate the flow pattern in the QHD case is qualitatively much more similar to that of the NHD case at pressures p<10^5 Pa (in the upper atmosphere). But in the deep atmosphere (at pressures p>10^5 Pa), the dynamical regime of the QHD case differs substantially from that of the NHD case.
Considering the entire simulated altitude range, the QHD simulation has a smaller range of zonal momenta than the NHD case at the low rotation rate, but at the high rotation rate the range of the QHD case exceeds that of the NHD case by around 5 times.
Figure <ref> shows the vertical momenta [kgm^-3 ms^-1] along vertical profiles at each grid point for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω (without the deep atmosphere). The maxima of the upward momenta sink to higher pressures the faster the planet rotates, as observed in <cit.>.
Figure <ref> shows the phase curves of the upward flux at the top of the atmosphere (Outgoing Longwave Radiation, OLR) for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω. The OLR reaches the highest values in the NHD case at the lowest rotation rate, whereas the QHD case peaks at the moderate rotation rate. Furthermore, the hotspot is shifted more eastwards in the QHD case at the low and high rotation rates (see the phase curves). The gap between the hotspot shifts of the QHD and NHD equation sets is largest at the high rotation rate and smallest at the moderate rotation rate. In the region around the eastern terminator and on the nightside, the NHD case remains cooler.
Figure <ref> shows the radiative and zonal wind timescales for the NHD and QHD equation sets with g = 10 ms^-2, T_irr = 2'000 K and altering Ω. The radiative timescales vary less than the zonal wind timescales for both equation sets. The dayside radiative timescale is the least affected when the planet rotates faster. Above about 10^5 Pa, the radiative timescales on the day- and nightside remain the shortest for the NHD and QHD cases. Furthermore, the day- and nightside radiative timescales coincide in the deep atmosphere, but at pressures p<10^5 Pa they diverge more and more in both cases. Around 10^5 Pa, the zonal wind timescale becomes the shortest in both cases. For the slow and fast rotation rates, the radiative and dynamical timescales may switch a few times in which of them is the shortest.
Regarding differences between the NHD and QHD equation sets, most occur in the zonal wind timescale. At the fast rotation rate, the QHD equation set shortens the zonal wind timescale throughout the atmosphere, and especially in the deep atmosphere. We see slightly longer radiative timescales on the dayside and shorter ones on the nightside, which points to higher heat transport in the QHD case. The zonal wind timescales in the NHD case increase when the planet rotates faster, whereas the QHD equation set leads to a decrease of the zonal wind timescales in the deep atmosphere when the planet's rotation increases.
§.§ Altering Gravity
Figure <ref> shows the T-p profiles for the NHD and QHD equation sets with the same Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g. Looking at similarities between the NHD and QHD equation sets, the spread of temperatures shrinks the stronger the gravity becomes. The decreasing day-night contrast is accompanied by additional inversions as gravity increases. The number of inversions in the T-p profiles around the equator increases with higher gravity. Furthermore, the base of the lowest inversion reaches higher pressures the larger the gravity gets. Therefore, the temperatures in the deep atmosphere are substantially lower at higher gravity.
Looking at pressures p ∼ 10^5 Pa, the differences between simulations with the NHD and QHD equation sets decrease the stronger the gravity gets.
Figure <ref> shows the temperature and horizontal wind at 10^4 Pa for the NHD and QHD equation sets with the same Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g. While the hotspot has an eastern offset at low g, it acquires a western offset at higher g. The hotspot shift comes along with a retrograde jet reaching to high latitudes with much higher wind speeds. The offset grows with the high wind speeds, but decreases with higher g. Differences between the NHD and QHD cases decrease with higher g.
Figure <ref> shows the zonal mean wind for the NHD and QHD equation sets with the same Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g. Higher g leads to a change from the 3 prograde jet system to a single retrograde jet system. This change of jet system and climate state brings along higher jet wind speeds. Furthermore, the wind flows in the deep atmosphere become weaker at higher g.
Figure <ref> shows the zonal momenta [kg/m^3 m/s] along vertical profiles at each grid point for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g (without the deep atmosphere). The zonal momenta along the vertical profiles in the NHD and QHD simulations become more similar the higher the gravity. Furthermore, higher gravity leads to a change to an easterly jet (retrograde flow) in both cases. Another effect of higher gravity is the strengthening of the jet at pressures p <10^5 Pa in both cases, and the jet reaches higher pressures with higher gravity. The highest momenta are found where the jet is coldest, regardless of gravity. Around the substellar point, the momenta remain high, but the air masses are decelerated in a zone of strong upwelling.
Figure <ref> shows the vertical momenta [kg/m^3 m/s] along vertical profiles at each grid point for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g (without the deep atmosphere). Looking at the effects of increasing gravity, we see a wider range of vertical momenta at higher gravity in both cases at pressures p <10^5 Pa.
Figure <ref> shows the OLR fluxes at the top of the atmosphere for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g. Looking at the OLR phase curve, the maxima decrease with higher gravity in the NHD and QHD cases, although the QHD case stays well above the NHD case when gravity is moderate. When gravity gains strength, the minimum shifts to the western terminator. Furthermore, we see a westward-shifted hotspot together with a retrograde flow as in <cit.>, but the retrograde flow extends to higher latitudes. At both terminators, small wave patterns occur in both cases at moderate and high gravity. When the rotational wind re-enters the dayside, the OLR phase curve rises from its minimum at moderate and high gravity. Moreover, the slope of the OLR phase curves falls less steeply on the upstream side of the maxima in both cases at higher gravity.
Figure <ref> shows the radiative and zonal wind timescales for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, T_irr = 2'000 K and with altering g. The timescale of the zonal wind shrinks at many heights the higher the gravity becomes. Similarly, the radiative timescales get shorter when gravity increases.
§.§ Altering Irradiation Temperature
Figure <ref> shows the T-p profiles for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr. The range of temperatures at pressures p ≤ 10^5 Pa decreases when the irradiation temperature decreases in both cases. The differences between the NHD and QHD cases get smaller by an order of magnitude with each 500 K step in temperature. Furthermore, the temperatures at the poles become the coldest when the irradiation temperature is equal to or below 1'500 K. Inversions start to disappear as the irradiation temperature is lowered, and the deep atmosphere has cooled down more the lower the irradiation temperature is set.
Figure <ref> shows the temperature and horizontal wind at 10^4 Pa for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr. Lower T_irr leads to a change from the 3 prograde jet system to a single prograde jet system. The jet is stronger in the single-jet system and ranges from pole to pole. The offset of the hotspot is larger at moderate T_irr, but the hotspot starts to vanish at low T_irr, and the differences become minor at low T_irr.
Figure <ref> shows the zonal mean wind for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr. At lower T_irr, the jet gets shallower and differences between the NHD and QHD cases become minor. The jet speed peaks at T_irr∼ 1'500 K.
Figure <ref> shows the zonal momenta [kg/m^3 m/s] along vertical profiles at each grid point for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr (without the deep atmosphere). When the irradiation temperature is lowered, all zonal wind components become positive at pressures p ≤ 10^5 Pa in the NHD and QHD cases. We see an increase of the zonal momenta when the irradiation temperature decreases from 2'000 to 1'500 K. Additionally, the divergent component decreases and the zonal component becomes stronger when the irradiation temperature is lowered (see the Helmholtz decomposition in the supplementary file). The differences between the NHD and QHD cases become an order of magnitude smaller with each 500 K step in temperature.
Figure <ref> shows the vertical momenta [kg/m^3 m/s] along vertical profiles at each grid point for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr (without the deep atmosphere).
Figure <ref> shows the OLR fluxes at the top of the atmosphere for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr. We see an increasing eastward shift of the OLR the lower the irradiation temperature is set. Similarly, the minima of the OLR phase curve occur around the western terminator in both simulations with lowered irradiation temperature. Furthermore, the differences in the OLR between the NHD and QHD equation sets decrease when we set the irradiation temperature lower.
Figure <ref> shows the radiative and zonal wind timescales for the NHD and QHD equation sets with Ω = 1 · 10^-5 rad/s, g = 10 ms^-2 and with altering T_irr. At an irradiation temperature of 1'500 K, the zonal wind timescale stays much shorter than the radiative timescales at pressures 10^4.5≤ p ≤ 10^7.5 Pa. We see more efficient advection by the zonal wind when the divergent component weakens. But the radiative timescales become shorter than the zonal wind timescale when the irradiation temperature is lowered to 1'000 K: the zonal wind gets weaker and therefore less efficient at advection.
§ DISCUSSION
The difference between the NHD and QHD equation sets in THOR lies in Dv_r/Dt, the Lagrangian derivative of the vertical velocity, ℱ_r, the hyperdiffusive flux, and 𝒜_r, the vertical component of the advection term. These terms lead to deviations in the vertical momenta in the simulations with QHD. The altered vertical momenta affect the horizontal momenta and the temperature structure indirectly. Those changes, caused by a different dynamical equation set, even lead to different climate states within the simulated time period.
As a first approach, we can compare to the analytic solutions of <cit.>, which applied the linearised shallow-water equations. They designed the equation set to be as simple as possible in order to cleanly identify specific dynamical processes; therefore, a two-layer model was implemented. Those analytic solutions were calculated with the zonal wavenumber k=0.5 and a rotation period of 3 Earth days. The rotation periods in our study are 7.27, 2.3 and 0.73 Earth days, so all analytic solutions lie between our simulations with Ω = 1 · 10^-5 rad/s and Ω = 1 · 10^-4.5 rad/s. The closest parameterisation between our results and those analytic solutions is τ_rad=1 d and τ_drag=1 d, which corresponds to the top-left plot in Figure 3 of <cit.>. In our simulations, the radiative timescales reach values between τ_rad∼0.12 d on the dayside and τ_rad∼1.11 d at the height of the jet. Furthermore, τ_diff becomes 1.69· 10^-4 d when D_hyp,h=0.0025 is used in the following equation (according to <cit.>)
τ_diff∼Δ t/2^2n+1D_hyp,h.
Rossby-wave gyres do not appear in the analytic solutions <cit.> with τ_rad=1 d and τ_drag= 1 d. When τ_rad and τ_drag become larger, cyclones and anticyclones become visible in the analytic solutions. In our results, we do see Rossby-wave gyres pumping zonal momenta from higher to lower latitudes when gravity or the rotation rate becomes more intense (e.g. see the figure with high g and high Ω in the supplementary file and Figure <ref>). However, τ_rad is smaller in our composites with altering Ω than in the analytic solutions of the linearised shallow-water equations. The equilibrated solutions in <cit.> lead to a single maximum and minimum of the geopotential gh for τ_rad=τ_drag=0.1 d. When τ_rad or τ_drag become larger, 2 minima and 2 maxima of gh evolve. In our results, we see 1 maximum and a chevron with 2 minima. That pattern likely evolves because of the different τ_rad on the day- and nightside compared to the uniform timescales in <cit.>.
<cit.> ran 36 experiments with a comparable setting (c_P=13'000 J kg^-1K^-1, R=3'700 Jkg^-1K^-1, a=9.437· 10^7 m, g=9.36 ms^-2 and Ω=2.078 · 10^-5 s^-1). They show the vertical and horizontal wind for different τ_drag and T_eq at 10^2 Pa. Their simulations with τ_drag≤ 10^6 s led to no superrotating jet and a more divergent flow, whereas simulations with τ_drag≥ 10^6 s show a superrotating jet. Our comparable simulation with g=10 ms^-2, Ω=1 · 10^-5 rad s^-1 and T_irr= 2'000 K falls, with τ_diff=1.69· 10^-4 d, below the threshold of τ_drag≤ 10^6 s and produces a similar horizontal and vertical flow pattern, although we see the similarity at 10^4 Pa instead of 10^2 Pa. The differences to <cit.> probably arise from the different dynamical cores (dynamical equation sets) and the spatially varying radiative timescales.
§.§ Examination of climate states
We classify the NHD simulation outputs into climate states according to the jet behaviour and the manifestations of the components of the Helmholtz decomposition. The resulting climate states are presented hereafter and illustrated in Figure <ref>. We consider this classification a first attempt to identify parameters where the QHD case (and perhaps GCMs with HPEs) performs less accurately, so it should not be seen as a definitive classification scheme.
Moreover, we computed large-scale flow quantities and other characteristic values and scales in Table <ref>. We discuss those indicators in relation to the climate states in the next section.
§.§.§ 3 prograde jets
When we alter the planetary rotation rate Ω at irradiation temperature T_irr = 2'000 K and gravity g=10 ms^-2, we see a transition from a climate state with a dominant divergent component to a climate state with stronger Coriolis forces. A large, dominant "extra-tropical" zone expands towards the equator at higher rotation rates <cit.>. In that zone, the advection term becomes small or even negligible and the force balance is mainly between the Coriolis term and the pressure gradient. The Rossby numbers Ro for Ω= 10^-4.5 and Ω= 10^-4 rads^-1 are in the ranges 0.031 to 1.39 and 0.0098 to 0.44, respectively. For the maximum horizontal wind speeds in our simulations, we get Ro=0.19 and Ro=0.15 for the NHD and QHD cases at Ω= 10^-4.5 rads^-1, and Ro=0.052 and Ro=0.14 at Ω= 10^-4 rads^-1. The too-high wind speed in the simulation with the QHD equation set prevents the Coriolis force from acting on the jet structures at Ω= 10^-4 rads^-1. At Ω= 10^-4.5 rads^-1, the horizontal wind in the QHD case is more moderate than at lower Ω, and therefore the balance involving the Coriolis force is more similar to the NHD case.
Looking at the Helmholtz decomposition at 10^4 Pa, all components are weaker than at lower rotation rates (see the plots of the Helmholtz decomposition in the supplementary file). The divergent component is still dominant compared to simulations with higher g or lower T_irr, while the rotational eddy and rotational jet components remain moderately weak.
The scale height H and the Brunt-Väisälä frequency are 525.24 km and N = 0.00816 s^-1 for the 3 altered Ω at T_irr = 2'000 K and g=10 ms^-2. The Rossby deformation radius L_D is 1.32 and 0.42 R_p for Ω= 10^-4.5 and Ω= 10^-4 rads^-1, so we expect smaller eddy sizes at higher Ω. The Rhines scales L_RH vary between 0.46 and 3.11 R_p. For the maximum wind speeds in our simulations, L_RH becomes 1.14 and 0.06 R_p for Ω= 10^-4.5 and Ω= 10^-4 rads^-1, respectively. At such small scales, small-scale vortices can feed their energy into the larger atmospheric flow <cit.>. L_RH increases with latitude, as does the likelihood that Rossby waves appear. At higher latitudes, we do see planetary waves at 10^4 Pa.
The NHD case shows the emergence of high-latitude prograde jets in addition to the deeper, prograde, primary superrotating equatorial jet. <cit.> and <cit.> observed the 3-jet structure in their GCM simulations as well, but for HD 189733b and HD 209458b, respectively, with non-synchronous rotation rates.
We see the differences between the NHD and QHD equation sets in our simulations growing with increased rotation rate. Wind speeds and momenta in the QHD simulations lie below those in the NHD simulations at slow rotation, but the zonal momenta in the QHD case exceed those in the NHD case by about 5 times at high rotation rates. The differences in the momenta lead to significant differences in the advection and the temperature structure at pressures p ≤ 10^6 Pa at slow and high rotation rates. The differences in the temperature range grow from about 600 K at the slow rotation rate to 1'200 K at the fast rotation rate. The difference between the NHD and QHD equation sets does not behave linearly and includes dynamical regime and climate changes in the QHD case. We even see very similar regimes and climates at the moderate rotation rate at pressures p ≤ 10^4 Pa, but the dynamical equations lead to totally different regimes in the deep atmosphere. We noticed two dynamical regime and climate state changes when altering the rotation rate in the QHD case at pressures p ≤ 10^4 Pa: the QHD case changes from a 2-jet system with superrotation to a 3-jet system with weak extra-tropical conditions and then back to the state with 2 jets and superrotation. There might be further dynamical regime changes and multiple stable climate states at parameters we did not simulate. Considering deeper atmospheric layers with pressures p> 10^5 Pa, the range of the zonal momenta is lower in the QHD case than in the NHD case at the low rotation rate, but larger at the high rotation rate. Furthermore, we see a slowdown of the overturning circulation in the standard and tidally locked coordinates with increasing Ω (see the plots of the overturning circulation in the supplementary file). The overturning circulations of the NHD cases differ quantitatively and qualitatively from those of the QHD cases.
In the QHD case, the terms Dv_r/Dt, ℱ_r=0 and 𝒜_r lead to different vertical and, indirectly, to higher horizontal momenta. Therefore, the QHD case implies that GCMs with HPEs simulate too-high zonal velocities at these parameterisations. The higher zonal wind speeds counteract the Coriolis forces. We expect a range of critical wind speeds at a given rotation rate at which the climate switches to another climate state when the extra-tropical zone is relatively large. Higher wind speeds in combination with the smaller Rossby deformation radius L_D and moderate Coriolis forces may cause totally different climate states at certain parameters. Consequently, the models show different hotspot shifts in simulations with different hydrodynamic equation sets, depending on the parameters.
Faster rotation rates cause deviations with other approximations as well. As <cit.> already showed for terrestrial regimes, the traditional approximation becomes increasingly less valid when the rotation becomes faster. Regarding another Coriolis term, -2Ωωcosϕ can be neglected if 2Ω Hcos(ϕ)U^-1≪ 1, as <cit.> showed. For our simulation at low g and at Ω= 10^-4 rads^-1, 2Ω Hcos(ϕ)U^-1 is about 0.21 at the equator and 0.11 at mid-latitudes for a wind speed of 500 ms^-1. Therefore, the term -2Ωωcosϕ becomes more relevant in this climate state with extra-tropical conditions, and GCMs with the traditional approximation in their dynamical equation sets may make incorrect predictions. <cit.> has shown that an increased rotation rate leads to significant differences in the flow, which becomes dominated by the Coriolis forces. Furthermore, a higher rotation rate results in a net warming on the dayside and a net cooling on the nightside in their simulations, although the more complete equations manifest those warming and cooling effects less. At higher pressures, they noticed temperature changes of only a few degrees. <cit.> suggested analysing and comparing different dynamical equation sets with a full radiative transfer solution as used in <cit.>. Similarly to <cit.>, we see a net warming on the dayside and a net cooling on the nightside at pressures p ≤ 10^5 Pa. But in the deep atmosphere, temperatures in our simulations start to vary increasingly with increased rotation rate. That difference between the two studies in the deep atmosphere may arise from the different type of planet: hot, fast-rotating Jupiters may respond differently than slowly rotating, warm Neptunes. Furthermore, we simulated a much larger fraction of the deep atmosphere than <cit.>. Regarding the radiative transfer, we expect effects on the dynamics and temperature structure due to the different radiative transfer implementations. Additionally, we expect some differences in the GCM implementations, which lead to varying results when comparing to other studies.
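The criterion quoted above can be checked with a few lines of Python; the scale height of ~525 km from the previous subsection is reused, and "mid-latitudes" is interpreted here as a latitude of 60° (where cosϕ = 0.5), which reproduces the quoted values of about 0.21 and 0.11.

import numpy as np

def nontraditional_coriolis_ratio(omega, H, U, lat_deg):
    """Ratio 2*Omega*H*cos(lat)/U; the -2*Omega*w*cos(lat) term is negligible
    only when this ratio is much smaller than 1."""
    return 2.0 * omega * H * np.cos(np.radians(lat_deg)) / U

for lat in (0.0, 60.0):
    print(lat, nontraditional_coriolis_ratio(omega=1.0e-4, H=5.25e5, U=500.0, lat_deg=lat))
# prints roughly 0.21 at the equator and 0.11 at 60 degrees latitude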
§.§.§ Radial flow
This idealised climate state has a radial, divergent flow on the dayside and a convergent flow on the nightside in the upper atmosphere, analogous to a global Hadley or Walker cell, and vice versa in some deeper layers. The Helmholtz decomposition would show a dominant divergent component. This climate state is idealised and needs a high ratio of T_irr to Ω that is likely unrealistic compared to the exoplanets observed so far. At lower rotation rates Ω, at T_irr = 2'000 K and g=10 ms^-2, we see a transition to a climate state with a dominant divergent component, a moderately weak rotational eddy component and a weak rotational jet component (see the plots of the Helmholtz decomposition in the supplementary file and Figure <ref>). The 3-jet system is still present in this transitional phase. As the Coriolis forces get weaker, winds are deflected less and can flow more directly from the dayside to the nightside; we see wind flows deflected less and crossing more directly over the poles to the nightside (e.g. at 10^4 Pa). More simulations in this parameter space are certainly needed to characterise that area of the parameter grid. The circulation state may change at lower Ω. It cannot be excluded that there is a retrograde superjet at lower Ω and that radial flow evolves due to a balance between prograde and retrograde tendencies (similar to the simulations with higher g). A similar radial flow pattern was found by <cit.> for tidally locked exo-Earths (TRAPPIST 1b, TRAPPIST 1d, Proxima Centauri b and GJ 667 C f) at relatively low Ω.
The gradual transition to this climate state is seen at T_eq=1'414.21 K and H=525.24 km. The Rossby number Ro varies between 0.098 and 4.39 for winds of 100 to 4'500 ms^-1. The Rossby deformation radius and the Rhines scale are 4.19 and 0.83 - 5.54 R_p. The Brunt-Väisälä frequency remains the same as at higher Ω, N = 0.00816 s^-1.
§.§.§ Prograde superrotation
This circulation and climate state occurs on the one hand at high g and high T_irr, and on the other hand at low and high g when T_int is relatively high compared to T_irr.
This T_int lies above the values computed according to the expression in <cit.>, 300 K and 400 K respectively. The high T_int is debatable; it might be realistic, as strong magnetic fields have been detected by <cit.> and the magnetic field strength determines T_int substantially <cit.>. <cit.> excluded T_int= 100 K for planets with clouds because of the cold trap, especially for T_eq ∼ 1100 to 1600 K <cit.>. Nevertheless, higher T_int can be realistic because of a significantly higher entropy, which causes a higher internal heat flux <cit.>. Regarding the cooling rate of hot (ultra) Jupiters, <cit.>, <cit.>, <cit.> and <cit.> predicted a downward heat transport by the atmosphere. As theoretical support, <cit.> found heat transport from the upper into the deeper atmosphere by the atmospheric circulation. Similarly, <cit.> saw the coupling of internal evolution and atmospheric structure with the atmospheric dynamics in their simulations.
The Rossby number Ro lies between 0.098 and 4.39. For the maximum wind speeds in our simulations, we get Ro < 1.18. This climate state has Rossby deformation radii L_D≤ 3.63 R_p. The Rhines scales L_RH vary between 0.83 and 5.54 R_p, and stay smaller than 2.9 R_p for the maximum wind speeds in our simulations. The Brunt-Väisälä frequency is N > 0.01 s^-1. The scale height is H=55.42 km for g=47.39 ms^-2 and H≤393.93 km for g=10 ms^-2.
Differences between the simulation outputs of the NHD and QHD equation sets are quantitatively small and negligible at T_irr= 1'500 K with g=47.39 ms^-2 and with g=10 ms^-2. Qualitatively, the differences are more pronounced in the circulation pattern at 10^4 Pa. In this transitional phase, the QHD case performs less well than in clearly distinguishable circulation and climate states.
A dominant rotational jet component, a dominant rotational eddy component and a weaker divergent component characterise this climate state (see the plots of the Helmholtz decomposition in the supplementary file). Comparable simulations were computed by <cit.> and <cit.> for WASP 43b with the hydrostatic primitive equations (HPEs), although our parametrisation differs by a slightly higher T_irr and a slightly higher Ω. Our results with the parametrisation Ω = 10^-4 rads^-1, T_irr= 2'000 K and g=47.39 ms^-2 partially agree with those of <cit.> and <cit.>. We see a prograde jet and Rossby gyres which transport zonal momenta to low latitudes, as well as retrograde flow at high latitudes, as in their studies (e.g. see the figure with the simulation computed with g=47.39 ms^-2, T_irr= 2'000 K and altering Ω in the supplementary file). But the speed of the jet, ∼ 1'800 ms^-1 for the NHD and QHD cases, remains much lower than the wind speeds of 5'500 ms^-1 in the studies of <cit.> and <cit.>. At this parametrisation, the HPEs seem to predict too-high wind speeds compared to the NHD and QHD equation sets.
The differences between the NHD and QHD cases are less than 100 K and minor compared to the low-gravity case.
§.§.§ Retrograde superjet
In this circulation and climate state, a retrograde superjet leads to a westward offset of the hotspot. This climate state occurs at high gravity and low rotation rate (T_irr= 2'000 K, Ω= 10^-5 rads^-1 and g=25 ms^-2 respectively g=47.39 ms^-2; see Figure <ref>). It has a dominant rotational jet component, a weak divergent component and a weak rotational eddy component (see the plots of the Helmholtz decomposition in the supplementary file). We see a transition from retrograde to prograde superrotation in the simulation with T_irr= 2'000 K, Ω= 10^-4.5 rads^-1 and g=47.39 ms^-2, and partially in the simulation with T_irr= 1'500 K, Ω= 10^-5 rads^-1 and g=47.39 ms^-2 (see Figure <ref>).
The equilibrium temperature lies around T_eq=1'414.21 K. The scale height is 110.83 respectively 210.1 km. The Rossby number is around 0.098 - 4.39. The high winds in our simulations imply that the Coriolis forces are countered, giving partially tropical conditions. The Rossby deformation radius is 4.19 R_p, while the Rhines scale varies in the range 0.83-5.54 R_p.
Differences between the simulation outputs of the NHD and QHD equation sets are less than 200 K and less than 100 K at T_irr= 2'000 K, Ω= 10^-5 rads^-1 with g=25 ms^-2 and g=47.39 ms^-2, respectively, for pressures larger than 10^4 Pa. The smaller temperature differences come along with a stronger retrograde superjet.
§.§ Implication for the superrotation
We see a complete shift of the climate regime in our simulations towards a retrograde jet spreading to high latitudes at pressures p ≤ 10^5.5 Pa and at low Ω when gravity increases. Many studies (e.g. <cit.>) have shown that tidally locked hot Jupiters produce an equatorial eastward wind jet in 3D simulations. The equatorial eastward jet transports heat to the nightside and shifts the hotspot to the east <cit.>. Nevertheless, there are several exceptions among hot Jupiters: <cit.> observed a westward-shifted hotspot in CoRoT 2b, and <cit.> made similar observations of a westward shift for WASP 140b. Several factors can counter superrotation <cit.>: clouds <cit.>, including variability in the cloud coverage <cit.>, higher metallicity in the planet's atmosphere <cit.> and magnetic fields <cit.> may affect the circulation significantly. Moreover, planets may develop retrograde flow because of non-synchronous planetary rotation <cit.>. We suggest that the choice of the dynamical equation set may counter superrotation as well, and lead to different jet systems and climate states. Furthermore, we assume that additional physical schemes may alter the balances governing the evolution of jet systems and climate states.
Many of the previous studies used simplified Newtonian cooling or grey RT solutions; <cit.> showed the improvements brought by a more realistic RT solution. We consider a more realistic RT solution in GCMs, and other schemes in addition to the dynamical cores, a key consideration when investigating differences between dynamical equation sets.
WASP 43b orbits its host star relatively quickly, in 0.8315 days, <cit.> and is unusually dense. <cit.> simulated WASP 43b and obtained differing results compared to <cit.>, <cit.> and <cit.>: the simulations of WASP 43b in <cit.> show westward (retrograde) flow in the upper thermal photosphere (p≤ 8'000 Pa) as soon as the model simulates deep wind jets. They found a strong tendency towards an equatorial westward flow in the eddy-mean-flow analysis for p < 10^4 Pa for WASP 43b. <cit.> concluded that the deep atmosphere may significantly influence the atmospheric flow in the observable middle and upper atmosphere of hot Jupiters. <cit.> also reported a retrograde flow at 10^6 Pa in their simulations of HD 189733b. Investigating eddy transport, <cit.> noticed a deceleration of the superrotating jet due to the evolution of the deep atmosphere (the model had not reached a steady state after 10'000 Earth days). In their study, air masses sink over the poles and rise over the equator, and the horizontal temperature gradient at greater depths (p > 10^6 Pa) powers the deep circulation.
Retrograde flow has been noted in simulations in a few cases. <cit.> performed simulations for HD 189733b, altering the irradiation (warm and cool Jupiters) and the rotation period (0.55, 2.2, and 8.8 Earth days). Their simulations with fast rotation or low irradiation show retrograde flow in the zonal-mean wind. More retrograde flow patterns were found for tidally locked exo-Earths with fast rotation <cit.>. <cit.> showed that retrograde flow over the equator can appear on dense and hot Jupiters. <cit.> highlighted that vertical angular momentum transport, in balance with horizontal interactions, plays a crucial role in the evolution of superrotation. <cit.> identified unusually deep wind jets <cit.> accompanied by deeper convective layers. Those deep wind jets may affect the upper atmosphere (p< 10^6 Pa) through zonal momentum transport at depth (p> 10^6 Pa), which is supposed to increase with faster rotation. More studies are required to understand the exact mechanisms and regimes that can produce retrograde flow.
<cit.> indirectly analysed the effect of gravity on the dynamical equation set via the temperature contrast and the scale height. They concluded that the maximum variation appears between varying and constant g when the temperature contrast is altered, and, in their view, when g is effectively altered as well (through the scale height). The deep (equation) case varies by roughly 30 % from the full (equation) case at the top of the atmosphere. <cit.> stated that the resulting flows in the simulations with the primitive and deep equation sets respond independently of the treatment of g. Our results partially support the idea of independence from g: at high gravity, differences between NHD and QHD nearly vanish, which can be explained by the growing dominance of the gravity term over the other terms. But at low gravity, the other terms in the NHD and QHD equation sets reveal their effects, and the related differences can no longer be countered by the gravity term. We cannot comment on how the full equation set responds in comparison to other equation sets, since the THOR model does not yet provide the option of varying g. Only an extensive study of the effects of the gravity term with different dynamical equation sets can provide a full answer. The combination of high gravity in the deep atmosphere with decreasing gravity in the upper atmosphere may even lead to totally different climate states than presented here.
We have to note that the Bond albedo changes with g, and with it the incoming shortwave radiation. Therefore, we see effects of g combined with radiative effects on the dynamics.
<cit.> showed that an increased planetary temperature contrast leads to an accelerated zonal flow when comparing the primitive with the full equation set. They see significant changes in the thermal structure, and as a consequence the regime becomes advectively dominated. The changes in the zonal flow and advection result in a changed temperature structure <cit.>.
We see growing differences in the zonal momenta between the NHD and QHD equation sets in our simulations when we increase the irradiation temperature. At lower irradiation temperatures, the differences nearly vanish and superrotation evolves. The deviations in temperature remain much smaller at lower temperatures. That is not surprising, since the temperature does not enter directly into the terms altered in the QHD case, Dv_r/Dt, ℱ_r and 𝒜_r. Therefore, the deviations have to arise from the changed dynamics, which alters the temperature advection and therefore the temperature structure more significantly at higher irradiation temperature. The spread in the T-p profiles (day-night contrast) increases with higher irradiation temperatures, so the temperature advection plays a more decisive role in the temperature structure of such planets. In the comparable study of <cit.>, only minor temperature differences appear between the simulations with the NHD and QHD equation sets for HD 189733b. They stated slightly higher velocities in the NHD case and differences in jet velocity of roughly 5 %. THOR produces superrotation in their simulations as well.
<cit.> compared Spitzer phase curves and showed evidence for a trend of increasing phase offset with increasing orbital period at 4.5 μ m (for T_eq≡ 1'300 K), as already shown in <cit.>. Our results show larger offsets with larger orbital periods for the NHD case when gravity is low (for T_eq≡ 1'414.21 K). This comes along with a weaker overturning circulation with increasing Ω (see the plots of the overturning circulation in the supplementary file). The QHD case does not show a trend in this regard and the offset changes more due to climate state changes at low g. At higher gravity, the offset switches direction due to climate state changes. We see a decrease of the eastward offset of the hotspot when superrotation is prograde, g is high and Ω increases.
Moreover, <cit.> suggested that only the radiative and advective timescales affect the hotspot offset, so the radiative timescale should not be changed by the rotation rate. Consequently, given the observed trend of the offset, altering the rotation rate should change the wind speed in tidally locked hot Jupiters; faster rotation rates should therefore lead to weaker equatorial jets. In our simulations, we see the radiative timescales changing in the NHD and QHD cases when the rotation rate is altered, owing to temperature advection. Moreover, the radiative timescales on the nightside vary much more than those on the dayside when the rotation rate is altered. Nevertheless, we see a weakening of the equatorial jets with higher rotation rates in the NHD case. The QHD case does not show a weakening, but rather a strengthening with higher rotation rates. Looking at the entire parameter grid we simulated, the offset changes when we alter g, T_irr and Ω, i.e. it depends on several parameters. Similarly, <cit.> showed a dependence of the offset on a nondimensional parameter related to the radius, scale height, gravity and rotation rate. <cit.> observed that the dependence of the offset is not only bound to the rotation rate, as in hot Jupiters, but also to gravity for cooler Jupiters with consistent nightside temperatures near ∼ 1'000 K. The different jet structures and offsets of the hotspots in our simulated parameter grid imply a dependence on multiple parameters, as <cit.> and <cit.> suggested.
Simulations comparable to ours in the studies of <cit.> (SPARC/MITgcm) and <cit.> (expeRT/MITgcm), but computed with the hydrostatic primitive equations (HPEs), show 3 times higher wind speeds for WASP 43b than our results. Unfortunately, the lower wind speeds in our simulations are mostly due to the limit imposed by the model top. Moreover, our parametrisation differs by a slightly higher T_irr and a slightly higher Ω. The GCM with HPEs in <cit.> predicts superrotation with wind speeds up to 4'800 ms^-1. The wind speeds in our simulations lie around ∼1'000 and ∼ 500 ms^-1 for the QHD case with T_irr= 2'000 K, g=10 ms^-2 and Ω = 1 · 10^-5 respectively Ω = 1 · 10^-4.5 rads^-1. The QHD case already predicts too-high wind speeds compared to the NHD case, depending on the parametrisation; the HPEs seem to predict even much higher wind speeds at this parametrisation, but this needs to be studied more extensively. Furthermore, the simulation for HD 209458b in the study of <cit.> can be classified as a transitional state between our 3 prograde jets and the radial flow. Therefore, we would expect elements of 3 prograde jets combined with a dominant divergent component if it were computed with the NHD equation set.
On the other hand, if we compare simulations for HD 189733b, the THOR model (with the double-grey dual-band radiative transfer scheme) produces a prograde superrotation in the study of <cit.> with wind speeds up to ∼ 5'600 ms^-1 in the NHD case, even higher than in the QHD case. <cit.> simulated HD 189733b as well. Their zonal mean wind speed goes slightly beyond 3'200 ms^-1, but remains lower than in the study of <cit.>. Although <cit.> and <cit.> both predict a superrotation, the jet maximum is found at two orders of magnitude higher pressures in <cit.>. We consider that different physical schemes, as well as their combination with different dynamical equation sets, have an effect on the jet structure and the climate state, but this has to be investigated further. In a comparison of radiative schemes, <cit.> showed that different radiative transfer schemes can lead to different wind speed and temperature structures.
Hot Jupiter climates are often associated with an equatorial prograde superrotating jet (see <cit.> for a full review). That concept is often supported by GCM simulations which show a prograde superrotation. Comparing jet systems in different studies, most simulations for hot Jupiters (e.g. <cit.>, <cit.>, <cit.> and <cit.>) show only prograde superrotation. So far, only <cit.> predicts a retrograde flow for WASP 43b, embedded in a strong superrotation, with the GCM MITgcm with HPEs. Like <cit.>, we see retrograde flow in similar cases depending on the parametrisation, although we did not explicitly simulate WASP 43b. We even predict a retrograde superjet in one of the 4 different circulation states. The evolution of climate states and jet structures depends on the parametrisation and the choice of the dynamical equation set. Zonal momentum transport may play a crucial role in the evolution of retrograde, prograde and cross-the-poles wind flow. Such an association with momentum transport was found by <cit.>. They attribute the upward zonal momentum transport to a deep jet which leads to the retrograde flow in the upper atmosphere. Such momentum transport can be missed by HPEs, since they ignore several terms of the full equation set related to momentum transport, such as 2Ωcos(ϕ), -uw/r and -uv/r <cit.>. Even the NHD case does not represent the full equation set, since g is not altered with altitude. We illustrated some effects of g on the different dynamics and outcomes by altering g. Therefore, our simulation outcome may change drastically, depending on the parametrisation, when the full equation set is implemented in THOR.
Regarding the evolution of different jet systems, <cit.> demonstrated an interesting case of climate bistability on TRAPPIST-1e. They found 2 distinct jet systems for a 10^5 Pa nitrogen-dominated atmosphere: 1 strong equatorial prograde jet (with a strong day-night contrast) and 2 mid-latitude prograde jets (with a weak day-night contrast). In their numerical experiments, the bistability was highly sensitive to the model setup, such as initial conditions, surface boundary conditions, and the physical parameterisation of convection and cloud radiative effects. They found a balance between the zonally asymmetric heating, the mean overturning circulation, and mid-latitude baroclinic instability. Similarly to our study, <cit.>, <cit.> and <cit.> also discovered transitional states between well-defined jet systems and climate states. Some rocky exoplanets seem to be sensitive not only to the GCM setup <cit.>, but also to the GCM choice, as shown by <cit.> and <cit.>. As an addition to these studies, our study shows that the choice of the dynamical equation set within a GCM leads to the evolution of different climate states.
The discussion about the dynamics on hot Jupiters <cit.>, together with our results, has not reached a consensus yet. Further studies of the dynamics of hot Jupiters with GCMs using the full equation set are needed.
Many simulations use HPEs, which come with shortcomings due to approximations made for Earth. We demonstrate with our comparison that such approximations can lead to complete changes in the jet structure and climate state that arise just from the choice of the dynamical equation set. In several parameter settings, we see prograde superrotation, but also deviations from it, such as a retrograde superjet, disrupted superrotation and a 3-jet instead of a 1-jet structure. Nonetheless, we should be careful, since the climate state and observational features may change over long integration times (e.g. 50'000 - 250'000 Earth days), as <cit.> has shown. They saw the evolution of 2 prograde off-equatorial jets into a single prograde equatorial jet extending up to the poles. Also, they found that the hotspot shift becomes eastward after long integration times. Regarding the reason for the long convergence, they point to the long radiative timescales in the deep atmosphere. They ran simulations for the warm sub-Neptune GJ 1214b with the GCM LMDZ with HPEs and a two-stream grey-gas RT scheme. Our comparable simulations have a too high T_int in comparison to GJ 1214b and might be in a different climate state.
We assume the climate states on hot Jupiters are more diverse than simple superrotation. <cit.> found a westward shift of the hotspot and brightness peak in Kepler measurements of HAT-P-7b. Similarly, <cit.> observed a westward offset of the hotspot for WASP 140b. Moreover, <cit.> presented thermal phase observations of the hot Jupiter CoRoT 2b obtained with the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. They detected a westward offset of the hotspot of 23 ± 4^∘. The large westward offset in <cit.> might be further evidence of retrograde flow, or even a retrograde superjet, in hot Jupiter atmospheres. Simulations including magnetohydrodynamics (MHD) predicted a westward flow <cit.>. A more recent study <cit.> showed MHD simulations which led to westward shifts of the hotspot for HAT P-7b and CoRoT 2b. For these reasons, we conclude that hot Jupiter atmospheres might be more diverse than assumed so far.
§.§ Limitations and future improvements
The GCM THOR can encounter numerical instability when the gradient between the nightside and dayside temperatures is too large <cit.>, which is most problematic when modelling ultra hot Jupiters. As a consequence, we could not simulate pressures lower than ∼ 7 · 10^2 or 10^3 Pa (depending on the parametrisation), which affects the dynamics and temperature structure to some degree. Future updates to the THOR GCM will address the issue of large day-night temperature gradients.
<cit.> performed simulations for warm, tidally locked and slowly rotating Neptunes and super Earths with a duration of 1'000 Earth days. They saw that the evolution of the maximum zonal wind speed and structure ceased at lower pressures (pseudo-steady state). The deep, high-pressure atmosphere still evolved slowly in their simulations after 1'000 Earth days. The slow evolution of the deep atmosphere does not appear to have a significant effect on the dynamics of the upper, low-pressure atmosphere for hot Jupiters <cit.>. In contrast, <cit.> suggest the advection of zonal momentum upwards from the deeper atmosphere.
In this study, we ran the simulations for a fixed 5'000 Earth days and did not set the duration according to a convergence condition. The computation time would have been too long for the two dozen simulation cases to finish the study in a meaningful time. We simulated the deep atmosphere to 10^8 Pa, which needs significantly more time to converge <cit.>. However, we simulated the deep atmosphere to stabilise THOR, especially for the first few hundred days. Regarding sufficient time periods for convergence to a steady state, <cit.> simulated GJ 1214b for 50'000 days to observe the transition from 2 equatorial jets into 1 jet. Such long integration times are beyond our current computational resources for the parameter grid we computed. <cit.> set the simulation time on the basis of evolved features which different models create early on. Important features such as the equatorial jet can evolve within 7'800 days <cit.> for GJ 1214b. A shorter run time was used in <cit.>, but with a shallower atmosphere and a surface pressure of 10^6 Pa. A more detailed analysis of the convergence time for the deep atmosphere is done by <cit.>. They ran simulations with a surface pressure of 10^8 Pa for WASP 43b and HD 209458b for 12'000 Earth days. While HD 209458b converged within the 12'000 Earth days, WASP 43b evolved steadily during the full simulation time. The temperature change rate drops from ∼ 1.5 to ∼ 0.05 Kd^-1 at the end of the simulation. Regarding the final state of the deeper atmosphere, <cit.> confirmed the independence from the initial conditions for WASP 43b. As <cit.> showed a high sensitivity to the model setup in relation to the evolution of distinct climate states, more studies are needed to examine bistability and even multistability of exoplanets.
At lower resolution, THOR approaches a steady state around 2'500-3'000 Earth days in simulations of HD 189733b, while high-resolution simulations converge after 10'000 Earth days (<cit.>, indicated by the superrotation index according to <cit.>). The zonal flow undergoes a quick development and changes only very little after 2'000 Earth days <cit.> in the simulations of HD 189733b. They showed as well that the upper atmosphere reached a steady state, although the lower atmosphere did not, in their simulations with g_level=5 (around 2^∘). For hot Jupiters, we expect even shorter convergence times due to higher temperatures, so that 5'000 days are sufficient to observe differences between the NHD and QHD equation sets at pressures p ≤ 10^6 Pa.
Higher resolutions conserve mass better: <cit.> noted that THOR conserves mass at g_level=5 (around 2^∘) slightly less well than at g_level=6 (around 1^∘), although the output looks qualitatively very similar. Moreover, terms such as the cosϕ terms become relevant for mesoscale motion <cit.> on Earth. Furthermore, more complex atmospheric motions may appear if the model resolution increases, as on Jupiter <cit.>. On exoplanets, a higher resolution may lead to larger differences among simulations with different dynamical equation sets. Furthermore, mass, energy, numerical dissipation and integration errors lead to gradual changes of the total axial momentum <cit.>.
Regarding gravity, THOR uses a constant value throughout the atmosphere. A gravity decreasing with height would change the simulation outputs and improve their realism. We expect further implications for the QHD equation set and other approximations, especially at higher altitudes, i.e. at lower pressures (p<10^5 Pa), since we find the largest differences at low gravity.
§ SUMMARY AND CONCLUSIONS
For exoplanet atmosphere GCMs, several hydrodynamic equation sets are used across the literature. However, only a few studies have compared the differences between equation sets and their effects on the atmospheric dynamical properties <cit.>.
This will be important to consider as spectral phase curve data is produced by JWST.
In this study, we compared the NHD and QHD equation sets <cit.> in the GCM THOR. We simulated atmospheres across a parameter grid to probe the validity of the equation sets for a wide range of the exoplanet population. Additionally, we implemented a two-stream non-grey "picket-fence" scheme in THOR, which increases the realism of the radiative transfer in the model.
Our results show significant differences between the NHD and QHD equation sets in the GCM THOR for fast rotation rates, lower gravity and higher irradiation temperatures. The NHD and QHD equation sets in THOR differ only in the terms Dv_r/Dt, the Lagrangian derivative of the vertical velocity, ℱ_r, the hyperdiffusive flux, and 𝒜_r, the vertical component of the advection term. But those terms cause significantly different results in the dynamics and the vertical temperature structure in several regimes. Depending on the parameters, the NHD and QHD equation sets even evolve to different dynamics, radiative regimes and climate states.
Overall, our study shows the evolution of different climate states which arise just from a different selection of Navier-Stokes equations and approximations. We show the implications of approximations made for Earth but used for non-Earth-like planets. Our results agree qualitatively with the comparable studies of <cit.> and <cit.>. <cit.> made a similar comparison, but with the Met Office Unified Model; they compared simulations of slow-rotating, small Neptune-sized planets with the primitive and deep equation sets. <cit.> used THOR in a similar comparison of the NHD and QHD equation sets and already showed significant differences in the dynamics in two regimes (an Earth-like case and HD 189733b). We showed that differences between the NHD and QHD equation sets can vary depending on the parametrisation and the choice of the dynamical equation set. Finally, our results show the relevance of the choice of dynamical equation set depending on planetary and system properties.
Future investigations may extend this study by comparing the full equation set, the NHD equation set and hydrostatic, shallow approximations in GCMs. Additionally, <cit.> suggested implementing chemical equilibrium <cit.> and a cloud scheme as in <cit.>. A more sophisticated spectral RT scheme like <cit.> may also alter our findings. Longer simulation times, similar to <cit.>, and GCMs with the full equation set may reveal new circulation and climate states as well as multistabilities.
§ ACKNOWLEDGEMENTS
P.A. Noti and E.K.H. Lee are supported by the SNSF Ambizione Fellowship grant (#193448).
Financial support to R.D. was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC; Discovery Grant RGPIN-2018-05929), the Canadian Space Agency (Grant 18FAVICB21), and the European Research Council (ERC; Consolidator Grant 771620).
M.H. gratefully acknowledges funding from Christ Church, Oxford.
Data and plots were processed and produced using PYTHON version 3.9 <cit.> and the community open-source PYTHON packages Bokeh <cit.>, Matplotlib <cit.>, cartopy <cit.>, jupyter <cit.>, NumPy <cit.>, pandas <cit.>, SciPy <cit.>, seaborn <cit.>, windspharm <cit.> and xarray <cit.>. Calculations were performed on UBELIX (<http://www.id.unibe.ch/hpc>), the HPC cluster at the University of Bern. We thank the IT Service Office (Ubelix cluster), the Physikalisches Institut and the Center for Space and Habitability at the University of Bern for their services.
§ DATA AVAILABILITY
We used the development version of the GCM THOR (available on <https://github.com/exoclime/THOR>). The code for the picket-fence scheme and the new mode for the initial conditions were uploaded on the lead author’s GitHub: <https://github.com/PA-NOTI/THOR_picket_fence_scheme>. The code on the lead author’s GitHub was used to run the GCM THOR simulations. The added features got integrated in the main Github of the GCM THOR. The input and output files of the GCM THOR are available on Zenodo, https://doi.org/10.5281/zenodo.7620774DOI: 10.5281/zenodo.7620774 and https://zenodo.org/record/8014271DOI: 10.5281/zenodo.8014271. All other data and code are available from the authors on a collaborative basis.
§ TIDALLY LOCKED COORDINATES AND VELOCITIES
For the analysis of symmetries in the atmosphere of a tidally locked
planet, we make use of the ‘tidally locked coordinate system’ suggested by <cit.>, in which the traditional latitude-longitude system (ϑ,λ) is replaced by the tidally locked system (ϑ', λ').
The coordinates are effectively a rotation of the regular latitude-longitude
coordinates, so that the polar axis runs from the substellar
point to the antistellar point. The tidally locked latitude
ϑ' is defined as the angle to the terminator, and the tidally locked longitude
as the angle about the substellar-antistellar axis. This rotation of the coordinate system gives the tidally locked coordinates according to <cit.> as
ϑ' = sin^-1(cosϑcosλ) ,
λ' = tan^-1(sinλ/tan(ϑ)),
where ϑ' is the tidally locked latitude, λ' the tidally locked longitude, ϑ the original latitude and λ the original longitude.
The tidally locked wind velocities consist of fractions of the original zonal and meridional wind components; the fractions change depending on the coordinates. According to <cit.>, the tidally locked zonal and meridional winds u' and v' are defined as
u' = cosϑ(∂λ '/∂λu/cosϑ + ∂λ '/∂ϑv ) ,
v' = ∂ϑ'/∂λu/cosϑ + ∂ϑ'/∂ϑ v,
where u and v are the zonal and meridional wind components of the original coordinate system.
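The coordinate rotation above is straightforward to evaluate point-wise. The following minimal sketch (Python/NumPy, our own illustrative helper rather than part of THOR) implements the relations above; angles are assumed to be in radians and arctan2 is used to keep the correct quadrant.

import numpy as np

def tidally_locked_coords(lat, lon):
    # Tidally locked latitude: angle to the terminator
    lat_tl = np.arcsin(np.cos(lat) * np.cos(lon))
    # Tidally locked longitude: angle about the substellar-antistellar axis
    # (the text writes tan^-1(sin(lon)/tan(lat)); arctan2 handles the quadrants)
    lon_tl = np.arctan2(np.sin(lon), np.tan(lat))
    return lat_tl, lon_tl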
§ STREAMFUNCTION AND TIDALLY-LOCKED STREAMFUNCTION
For analysing the mass flow, we computed the tidally locked streamfunction Ψ ' and the Eulerian mean meridional streamfunction Ψ in the same fashion as <cit.>:
Ψ = 2 π a cosϑ/g∫_0^p[v]_λ dp ,
Ψ ' = 2 π a cosϑ '/g∫_0^p[v']_λ ' dp,
where g denotes the gravity, a the equatorial radius, [v]_λ the wind averaged over longitude and [v']_λ ' the wind averaged over the tidally locked longitude.
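A minimal numerical sketch of the streamfunction integral is given below (Python/NumPy, our own illustrative code, not the analysis pipeline itself); the zonally averaged wind is assumed to be given on pressure levels ordered from the top of the atmosphere downwards, and the cumulative pressure integral is evaluated with the trapezoidal rule.

import numpy as np

def meridional_streamfunction(vbar, p, lat, a, g):
    # vbar : zonally averaged meridional wind [m/s], shape (nlev, nlat)
    # p    : pressure levels [Pa], increasing from the top of the atmosphere
    # lat  : latitudes [rad]; a : planetary radius [m]; g : gravity [m/s^2]
    layer = 0.5 * (vbar[1:] + vbar[:-1]) * np.diff(p)[:, None]   # trapezoids in pressure
    integral = np.concatenate([np.zeros((1, vbar.shape[1])), np.cumsum(layer, axis=0)])
    return 2.0 * np.pi * a * np.cos(lat)[None, :] / g * integral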
§ HELMHOLTZ DECOMPOSITION
We performed a Helmholtz decomposition according to <cit.> to analyse how changes of the parameters in our grid appear in the components of the total circulation, such as the overturning circulation, stationary waves, and the superrotating jet. In the Helmholtz decomposition, the total circulation is split into the divergent and rotational components u_d and u_r <cit.>:
u = u_d+u_r = ▽χ + k ×▽ψ,
where χ stands for the velocity potential function and ψ for a streamfunction which are defined as:
▽ ^2χ = δ,
▽ ^2ψ = ζ,
where δ is the divergence and ζ the vorticity.
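In the analysis itself the decomposition is performed on the sphere (the windspharm package is listed in the acknowledgements); the sketch below is only a simplified stand-in on a doubly periodic Cartesian grid, an assumption made purely for illustration, which solves the two Poisson equations spectrally and reconstructs the divergent and rotational wind components.

import numpy as np

def helmholtz_periodic(u, v, dx, dy):
    # Simplified Helmholtz decomposition u = grad(chi) + k x grad(psi)
    # on a doubly periodic Cartesian grid (not the spherical geometry used here).
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                 # avoid division by zero; mean mode removed below
    div  = np.fft.fft2(np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0))
    vort = np.fft.fft2(np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0))
    chi_hat, psi_hat = -div / k2, -vort / k2       # solves laplacian(chi)=div, laplacian(psi)=vort
    chi_hat[0, 0] = 0.0
    psi_hat[0, 0] = 0.0
    chi = np.real(np.fft.ifft2(chi_hat))
    psi = np.real(np.fft.ifft2(psi_hat))
    u_d = np.gradient(chi, dx, axis=1)             # divergent component
    v_d = np.gradient(chi, dy, axis=0)
    u_r = -np.gradient(psi, dy, axis=0)            # rotational component: k x grad(psi)
    v_r =  np.gradient(psi, dx, axis=1)
    return (u_d, v_d), (u_r, v_r)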
§ OLR PHASE CURVE
<cit.> formulated the phase curve as
F=∫_λ_1^λ_2∫_ϑ_1^ϑ_2∫_-π/2^π/2 R^2F_TOA/πcos^2(θ)cos(ϑ-α) dϕ dϑ dλ,
where F_TOA is the flux at the top of the atmosphere coming from each atmospheric column of the GCM at a given wavelength λ, ϕ and ϑ denote the latitude and longitude, and α the orbital phase angle.
<cit.> introduced a formalism to calculate the F on an icosahedral grid as
F= ∑_i=1^N_gridF_TOA,i/πμ_iA_i/R^2_p,
where A_i denotes the area of each control volume at the top of the atmosphere, R_p the radius of the planet, and
μ_i=
cos(ϕ)cos(ϑ - α), α - π/2 < ϑ < α + π/2,
0 , ϑ > α + π/2 or ϑ < α - π/2.
We take the approach of <cit.>, adapt it to a longitude-latitude grid and limit it to long-wave radiation.
<cit.> defined the surface area of a grid-cell in a longitude-latitude grid on the sphere as
A_S=∫_ϕ1^ϕ2∫_ϑ1^ϑ2 R_p^2cos(ϑ) dϕ dϑ = R_p^2(ϑ2-ϑ1) (sin(ϕ_2) -sin(ϕ_1)).
By switching to a longitude-latitude grid, we modify Equation <ref> with Equation <ref> and reformulate the OLR phase curve as
F_OLR= ∑_i=1^N_gridF_OLR, TOA, i/πμ_i (Δϑ) (sin(ϕ+Δϕ) -sin(ϕ-Δϕ)),
where Δϑ is the longitudinal width of a grid cell, Δϕ the latitudinal width and we defined μ_i as
μ_i=
cos(ϕ)cos(ϑ - α), cos(ϕ)cos(ϑ - α)≥ 0,
0 , cos(ϕ)cos(ϑ - α) < 0.
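The sketch below (Python/NumPy, our own illustrative code) evaluates this OLR phase curve on a longitude-latitude grid; following the expression as written above, dlon is taken as the longitudinal width of a cell and dlat as the half-width in latitude, both in radians, and the orbital phase angle alpha is in radians as well.

import numpy as np

def olr_phase_curve(F_toa, lon, lat, dlon, dlat, alpha):
    # F_toa : outgoing long-wave flux at the top of the atmosphere, shape (nlat, nlon)
    # lon, lat : cell-centre coordinates [rad]; alpha : orbital phase angle [rad]
    LON, LAT = np.meshgrid(lon, lat)
    mu = np.maximum(np.cos(LAT) * np.cos(LON - alpha), 0.0)   # zero on the hemisphere facing away
    weight = dlon * (np.sin(LAT + dlat) - np.sin(LAT - dlat))
    return np.sum(F_toa / np.pi * mu * weight)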
§ RADIATIVE AND ZONAL TIMESCALES
We computed the radiative timescale as in <cit.> as
τ_rad∼P/gc_p/4σ_B T^3,
where P [Pa] denotes the pressure, g [ms^-2] the gravity, σ_B the Stefan-Boltzmann constant, c_p [J kg^-1K^-1] the heat capacity at constant pressure and T [K] the temperature.
The zonal timescale was also calculated as in <cit.> as
τ_zonal≳R/u_max,
where R is the planetary radius and u_max the maximum of the zonal wind speed.
We computed the radiative and zonal timescales for each layer with the corresponding values.
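Both timescales are simple layer-wise expressions; the following sketch (Python/NumPy, our own illustrative code, SI units assumed) evaluates them as written above.

import numpy as np

SIGMA_B = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_timescale(P, T, g, c_p):
    # tau_rad ~ (P/g) * c_p / (4 sigma_B T^3), evaluated per layer
    return (P / g) * c_p / (4.0 * SIGMA_B * T**3)

def zonal_timescale(R_p, u_max):
    # lower bound tau_zonal >~ R_p / u_max
    return R_p / u_max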
§ LARGE-SCALE FLOW QUANTITIES
For the analysis, we used several quantities characterising the large-scale flow. The scale height H is defined in <cit.> and we reformulate it as
H =k_BT/mg=R_dT/g,
where k_B [m^2 kg s^-2 K^-1] is the Boltzmann constant, T [K] the temperature of the gas, g [ms^-2] the gravity, m [kg] the mass of the gas and R_d [J kg^-1 K^-1] the specific gas constant.
The Rossby number indicates the balance in the momentum equation between the Coriolis and the advection terms <cit.>:
Ro ≡U/fL,
where L [m] is the typical horizontal scale, U [m/s] the typical wind speed and f [rad s^-1] the Coriolis parameter, defined as
f=2Ωsin(ϑ),
where Ω [rad s^-1] represents the rotation rate of the planet and ϑ [rad] the latitude. The typical horizontal scale is usually taken to be the Rossby deformation radius L_D [m] (see hereafter). The Coriolis force becomes negligible, and advection, pressure gradient and dissipation remain the relevant terms in the force balance, when the Rossby number is much larger than one <cit.>. On the other hand, a much smaller Rossby number indicates a force balance between the pressure gradient and the Coriolis force.
Pressure gradients may be equalised by gravity waves, unless the gravity waves are deflected by the Coriolis force. The Rossby deformation radius L_D [m] defines the distance at which the gravity waves get deflected by the Coriolis force <cit.>:
L_D = ND/f,
where N [s^-1] is the Brunt-Väisälä frequency (i.e. the oscillation frequency of gravity waves) and D [m] the vertical length scale of the atmosphere. The vertical length scale of the atmosphere is taken to be of the order of one scale height, so D=H. The Brunt-Väisälä frequency is defined in an isothermal atmosphere as <cit.>:
N = √(c_pg/R_dH),
where c_p [J kg^-1 K^-1] represents the specific heat capacity (we corrected a typing mistake in <cit.>).
The Rhines scale L_Rh indicates the scale at which the transition from dominant linear advection to the appearance of an inverse cascade occurs. The inverse cascade is the energy injection from small-scale vortices into the larger atmospheric flow. The Rhines scale is also known as an indicator for flow reorganisation into bands of alternating zonal jets, often called zonation <cit.>. In unsteady flow regimes, the Rhines scale may be associated with the moving energy front propagating towards decreasing wavenumbers. The Rhines scale is defined as <cit.>:
L_Rh = π√(U/β),
where β corresponds to the meridional gradient of the Coriolis force, also known as the "β-effect", and is defined as <cit.>:
β = 2Ωcos(ϑ)/R_p,
where R_p is the radius of the planet.
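For completeness, the quantities of this appendix can be evaluated directly as in the sketch below (Python/NumPy, our own illustrative code); SI units and latitude in radians are assumed, and the Brunt-Väisälä frequency is computed exactly as written above.

import numpy as np

def scale_height(T, g, R_d):
    # H = R_d T / g
    return R_d * T / g

def coriolis_parameter(Omega, lat):
    # f = 2 Omega sin(lat)
    return 2.0 * Omega * np.sin(lat)

def rossby_number(U, Omega, lat, L):
    # Ro = U / (f L)
    return U / (coriolis_parameter(Omega, lat) * L)

def rossby_deformation_radius(T, g, R_d, c_p, Omega, lat):
    # L_D = N D / f with D = H and N as written in this appendix
    H = scale_height(T, g, R_d)
    N = np.sqrt(c_p * g / (R_d * H))
    return N * H / coriolis_parameter(Omega, lat)

def rhines_scale(U, Omega, lat, R_p):
    # L_Rh = pi sqrt(U / beta) with beta = 2 Omega cos(lat) / R_p
    beta = 2.0 * Omega * np.cos(lat) / R_p
    return np.pi * np.sqrt(U / beta)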
|
http://arxiv.org/abs/2307.02622v1
|
20230705194957
|
Hofstadter-like spectrum and Magnetization of Artificial Graphene constructed with cylindrical and elliptical quantum dots
|
[
"Maryam Mansoury",
"Vram Mughnetsyan",
"Aram Manaselyan",
"Albert Kirakosyan",
"Vidar Gudmundsson",
"Vigen Aziz-Aghchegala"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
Department of Physics, Urmia University of Technology, Urmia, Iran
Department of Solid State Physics, Yerevan State University, Alex Manoogian 1, 0025 Yerevan, Armenia
[email protected]
Department of Solid State Physics, Yerevan State University, Alex Manoogian 1, 0025 Yerevan, Armenia
[email protected]
Department of Solid State Physics, Yerevan State University, Alex Manoogian 1, 0025 Yerevan, Armenia
Science Institute, University of Iceland, Dunhaga 3, IS-107 Reykjavik, Iceland
Department of Physics, Urmia University of Technology, Urmia, Iran
In this paper a comparative study of the electronic and magnetic properties of quasi-two-dimensional electrons in an artificial graphene-like superlattice composed of circular and elliptical
quantum dots is presented. A complete orthonormal set of basis wave functions, which has previously been constructed in the frame of the Coulomb gauge for the vector potential has been implemented
for calculation of the energy dispersions, the Hofstadter spectra, the density of states and the orbital magnetization of the considered systems, taking into account both the translational symmetry of the
superlattice and the wave function phase-shifts due to the presence of a transverse external magnetic field.
Our calculations indicate a topological change in the miniband structure due to the ellipticity
of the quantum dots, and non-trivial modifications of the electron energy dispersion surfaces in
reciprocal space with the change of the number of magnetic flux quanta through the unit cell of
the superlattice. The ellipticity of the QDs leads to an opening of a gap and considerable modifications
of the Hofstadter spectrum. The orbital magnetization is shown to reveal significant oscillations
with the change of the magnetic flux. The deviation from the circular geometry of quantum dots
has a qualitative impact on the dependencies of the magnetization on both the magnetic flux and
the temperature.
Hofstadter-like spectrum and Magnetization of Artificial Graphene
constructed with cylindrical and elliptical quantum dots
Vigen Aziz-Aghchegala
============================================================================================================================
§ INTRODUCTION
The unique properties of graphene, which are a direct consequence of its two-dimensional (2D) lattice with underlying triangular symmetry, have attracted great interest over the last two decades <cit.>.
Advanced methods such as atom-by-atom assembling <cit.>, optical trapping of ultracold atoms in crystals of standing light-waves <cit.> and nanopatterning of 2D electron gas in semiconductors <cit.>, make it possible to design and fabricate artificial honeycomb lattices or artificial graphene, which are a unique playground for investigation and manipulation of a wide class of systems displaying massless Dirac quasiparticles and topological phases.
To replicate in a tunable manner the massless Dirac fermion
physics authors of Ref. <cit.> used high resolution electron beam lithography and reactive
ion etching in order to construct artificial honeycomb lattices with periods as small as 50 nm in GaAs/AlGaAs quantum
wells hosting a 2D electron gas.
The lack of an energy bandgap in graphene or artificial graphene constrains their widespread application, because the vanishing band gap implies a large off-current and a low on/off ratio. Many attempts have been made to create an energy gap between the conduction and the valence bands of graphene. Cutting graphene into nanoribbons <cit.>, application of strain to graphene <cit.>,
hydrogenating graphene with a certain pattern <cit.>, and growth
of graphene on various substrates <cit.> are examples of such attempts just to mention few works in this field.
In artificial graphene composed of semiconductor quantum dots (QD) there are additional possibilities of band structure manipulation via variations of the QD shapes, sizes and external factors such as transverse magnetic and in-plane electric fields <cit.>.
It is well known that the description of the motion of an electron in a magnetic field is significantly modified when a periodic modulation of the electron's potential energy is considered. This fact is connected with the commensurability conditions of the two characteristic length scales describing the structure, namely the magnetic length and the lattice constant. It has been shown by Azbel <cit.> and Hofstadter <cit.> that the original unit cell (UC) of the superlattice (SL) cannot describe its translational periodicity when a homogeneous transverse magnetic field is applied. In this case one has to introduce the so-called magnetic UC, which simultaneously contains an integer number of magnetic flux quanta and of UCs of the original lattice. As a result, the energy spectrum of an electron displays a fractal structure known as the “Hofstadter butterfly", which has been obtained theoretically <cit.> as well as observed experimentally <cit.>.
The description of the electron motion in graphene subjected to a transverse homogeneous magnetic field is usually based on the Peierls substitution in tight-binding models, or on the Dirac Hamiltonian <cit.>. This approach relies on the assumption that the magnetic field affects the tunneling of an electron through the sites of the graphene lattice only by means of the corresponding magnetic phases added to the hopping parameters. The Dirac Hamiltonian is applicable when there is only one conducting electron in each site of the lattice, leading to the emergence of relativistic electrons near the band touching points. These assumptions, being well justified for graphene, are not so for artificial graphene-like semiconductor structures. For a more complete description of the 2D electron motion in such artificial systems, taking into account the effect of the magnetic field on the degree of confinement of the electron in each QD as well as on the magnitudes of the hopping parameters, we develop our theoretical study in the frame of the basis functions proposed initially by Ferrari <cit.> and used thereafter by several authors for calculations of the band structure and magneto-optical properties of modulated 2D electrons <cit.>.
Based on this method, we have developed in the present paper a comparative study of the electronic states and the magnetization of honeycomb artificial graphene-like lattices composed of cylindrical and elliptical QDs, to explore the effect of the structure symmetry breaking on measurable equilibrium properties. Our calculations indicate a topological change in the miniband structure as well as qualitative modifications of the Hofstadter spectrum and the magnetization of the honeycomb SL due to the ellipticity of the QDs.
The paper is organized as follows: section <ref> is devoted to the description of the theoretical model, in section <ref> the obtained results are discussed, and finally, in section <ref> the conclusions are presented.
§ THEORY
Let us consider a 2D lattice composed of planar QDs exposed to a transverse homogeneous magnetic field with induction 𝐁=ê_zB, where ê_z stands for the unit vector in the direction perpendicular to the lattice plane. The spinless one-electron Hamiltonian of such a system in the effective mass approximation is
H=H_0+V(𝐫),
where
H_0=1/2m(𝐩+e𝐀/c)^2,
and V(𝐫)=V(𝐫+n_1𝐚_1+n_2𝐚_2) is the periodic potential of the SL with lattice vectors 𝐚_1 and 𝐚_2, n_1 and n_2 are integers, 𝐩=-iħ∇ is the momentum operator, ħ is the reduced Planck's constant, m is the effective mass, and c the speed of light. We assume that the SL consists of circular or elliptical QDs (see Fig. <ref>) with a rectangular potential profile. Namely, v(𝐫)=0 inside each QD and v(𝐫)=v_0 in the surrounding medium. Note, that both SLs with circular and elliptical QDs have the same translation vectors and all the elliptical QDs have the spatial orientation along the “x" axis. Using the symmetric gauge for the vector potential 𝐀=(B/2)(-y,x) the Hamiltonian (2) reads as
H_0=ħ ^2/2m (( -i ∂/∂ x-y/2l_B^2)^2+(-i ∂/∂ y+x/2l_B^2)^2),
where l_B=(cħ/e B)^1/2 is the magnetic length.
The eigenfunctions of the Hamiltonian (3) are
φ_n_L(r)=1/√(2π l_B^2 n_L!)(x+iy/√(2) l_B)^n_L e^- r^2/4l_B^2,
where n_L indicates the corresponding Landau level.
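For reference, the orbitals of Eq. (4) can be evaluated numerically as in the minimal sketch below (Python/NumPy, our own illustrative code); x, y and l_B are assumed to be given in the same length units.

import numpy as np
from math import factorial

def landau_orbital(x, y, n_L, l_B):
    # Symmetric-gauge Landau orbital phi_{n_L}(r) of Eq. (4)
    z = (x + 1j * y) / (np.sqrt(2.0) * l_B)
    norm = 1.0 / np.sqrt(2.0 * np.pi * l_B**2 * factorial(n_L))
    return norm * z**n_L * np.exp(-(x**2 + y**2) / (4.0 * l_B**2))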
It is well known that the translation operator T(𝐑)=exp(i𝐑𝐩/ħ) with 𝐑=n_1𝐚_1+n_2𝐚_2 does not commute with the Hamiltonian (2). Instead, the so-called magnetotranslation operator S(𝐑)=exp((i e/ħ c)𝐀(𝐑)𝐫)T(𝐑)=exp((i/2l_B^2)(𝐑×𝐫)ê_z) T(𝐑), which commutes with the Hamiltonian (2), can be used for the construction of a complete and orthogonal set of basis functions for the description of the motion of an electron with the Hamiltonian (1). On the other hand, the magnetotranslation operators for two lattice vectors 𝐑_1 and 𝐑_2 commute only when there is an integer number of magnetic flux quanta in the area |𝐑_1×𝐑_2|:
[S(𝐑_1),S(𝐑_2)]=0, if |𝐑_1×𝐑_2 | = 2 π u l_B^2
where u is an integer.
If one expresses the magnetic flux per unit cell of the SL as Φ/Φ_0=pq/h_1h_2,
where p,q,h_1 and h_2 are integers, the vectors satisfying the condition (5) will be related to the original lattice vectors as follows: 𝐑_1=h_1𝐚_1 and 𝐑_2=h_2𝐚_2.
As is shown in Ref. <cit.>, a complete set of basis functions can be constructed using the primitive magnetotranslations S(𝐜) and S(𝐝) with 𝐜=𝐑_1/p and 𝐝=𝐑_2/q. Taking into account the conditions of periodicity
S(𝐑_1) ϕ= e^i θ _1ϕ, S(𝐑_2) ϕ= e^iθ_2ϕ
the basis functions can be expressed as
ϕ^n_1,n_2_n_L (r)=
(pq)^-1/2∑_m,n=-∞^∞ [S(𝐜) e^-i μ]^m [S(𝐝) e^-i ν]^n ϕ_n_L(𝐫),
where
[ μ = (1/p) (θ_1+2 π n_1) , n_1=0,...,p-1,; ν= (1/q) (θ_2+2 π n_2) , n_2=0,...,q-1.; ]
In the absence of magnetic field, θ_1 and θ_2 are proportional to the components of the wave vector in the SL.
It has been shown that the norm of the wave function (7) is nonzero when (μ,ν)≠ (π,π) and can be expressed as <cit.>
∥ϕ^n_1,n_2_n_L∥ =∑_m,n=-∞^∞ (-1)^ mn e^i(μ m+ ν n) e^-| nc+md | ^2/4l_B^2.
A periodic SL potential can be expanded in a Fourier series
V(𝐫)= ∑_𝐆 v(𝐆)e^i𝐆𝐫,
where 𝐆= G_1𝐠_1+G_2𝐠_2 are the reciprocal lattice vectors with site-vectors 𝐠_1 and 𝐠_2, and integers G_1 and G_2.
For the periodic array composed of cylindrical or elliptical QDs, respectively
v(𝐆)_cyl=
{[ v_0/s_0 2 π r_d^2if G_1=G_2=0;; v_0/s_0 e^-i2π/3(G_1+G_2)(1+e^-i2π/3(G_1+G_2)) ×; 3r_da/√((G_1+G_2)^2+3(G_1-G_2)^2)×; J_1(2π r_d√((G_1+G_2)^2+3(G_1-G_2)^2)/3a),; otherwise ].
and
v(𝐆)_el=v_0/s_0 e^-i2π/3(G_1+G_2)(1+e^-i2π/3(G_1+G_2)) ×
∫_0^r_e∫_0^2π e^-i2π/3ar((cosϕ+√(3) sinϕ)G_1+(cosϕ-√(3) sinϕ)G_2) r dr dϕ,
where
r_e=r_sr_l/√(r_s^2cos^2ϕ+r_l^2sin^2ϕ),
r_s(l) is the small (large) semi-axis of the elliptical QD, a is the distance between the nearest QDs and s_0 is the area of the SL unit cell. Now the calculation of the potential matrix elements reduces to the calculation of the matrix elements of the exponent in Eq. (10). These matrix elements are nonzero when the following conditions are fulfilled
[ G_1h_1+n_1-n_1'=Mp; G_2h_2+n_2-n_2'=Nq,; ]
with M and N integers, and can be expressed as
⟨ n_1',n_2',n_L'| e^i𝐆𝐫| n_1,n_2,n_L⟩ =
Y(G)^n_L',n_LT^n_1',n_2'_n_1,n_2(G) exp(-|G|^2/4l_B^2)/ϕ ^ n_1',n_1'_n_L'ϕ ^ n_1,n_1_n_L,
where
T^n_1',n_2'_n_1,n_2(G)=
∑_Λ , Ω =- ∞^∞ (-1)^ΛΩ e^i (μ' Λ + ν' Ω)e^-(i/2)G(Λ c + Ω d) ^∗×
exp(-(1/4l_B^2) |Λ c+Ω d|^2),
Y^m,n(G)=
{[ √(m!/n!) e^(-1/4)|G|^2(iG^∗/√(2))^n-m L^n-m_m(|G|^2/2),; n≥ m,; √(n!/m!) e^(-1/4)|G|^2(iG/√(2))^m-n L^m-n_n(|G|^2/2),; m≥ n,; ].
and L^β_α(x) are the Laguerre polynomials <cit.>.
In the right-hand side of Eqs. (15), (16) and (17) a complex notation for the vectors has been used (G=G_x+iG_y, c=c_x+ic_y, d=d_x+id_y).
The density of states (DOS) is defined as
ρ(E)=1/S∑_iδ(E-E_i),
where the summation is carried out over all the quantum states and S is the area of the sample. Considering the quasi-continuous energy spectrum inside each miniband, Eq. (18) can be transformed as
ρ(E)=1/(2π)^2S∑_j∫_FBZ dθ_1dθ_2δ(E-E_j(θ_1,θ_2)),
where the integration is carried out over the FBZ and j is the miniband index. In the numerical calculations we have replaced the Dirac delta function δ by a Lorentzian function with a small energy width Γ=10^-3 meV.
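A minimal numerical sketch of this broadened DOS is given below (Python/NumPy, our own illustrative code); the miniband energies E_j(θ_1,θ_2) are assumed to be precomputed on a uniform mesh of the FBZ.

import numpy as np

def density_of_states(E_grid, E_bands, S, gamma=1e-3):
    # E_grid  : energies at which the DOS is evaluated [meV]
    # E_bands : miniband energies on the FBZ, shape (n_bands, n_theta1, n_theta2) [meV]
    # S       : area of the sample; gamma : Lorentzian width [meV]
    n_bands, n1, n2 = E_bands.shape
    dtheta = (2.0 * np.pi / n1) * (2.0 * np.pi / n2)        # uniform (theta1, theta2) mesh element
    lorentz = (gamma / np.pi) / ((E_grid[:, None] - E_bands.reshape(1, -1))**2 + gamma**2)
    return lorentz.sum(axis=1) * dtheta / ((2.0 * np.pi)**2 * S)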
We calculate the orbital magnetization as
ℳ=1/(2π)^2∑_j∫_FBZ dθ_1dθ_2 f_B(E_j(θ_1,θ_2))ℳ_j(θ_1,θ_2),
where f_B(E) is the Fermi function with energy E, and the magnetization for each point in the reciprocal space is
ℳ_i(θ_1,θ_2)= 1/2 c S∫_S d 𝐫 (𝐫×𝐣_i,θ_1,θ_2(𝐫)) ê_z
with
𝐣(𝐫)=-e/2(𝐯̂ | ψ(r)⟩⟨ψ(r)|+| ψ(r)⟩⟨ψ(r)|𝐯̂),
the current density operator.
The velocity operator is 𝐯̂=(𝐩̂+(e/c)A(𝐫))/m.
Note that the use of formula (20), which differs from the one we previously used in <cit.>, is connected with the discrete integer values of the magnetic flux which one has to consider in a Hofstadter-like problem.
§ DISCUSSION
The numerical calculations are carried out for SLs composed of GaAs/Ga_1-xAl_xAs QDs with the following values of the parameters: the radius of a circular QD r_ d= 120 Å, the small and large semiaxes of the elliptical QD r_s=0.8 r_d and r_l= r_d, respectively, and the distance between two nearest QDs a = 250 Å. When considering a non-zero magnetic field, we have chosen a shallow potential for each QD, v_0 = -16 meV, to have a clearer picture of the magnetic field effect, while for the case of no magnetic field a confining potential v_0=-150 meV is chosen. The electron effective mass is m=0.067m_0, where m_0 is the free electron mass.
In Fig. <ref> the dispersion surfaces for a SL composed of circular (upper panel) and elliptical (lower panel) QDs are presented in the absence of an external magnetic field. Here k_x and k_y are the Cartesian components of the electron quasimomentum. The qualitative coincidence of the dispersion surfaces for the SL of circular QDs with those of graphene is obvious. As expected, there is an energy gap between the two minibands for the SL composed of elliptical QDs. The gap opening and the topological change of the dispersion surfaces near the Dirac points are consequences of the breaking of the triangular symmetry of the system. Instead, the SL with the elliptical QDs has a rectangular symmetry, which is reflected in the dispersion surfaces as well.
Fig. <ref> presents the density plots of the dispersion surfaces for a SL composed of circular QDs. The considered values of the magnetic flux per UC are Φ / Φ_0 = 1, 3/2, and 2, for the left, middle and right columns of the figure, respectively. The upper row of the figure corresponds to the 1st miniband, while the lower row is for the 2nd miniband. First of all, it is obvious that the dispersion surfaces retain their triangular symmetry when there is an integer number of magnetic flux quanta per UC (the left and right columns of Fig. <ref>). However, a fractional number of flux quanta per UC leads to the destruction of the triangular symmetry. For Φ/Φ_0=3/2 one of the lattice vectors of the system with magnetic field is twice the corresponding lattice vector of the original lattice, which leads to the contraction of the FBZ in the perpendicular direction (see the middle column of Fig. <ref>). Another interesting phenomenon is the shift of the positions in the FBZ of the maxima and minima of the dispersion surfaces corresponding to even and odd numbers of flux quanta per UC with respect to each other (compare the upper panel of Fig. <ref> with the left and right columns of Fig. <ref>). In all the cases with non-zero magnetic field a finite gap between the minibands is opened. This result is a consequence of the magnetic-phase interference between the states localized in the two QDs of the same UC.
Fig. <ref> represents the same as Fig. <ref>, but for a SL composed of elliptical QDs.
In this case the SL has a rectangular symmetry, which is not destroyed by the magnetic field when there is an integer number of flux quanta per UC. The energy values shown in the legends of the figure indicate a decrease of the gap between the minibands with the increase of the magnetic flux. One can also observe that the rectangular symmetry is better expressed for larger integer numbers of magnetic flux quanta per UC (compare the left and right columns of Fig. <ref>).
The dependence of the density of states on the electron energy for different values of the magnetic flux per UC is shown in Fig. <ref>. The red curves are plotted for a SL with circular QDs and the blue ones for a SL with elliptical QDs. As expected, the minibands, and hence the DOS, are shifted to higher energies for elliptical QDs, since the size quantization in elliptical QDs is stronger. In all the cases there is a finite-length energy region where the DOS is zero, which corresponds to the gap between the minibands. The DOS corresponding to circular QDs always has one maximum in each miniband. In contrast, the DOS for the SL with elliptical QDs has two obvious maxima in each miniband when Φ/Φ_0=3/2 or 2. When Φ/Φ_0=4 the maxima near the energy gap disappear. For Φ/Φ_0=1 one can observe three maxima in each miniband. The change in the number of DOS maxima significantly affects the optical characteristics of the system, which means that the magnetic field can be used as an efficient tool for the manipulation of the optical parameters of a honeycomb SL.
We demonstrate the fractal structure of the energy as a function of the inverse magnetic flux per UC in Figs. <ref> and <ref>. In order to save computational time, as well as to make the figures more readable, we present here the energies of an electron only for θ_1,2= ± 0.99π, 0. Fig. <ref> is obtained by using only two Landau bands with n_L=0 and n_L=1 in the expansion of the wave function in the basis functions (7), while the results shown in Fig. <ref> correspond to a basis with six Landau bands (n_L=0 - 5). Note that the basis with six Landau bands provides results with high enough accuracy (the estimated relative error is around 1 - 2%). Nevertheless, we present here also the case with two Landau bands in the basis to illustrate the evolution of the Hofstadter spectrum when going beyond the approximation of the Harper Hamiltonian <cit.>. As is known, the Harper Hamiltonian describes the motion of an electron in a discrete 2D lattice in a transverse homogeneous magnetic field in the framework of hopping parameters. In other words, the magnetic field does not affect the quantization strength in each QD or the tunneling between the QDs, but only the phase shifts of the wave function due to the translations from one site of the SL to another. It means that the results obtained in our work should approach those obtained in the framework of the Harper Hamiltonian for small enough values of the magnetic field and when there is no mixing between the Landau bands due to the SL potential. For a honeycomb lattice these conditions can be fulfilled taking only two Landau bands in the expansion of the wave function as a minimal basis for the description of the two “graphene-like" minibands. Indeed, the right half (where the magnetic field is comparatively small) of the energy spectrum in the upper panel of Fig. <ref> is very similar to the known Hofstadter spectrum of graphene. However, with the increase of the magnetic flux (with the decrease of Φ_0/Φ) the energies undergo an up-shift due to the quantizing effect of the magnetic field. Comparing the upper and lower panels in Fig. <ref> or in Fig. <ref>, one can observe an opening of a gap in the graphene-like Hofstadter spectrum due to the ellipticity of the QDs, with a width that oscillates with the change in the magnetic flux. As is seen from Fig. <ref>, the Hofstadter-like spectrum is “deformed" over the whole considered range of values of the magnetic flux. One can observe here larger energy gaps for smaller values of the magnetic flux per UC when six Landau bands are considered in the expansion of the wave function, compared with those in Fig. <ref>. The slight up-shift of the energies corresponding to the SL with elliptical QDs (lower panels in Figs. <ref> and <ref>) is in accordance with the results shown in Fig. <ref>.
In Fig. <ref> the magnetization of honeycomb SLs composed of circular (upper panel) and elliptical (lower panel) QDs versus the inverse magnetic flux (in units of the inverse flux quantum) is presented. The calculations are performed for 18 different rational values of Φ_0/Φ. These values are marked by vertical dashed lines, while the values of Φ/Φ_0 are indicated near the graphs. We consider low temperatures (1K, 2K and 3K), so only the 1st and the 2nd minibands have a significant contribution to the magnetization. As is obvious from the figures, the magnetization is always negative, that is, the system is diamagnetic. Generally, the magnetization undergoes strong oscillations which are especially pronounced at intermediate values of the magnetic flux. These oscillations are connected with the pq-fold splitting of the Landau bands into subbands with smaller widths and with the change of the periodicity of the system depending on h_1 and h_2. Interestingly, the magnitude of the magnetization is comparatively larger for fractional values of the flux than for its integer values. This is a consequence of the almost flat minibands with very weak dispersion at fractional values of Φ/Φ_0. Moreover, for even values of Φ/Φ_0 the magnitude of the magnetization is less than for its odd values. This is because the degenerate Landau orbitals in a UC of the SL mutually compensate each other. When pq=4, the four Landau orbitals in a UC are almost totally compensated in the SL with circular QDs and the magnetization is nearly zero (see the upper panel of Fig. <ref>). The elliptical shape of the QDs makes the compensation effect weaker, leading to a non-zero magnetization for the same value of Φ/Φ_0 (see the lower panel of the figure). For large enough values of the magnetic flux the magnitude of the magnetization decreases and its oscillations weaken. This is due to the vanishing values of the thermal distribution function corresponding to the higher values of the electron energy. Note that the effect of temperature on the magnetization significantly depends on the magnetic flux per UC. Namely, for the SL with circular QDs the increase of the temperature leads to an obvious increase of the magnitude of the magnetization for Φ/Φ_0= 7/6, 5/4, 4/3, 7/2 and to its decrease for Φ/Φ_0= 6/5, 5/3, 7/4, 7/3, 3, 5, 6, 7. One can also note that the ordering of the values of the magnetization corresponding to different values of temperature in a SL with elliptical QDs differs from that in a SL with circular QDs when Φ/Φ_0=6/5 and Φ/Φ_0=7/5.
§ CONCLUSIONS
Summarizing, we have presented a comparative study of the electron energy dispersions and the magnetization of artificial graphene-like honeycomb SLs composed of cylindrical and elliptical QDs.
We develop our theoretical model in the frame of the method proposed earlier by Ferrari, where a complete orthonormal set of basis wave functions is used, which reflects both the SL translational symmetry and the wave function phase-shifts due to the transverse magnetic field in the symmetric gauge of the vector potential.
Our calculations indicate a topological change in the miniband structure due to the ellipticity of QDs.
We observe non-trivial displacements in the reciprocal space of the energy dispersion surfaces and transformations in the translational symmetry of the system when passing through different rational values of the number of magnetic flux quanta per UC of the SL.
The maxima of the DOS are doubled due to the ellipticity of the QDs for some values of the magnetic flux.
The Hofstadter spectrum of the SL with circular QDs qualitatively coincides with that of graphene for comparatively small values of the magnetic flux and when two Landau bands are considered in the expansion of the wave function. However, the consideration of higher Landau bands leads to a significant modification of the Hofstadter spectrum. The ellipticity of the QDs leads to a gap opening and to a considerable modification of the Hofstadter spectrum.
The magnetization reveals non-trivial oscillations as a function of the magnetic flux. Whether the magnetic flux is integer or fractional plays a crucial role in the diamagnetic behaviour of the system. The oscillations of the magnetization, as well as the ordering of its values corresponding to different temperatures, depend considerably on the geometry of the QDs.
§ ACKNOWLEDGEMENT
This work was financially supported by the Armenian
State Committee of Science (grants No 21SCG-1C012,
No 21T-1C247, 20TTWS-1C014 and No 21AG-1C048), by the Research Fund of the University of Iceland, and the Icelandic Infrastructure Fund.
The computations were performed in the Center of Modelling and Simulations of Nanostructures at Yerevan State University.
99
Geim A.K. Geim, K.S. Novoselov, Nature Materials 6, 183 (2007).
GomesK.K. Gomes, W. Mar, W.Ko, F. Guinea, H.C. Manoharan, Nature 483, 306 (2012).
TarruellL. Tarruell, D. Greif, T. Uehlinger, G. Jotzu, T. Esslinger, Nature 483, 302 (2012).
SinghaA. Singha, M. Gibertini, B. Karmakar, S. Yuan, M. Polini, G. Vignale, M. I. Katsnelson, A. Pinczuk, L.N. Pfeiffer, K.W. West, V. Pellegrini, Science 332, 1176 (2011).
Scarabelli D. Scarabelli, S. Wang, A. Pinczuk, S.J. Wind, Y.Y. Kuznetsova, L.N. Pfeiffer, K. West, G.C. Gardner, M.J. Manfra, V. Pellegrini, J. Vac. Sci. Technol. B, 33(6), 06FG03 (2015).
Han M.Y. Han, B. Özyilmaz, Y. Zhang, P. Kim, Phys. Rev. Lett. 98, 206805 (2007).
Guinea F. Guinea, M.I. Katsnelson, A.K. Geim, Nat. Phys. 6, 30 (2010).
Gui G. Gui, J. Li, J. Zhong, Phys. Rev. B 78, 075435 (2008).
Pereira V.M. Pereira, A.H. Castro Neto, N.M.R. Peres, Phys. Rev. B 80, 045401 (2009).
Cocco G. Cocco, E. Cadelano, L. Colombo, Phys. Rev. B 81, 241412(R) (2010).
Gao H. Gao, L. Wang, J. Zhao, F. Ding, J. Lu, J. Phys. Chem. C 115, 3236 (2011).
Zhou S.Y. Zhou, G.H. Gweon, A.V. Fedorov, P.N. First, W.A. de Heer, D.H. Lee, F. Guinea, A.H. Castro Neto, A. Lanzara, Nat. Mater. 6, 770 (2007).
Giovannetti G. Giovannetti, P.A. Khomyakov, G. Brocks, P.J. Kelly, J. van den Brink, Phys. Rev. B 76 073103 (2007).
Mughnetsyan1 V. Mughnetsyan, A. Manaselyan, M. Barseghyan, A. Kirakosyan, D. Laroze, Phys Rev. B 100, 195132 (2019).
Mughnetsyan2 V. Mughnetsyan, Superlattices and Microstructures 147, 106700 (2020).
Azbel M. Ya, Azbel, JETP 19, 634 (1964).
Hofstadter D.R. Hofstadter, Phys. Rev. B 14, 2239 (1976).
Guillement J.P. Guillement, B. Helffer, P. Treton, Journal de Physique, 50, 2019 (1989).
Beugeling W. Beugeling, N. Goldman, and C. Morais Smith
Gumbs1 G. Gumbs, A. Iurov, D. Huang, L. Zhemchuzhna, Phys. Rev. B 89, 241407(R) (2014).
Rokaj V. Rokaj, M. Penz, M.A. Sentef, M. Ruggenthaler, A. Rubio, Phys. Rev. Lett. 123, 047202 (2019).
Yang W. Yang, X. Lu, G. Chen, S. Wu, G. Xie, M. Cheng, D. Wang, R. Yang, D. Shi, K. Watanabe, T. Taniguchi, C. Voisin, B. Plaçais, Y. Zhang, G. Zhang, Nano Letters 16 2387 (2016).
Dean C. Dean, L. Wang, P. Maher, et al. Nature 497, 598 (2013).
Goerbig M.O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
Ferrari R. Ferrari, Phys. Rev. B 42, 4998 (1990).
Silberbauer H. Silberbauer, J. Phys.: Condens. Matter 4, 7355 (1992).
Gudmundsson1 V. Gudmundsson, R. Gerhardts, Phys. Rev. B 52, 16744 (1995).
Gudmundsson2 V. Gudmundsson, R. Gerhardts, Phys. Rev. B 54, R5223 (1996).
Gudmundsson3 V. Gudmundsson, V. Mughnetsyan, N. R. Abdullah, C.S. Tang, V. Moldoveanu, and A. Manolescu, Phys. Rev.
B 106, 115308 (2022).
Mansoury M. Mansoury, V. Aziz-Aghchegala, V. Mughnetsyan, A. Kirakosyan, V. Gudmundsson, Phys. Lett. A 448, 128324 (2022).
CastroNeto A.H. Castro Neto, F.Guinea, N.M.R. Peres, K.S. Novoselov, A.K. Geim, Rev. Mod. Phys. 81, 109 (2009).
|
http://arxiv.org/abs/2307.01257v1
|
20230703180001
|
Polyakov blocks for the 1D CFT mixed correlator bootstrap
|
[
"Kausik Ghosh",
"Apratim Kaviraj",
"Miguel F. Paulos"
] |
hep-th
|
[
"hep-th"
] | |
http://arxiv.org/abs/2307.00345v1
|
20230701135029
|
Microcanonical phase transitions for the vortex system
|
[
"Dario Benedetto",
"Emanuele Caglioti",
"Margherita Nolasco"
] |
math-ph
|
[
"math-ph",
"math.MP",
"82M30 35Q82 82B26 35Q35"
] |
plain
Microcanonical phase transitions]
Microcanonical phase transitions for the vortex system
D. Benedetto]Dario Benedetto
Dario Benedetto Dipartimento di Matematica, Università di Roma `La Sapienza'
P.le Aldo Moro 2, 00185 Roma, Italy,
INdAM - Istituto Nazionale di alta Matematica GNFM, Roma, Italy
[email protected]
E. Caglioti]Emanuele Caglioti
Emanuele Caglioti Dipartimento di Matematica, Università di Roma `La Sapienza'
P.le Aldo Moro 2, 00185 Roma, Italy,
INdAM - Istituto Nazionale di alta Matematica GNFM, Roma, Italy
[email protected]
M. Nolasco]Margherita Nolasco
Margherita Nolasco Dipartimento di Ingegneria e Scienze dell’informazione e Matematica,
Università dell’Aquila
Via Vetoio, Loc. Coppito, 67100
L'Aquila, Italy,
INdAM - Istituto Nazionale di alta Matematica GNAMPA, Roma, Italy
[email protected]
We consider the Microcanonical Variational Principle for the vortex
system in a bounded domain. In particular
we are interested in the thermodynamic
properties of the system in domains of second kind,
i.e. for which the equivalence of ensembles does not hold.
For connected domains close to the union of disconnected disks
(dumbbell domains),
we show that the
system may exhibit an arbitrary number of first-order phase transitions,
while the entropy
is convex for large energy.
[2020]
35Q35
35Q82
82B26
82M30
August 1, 2023
==================
§ INTRODUCTION
Starting from the pioneering paper by Onsager <cit.>, statistical
mechanics of point vortices has been widely studied, in particular in
the mean field limit <cit.>. In the case of a
fluid confined in a bounded simply connected set Λ in ^2,
the structure of the mean field variational principle and of the
related mean field equation presents many interesting features. First
of all, as suggested by Onsager, negative temperatures are allowed
and, according to the Microcanonical Variational Principle (MVP) the
entropy S decreases in the energy E, if E is
sufficiently large, and the vorticity density
converges to a delta function as E→∞.
Moreover, depending on the shape of the domain, the equivalence of
ensemble can be broken. If the domain is a disk or a
domain close to a disk then the entropy S(E) is a concave function of
the energy and the canonical variational principle and the
microcanonical variational principle are equivalent. Conversely, for
sufficiently long and thin domains, the two ensemble are not anymore
equivalent: the entropy is not a concave function of the energy and
the mean field equation associated to the canonical variational principle (CVP)
has not a unique solution for some values
of the inverse temperature β<-8π <cit.>.
These two kinds of domains are called
respectively domains of first and second type.
Recently the behavior of the MVP in domains of second kind
has been
carefully analyzed <cit.>, and
the natural question of whether the
entropy is definitely convex for large
values of the energy has been posed.
In <cit.>
it has been proved that if a second kind domain is convex
then the branch of solutions of the MVP is regular and
the entropy is definitely convex.
In the present paper we construct non convex connected second kind
domains for which the entropy is definitely convex.
Moreover the branch is not regular and
the entropy exhibits first order phase transitions, i.e. jumps
of the derivative w.r.t. the energy.
To construct these solutions, we first solve the MVP for disconnected domains.
In particular we consider domains that are unions of disks or of slightly
deformed disks. In the case of more than three disks,
the entropy has a first order phase transition at low energy.
Considering N suitably deformed disks we also construct
disconnected domains which exhibit N first order phase
transitions at high energies.
Then, connecting the (deformed) disks with thin channels,
we obtain connected domains (dumbbell domains), for which
the branches of solutions and the phase transitions of the entropy
are preserved.
.3cm
Now we give some
definitions and recall some results.
Let Λ⊂^2 be a bounded open set with smooth boundary.
For a distribution probability with density ρ∈ L^1(Λ),
the entropy and the energy functionals are defined respectively as
S(ρ) = -∫_Λρlnρ
E(ρ) = 1/2∫_ΛρΨ =
1/2∫ |∇Ψ|^2,
where the stream function Ψ solves
-ΔΨ = ρ, with .
Ψ|_∂Λ = 0.
Denoting with G(x,y)
the Green function of the Poisson problem in Λ
with homogeneous Dirichlet boundary conditions,
Ψ(x) = ∫_Λ G(x,y) ρ(y) y.
We write
G(x, y ) = - 1/2πln |x - y| + γ(x, y)
where γ is the regular part of G.
We also define the free-energy functional
F(β, ρ) = E(ρ)-1/β S(ρ)
where β∈ℝ is the inverse of the temperature. Note that in
the case of the statistical mechanics of vortices, β
can be also negative.
.3cm
The MVP - Microcanonical Variational Principle
is the problem of finding the distribution ρ
which maximizes S(ρ) among the probability distribution with fixed
energy E>0:
S(E) = sup_ρ∈ P_E S(ρ),
where P_E = {ρ∈ L^1(Λ)| ρ≥ 0, ∫_Λρ = 1, E(ρ) = E}.
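For illustration only, the functionals S(ρ) and E(ρ) can be evaluated numerically once the Poisson problem for the stream function is discretized; the sketch below (Python/SciPy, our own code) uses a standard five-point Laplacian with homogeneous Dirichlet conditions on a square, which is merely a convenient stand-in for the general smooth domains considered here, and assumes ρ>0 on the grid.

import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def entropy_energy(rho, L=1.0):
    # rho : density sampled on the interior n x n grid of the square (0,L)^2
    n = rho.shape[0]
    h = L / (n + 1)
    D = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2   # 1D -d^2/dx^2 with Dirichlet BC
    lap = kron(identity(n), D) + kron(D, identity(n))               # 2D -Laplacian
    psi = spsolve(lap.tocsr(), rho.ravel()).reshape(n, n)           # -Laplace(Psi) = rho, Psi = 0 on boundary
    S = -np.sum(rho * np.log(rho)) * h**2
    E = 0.5 * np.sum(rho * psi) * h**2
    return S, E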
When needed, we specify
with S_Λ, S_Λ, P_Λ,E
etc. the dependence on the set Λ.
We indicate with ρ_m = 1/|Λ|
the uniform probability distribution,
with energy E_m.
We recall the following results from <cit.>.
Let Λ be a bounded open connected set.
For any E>0, -∞ < S(E) ≤ln |Λ|
and there exists ρ∈ P_E such
that S(E) = S(ρ). Moreover ρ>0 and there exists
a unique β∈ℝ such that the stream function
satisfies the mean-field equation (MFE)
-ΔΨ = ρ = (1/Z) e^-βΨ, Z = ∫_Λ e^-βΨ,
with homogeneous Dirichlet boundary conditions.
The maximum of S(E) is reached at S_m
= S(ρ_m) = ln |Λ|.
S(E) is
continuous,
strictly increasing for E< E_m,
strictly decreasing for E>E_m
and S(E)→ -∞ for E→ 0 and E→ +∞.
Notice that the MFE (<ref>) is also the Euler equation associated
to the CVP
- Canonical Variational Principle, introduced in <cit.>, that is
the extremal problems for the free-energy functional F(β,ρ):
F(β) = inf_ρ∈ P F(β,ρ) if β >0
sup_ρ∈ P F(β,ρ) if β < 0
where
P= {ρ∈ L^1(Λ)| ρ≥ 0, ∫_Λρ = 1}
Therefore we have two different variational principles, CVP and MVP,
which have the same Euler equation, but clearly this fact is
not sufficient for the equivalence of the two sets of solutions (equivalence of ensembles).
We now summarize some known results on the MFE,
the CVP and the equivalence of ensembles,
which will be useful in the sequel.
For fixed β≥ 0, the MFE has a unique solution.
The case β < 0 is more delicate to deal with.
For β∈ (-8π, 0] and a simply connected
domain, the uniqueness and the regularity of the branch of solutions
of the MFE (<ref>) were proved in <cit.>, and then extended to
multiply connected sets in <cit.>. Moreover, from <cit.>
(see also <cit.>), we also know that for β = -8π
the MFE has at most one solution on a connected domain.
Let E(β) ≔ E(ρ_β) be the energy of the
(unique) solution of the MFE at inverse temperature β > -8π
and define
E_-8π ≔ lim_β→ (-8π)^+ E(β) ∈ (0,+∞].
As
in <cit.>, we say that a bounded open
connected set Λ is a first kind domain
if E_-8π = +∞ and is a second kind domain if
E_-8π < +∞.
Let Λ be a bounded open connected set with smooth boundary.
* F(β) is finite iff β≥ -8π. For
β> -8π the CVP has a unique solution ρ_β, and the
stream function Ψ_β solves the MFE (<ref>).
Moreover the set of solutions
{Ψ_β : β > - 8 π} is a regular branch.
* β↦β F (β)
is a strictly concave
increasing and continuously differentiable function,
(- 8 π, + ∞) ∋β↦
E(β) ∈ (0,E_-8π) is decreasing and continuously
differentiable,
S(E) = inf_β (β E - β F (β) )
is a smooth concave function of E∈ (0,E_-8π) and
S'(E) = β.
* If E ∈ (0, E_-8π) and ρ_E is solution of the
MVP then ρ_β = ρ_E(β) uniquely solves the CVP and
MFE.
In view of ii) of the theorem, if the domain is of first kind
S(E)
is a strictly concave function of the energy for any E>0.
Differently, it is possible to show that,
if the domain is of second kind, for some range of
values of the energy larger than
E_-8π, the solution of the MVP is not unique, and the
entropy cannot be a concave function;
in particular there is no equivalence of the
ensembles (we refer the reader to
<cit.> for further details).
A characterization of the domains of second kind can be summarized as follows (see <cit.>)
Let Λ be a bounded open connected set of class C^1,
and let ρ_β be the unique solution of the mean field equation
for β> -8π,
then the following facts are equivalent:
* Λ is a second kind domain.
* F (-8π) is attained and the unique branch of
maximizers ρ_β with β > - 8 π converges uniformly
to the maximizer for β = - 8π.
* The mean field equation (<ref>) has a (unique) solution
for β = -8π.
Conversely, Λ is a first kind domain iff the unique branch
of maximizers ρ_β with β > - 8 π blows up as
β→ (-8π)^+. In particular ρ_β converges weakly
to the δ-measure in the point x_0 which is the unique
maximum point of γ(x, x) in Λ.
Recalling that ∂_E S = β, phase transitions
occur if {ρ_E}_E is not connected
in E. More precisely, the solution of the MVP jumps
between different branches of solutions
of the mean-field equation, or between different sections of branches.
We prove the following results.
Theorem (Low energy phase
transitions)
If Λ is the union of N ≥ 3 disks
with sufficiently close radii,
then
there exists E_*
such that S(E) has a first order phase transition for E=E_*
(see Theorem <ref>).
Moreover, there exist connected “dumbbell domains” obtained
by
joining
the N disks with thin channels,
such that S(E) has a first order phase transition near E_*
(see Theorem <ref>).
Theorem
(High energy phase transitions)
There exist domains Λ, disjoint unions of N suitably deformed
disks,
such that
S(E) has N first order phase transitions,
for sufficiently large values of E (see Theorem <ref>).
Moreover, there exist connected dumbbell domains obtained by
joining the components of Λ with thin channels,
such that S(E)
has N first order phase transitions
for sufficiently large values of E
(see Theorem <ref>).
In Section 2 we introduce the MVP for disconnected domains,
and in Section 3 we analyze the particular case of domains made of N
disks.
In Section 4 we prove the phase transitions
for disconnected domains.
In Section 5 we use perturbative arguments to
extend the results of Section 4 to connected dumbbell domains.
§ MVP FOR DISCONNECTED SETS
In this section we address the case of disconnected
domains, which we reduce to the MVP
restricted to each connected component, where the probability mass
is in general less than 1.
To this aim, we first extend the variational principle to
the case of distributions of mass M>0.
The MVP becomes
S(M,E) = sup_ρ∈ P_M,E S(ρ),
where P_M,E = {ρ∈ L^1(Λ)| ρ≥ 0, ∫_Λρ = M, E(ρ) = E}
Now we show that S(M,E) is simply
given in terms of S(E)S(1,E), and
we compute the derivative of S.
S(M,E) = M S(E/M^2) - M ln M,
and this value is attained at ρ∈ P_M,E
which solves
ρ = -ΔΨ = (M/Z) e^{-βΨ}, with
Z = ∫_Λ e^{-βΨ}.
Moreover
S(M,E) = M ln (Z/M) + 2β E.
Given ρ∈ P_M,E, ρ̃= ρ /M
is a probability density with
E(ρ̃) = E(ρ)/M^2, and
S(ρ) = M S(ρ̃) - M ln M.
We conclude observing that
ρ∈ P_M,E iff ρ̃∈ P_1,E/M^2.
Eq. (<ref>) follows from the definition
of S(ρ) and the mean-field equation (<ref>).
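The scaling behind this lemma is easy to check numerically. The following short Python sketch (purely illustrative; the grid, the test profile and the helper name are our own choices) verifies the identity S(ρ) = M S(ρ̃) - M ln M for ρ = M ρ̃ on a square domain.
```python
import numpy as np

# Illustrative check of S(rho) = M*S(rho_tilde) - M*log(M) for rho = M*rho_tilde.
# The domain, the grid and the test density are arbitrary choices made for the example.
n = 200
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
dA = (x[1] - x[0]) ** 2

def entropy(rho):
    """Discrete version of S(rho) = -int rho*log(rho)."""
    return -np.sum(rho * np.log(rho)) * dA

rho_tilde = np.exp(-3.0 * ((X - 0.4) ** 2 + (Y - 0.6) ** 2))   # any positive profile
rho_tilde /= np.sum(rho_tilde) * dA                            # normalize to mass 1

M = 0.37                                                       # arbitrary mass
rho = M * rho_tilde                                            # density of mass M

lhs = entropy(rho)
rhs = M * entropy(rho_tilde) - M * np.log(M)
print(lhs, rhs)            # the two values agree up to quadrature error
```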
If Λ is a connected bounded open set of first kind,
S depends regularly on E and M, and
(i) ∂_E S(M,E) = β,
(ii) ∂_M S(M,E) = ln (Z/M) - 1.
Since Λ is of first kind,
S(E) is regular for E∈ (0,+∞). Then
∂_E S(M,E) = (1/M) S'(E/M^2),
and (i) follows from the fact that if ρ solves
(<ref>), then ρ/M satisfies the MFE (<ref>) with inverse
temperature Mβ.
By using (<ref>), (i), and (<ref>)
we obtain (ii).
We remark that (i) and (ii) hold also for domains of second kind if E/M^2 < E_-8π.
Recalling that
for a bounded connected set Λ
the solution ρ of equation (<ref>) is unique
if Mβ > -8π,
we can
define
E(M,β) ≔ E(ρ), and Z(M,β) ≔ ∫_Λ e^{-βΨ}.
We set E(β) ≔ E(1,β) and
Z(β) ≔ Z(1,β).
Since ρ/M and Ψ/M solve the MFE (<ref>) with inverse
temperature Mβ, we have
E(M,β) = M^2 E(Mβ), Z(M,β) = Z(Mβ).
Now we can state our result on the MVP for
a disconnected set.
Let Λ be a set given by
Λ = ⋃_i=1^N Λ_i
where Λ_i are open connected bounded sets of first kind, which do not
intersect.
Given E>0,
there exists ρ∈ P_E which maximizes S_Λ.
The restricted densities ρ_i ≔ ρ|_Λ_i
solve the MVP in the sets P_Λ_i,E_i,M_i
for some E_i>0 and M_i>0,
and the entropy S_Λ(E) satisfies
S_Λ(E) = sup { ∑_i=1^N S_Λ_i(M_i,E_i) : E_i>0, ∑_i E_i = E, M_i>0, ∑_i M_i = 1 }.
Moreover, ρ>0 and there exists
β ∈ ℝ such that the stream function Ψ
satisfies the MFE (<ref>) in Λ;
the restricted density ρ_i (i=1,…,N) satisfies
ρ_i = -ΔΨ_i = (M_i/Z_i) e^{-βΨ_i} in Λ_i,
with
Z_i = ∫_Λ_i e^{-βΨ_i},
and Z_i/M_i = Z = ∫_Λ e^{-βΨ} for all i=1,…,N.
Easily adapting the proof of
Theorem <ref> to Λ, one can prove
that there exists ρ∈ P_E which maximizes S_Λ, and that
the support of ρ is the whole set Λ. This allows us to
prove that the MVP in P_Λ_i,E_i,M_i is solved by the restricted
density ρ_i, where E_i and M_i are the energy and the mass of ρ_i.
As a consequence, S_Λ(E) is given by (<ref>).
Again from
Theorem
<ref> and lemma <ref>, each ρ_i solves
(<ref>) for some β_i.
We conclude the proof by showing that β_i
and Z_i/M_i do not depend on i,
which implies that the partition function in Λ is
Z = ∑_j=1^N Z_j = ∑_j=1^N (Z_j/M_j) M_j = Z_i/M_i, ∀ i=1,…,N,
and ρ solves the MFE (<ref>) with β = β_i.
Indeed, the maximum problem (<ref>)
can be solved by looking for the critical points of
∑_i=1^N S_Λ_i(M_i,E_i) - (α - 1)( ∑_i=1^N M_i - 1) - β( ∑_i=1^N E_i - E),
where α and β are Lagrange multipliers.
Differentiating (<ref>) w.r.t. E_i and M_i and using Lemma <ref>, for any i=1,…,N
we get
β_i = ∂_E_i S_Λ_i(M_i,E_i) = β, ln (Z_i/M_i) = ∂_M_i S_Λ_i(M_i,E_i) + 1 = α.
We remark that if ρ is a probability density with
energy E and solves the
MFE (<ref>) with inverse temperature
β,
then the restricted density ρ_i with energy E_i
solves the equation (<ref>),
with M_i = ∫_Λ_iρ and Z_i = M_i Z.
Conversely, if ρ_i satisfies (<ref>)
for some β, then ρ solves the MFE (<ref>),
provided the ratios Z_i/M_i do not depend on i.
As in the case of a bounded connected set, we indicate with
E_m the value of the energy at which the maximum
of the entropy S_m = log |Λ| is achieved, corresponding to
β = 0 in (<ref>).
In the sequel we focus on energies larger than E_m,
which is the case of negative temperature β < 0.
As we show in the next section, even if the domains
Λ_i are of first kind,
the MVP for the disconnected
set Λ
may have solutions with inverse temperature β ≤ -8π.
In order to solve the MVP for disconnected
domains, we need to find the solutions of the MFE
and then compare the values of the entropy of the solutions,
as in (<ref>).
We now show how to construct branches of solutions.
To simplify the notation, we express the thermodynamic quantities
for a first kind domain Λ_i at negative inverse
temperature β∈ (-8π,0] as functions of the
parameter
μ = -β/(8π) ∈ [0,1): if ρ is the solution
of the MFE with mass one in Λ_i,
e_i(μ) ≔ E_Λ_i(ρ), z_i(μ) ≔ ∫_Λ_i e^{-βΨ},
so that, by the scaling properties (<ref>),
setting μ = -Mβ/(8π):
E_i(M,β) = M^2 e_i(μ) = ((8π)^2/β^2) μ^2 e_i(μ),
Z_i(M,β) = z_i(μ).
In the hypothesis of Thm. <ref>,
for any fixed
γ∈ (0, max_i sup_μ∈(0,1)μ/ z_i(μ)),
let μ_i∈ (0,1) be the solutions of the equations
μ_i = γ z_i(μ_i).
Let ρ̃_i be the unique solutions of the MFE
(<ref>) in Λ_i
with inverse temperature -8πμ_i.
Set
β = -8π∑_i μ_i, M_i = -8πμ_i/β, ρ_i = M_i ρ̃_i.
Then
ρ(x) = ∑_i ρ_i(x) 𝟙_Λ_i(x)
solves the MFE (<ref>)
in Λ, with energy and entropy given
respectively by
E(ρ) = ((8π)^2/β^2) ∑_i μ_i^2 e_i(μ_i),
S(ρ) = -log γ + log( β/(-8π) ) + 2β E(ρ).
By construction Z_i/M_i does not depend on i, so
the proof is an easy consequence of the previous theorem, the scaling
properties (<ref>), and (<ref>) for M=1.
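The construction in the proposition is easy to implement. The Python sketch below (illustrative only; the function names, the root bracket and the example components are our own choices) solves μ_i = γ z_i(μ_i) on each component and assembles β, the masses M_i, the energy and the entropy of the corresponding solution on the disjoint union.
```python
import numpy as np
from scipy.optimize import brentq

def assemble_solution(gamma, z_list, e_list):
    """Given gamma and the component functions z_i, e_i, build one MFE solution on the union."""
    mus = []
    for z in z_list:
        # root of  mu - gamma*z(mu) = 0; the bracket (0, 1/2] selects the smaller root,
        # other brackets select other roots/branches (existence requires an admissible gamma)
        mus.append(brentq(lambda m, z=z: m - gamma * z(m), 1e-12, 0.5))
    mus = np.array(mus)
    beta = -8.0 * np.pi * np.sum(mus)
    M = -8.0 * np.pi * mus / beta                            # masses M_i, summing to 1
    E = (8.0 * np.pi / beta**2) * sum(m**2 * e(m) for m, e in zip(mus, e_list))
    S = -np.log(gamma) + np.log(beta / (-8.0 * np.pi)) + 2.0 * beta * E
    return mus, M, beta, E, S

# Example: two disks of areas 1 and 0.7 (their z_i and e_i are recalled in the next section).
e_disk = lambda m: (-m - np.log(1.0 - m)) / (8.0 * np.pi * m**2)
z1 = lambda m: 1.0 / (1.0 - m)
z2 = lambda m: 0.7 / (1.0 - m)
print(assemble_solution(0.1, [z1, z2], [e_disk, e_disk]))
```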
In the following section
we specialize our analysis to the case of
unions of
disjoint disks.
§ THE LANDSCAPE OF THE BRANCH OF
SOLUTIONS FOR N DISKS
We recall that for a disk centered
at x=0 with radius R and area a = πR^2, the (unique)
solution of the equation (<ref>) is
Ψ(x) = (2/β) ln( 1 + (β/8π)(1 - |x|^2/R^2) ),
ρ(x) = (1/Z) ( 1 + (β/8π)(1 - |x|^2/R^2) )^{-2},
where
Z = πR^2/(1 + β/8π),
under the necessary condition β > -8π.
The energy is
E = (8π/β^2)( β/8π - ln( 1 + β/8π ) ).
Then, the quantities defined in (<ref>) are given by
e(μ) = (1/(8π)) (1/μ^2) ( -μ - log(1-μ) ),
z(μ) = a/(1-μ).
Note that the energy e(μ) does not depend on the area.
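For later reference, the two functions can be coded in a few lines; the following Python snippet (illustrative, with names of our own choosing) also checks that e(μ) tends to 1/(16π), the energy of the uniform density on a disk, as μ → 0, and that e is increasing in μ.
```python
import numpy as np

def e_disk(mu):
    """e(mu) = (1/(8*pi*mu^2)) * (-mu - log(1-mu)); independent of the area."""
    return (-mu - np.log(1.0 - mu)) / (8.0 * np.pi * mu**2)

def z_disk(mu, area=1.0):
    """z(mu) = area / (1 - mu)."""
    return area / (1.0 - mu)

mu = np.linspace(1e-4, 0.999, 2000)
print(e_disk(1e-4), 1.0 / (16.0 * np.pi))      # e(mu) -> 1/(16*pi) as mu -> 0
print(bool(np.all(np.diff(e_disk(mu)) > 0)))   # e(mu) is increasing on (0,1)
```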
Let us specialize Proposition
<ref> to the case
of a domain Λ which is the union of N disjoint disks D_i,
i=1,…,N,
of area a_i, such that 1 = a_1 ≥ a_2 ≥ … ≥ a_N > 0.
For fixed γ, we have to solve
μ_i (1-μ_i) = a_i γ, i=1,…,N.
The solutions are
μ_i^± = (1/2)( 1 ± √(1 - 4a_i γ) ),
which exist if and only if γ∈ [0,1/4].
Note that as γ goes from 0 to 1/4, μ_1^-
increases from 0 to 1/2 and μ_1^+ decreases from 1 to 1/2,
so that we can parametrize with μ = μ_1 ∈ [0,1]
all the other solutions μ_i, i≠ 1, as follows:
μ_i^± = (1/2)( 1 ± √(1 - 4a_i μ(1-μ)) ),
i = 2,…,N.
Note that μ_i^- ≤ μ ≤ μ_i^+.
Any choice of {μ_i}_i≥ 2 among the 2^{N-1}
possibilities gives
different values {M_i,E_i}_i and
a different solution ρ of the MFE (<ref>). In this way, as μ varies
in [0,1],
we obtain different branches
of solutions, as we describe below.
§.§ k-branches
We fix a subset
I^+⊂{2,… N-1},
and its complementary
I^-={2,… N-1}\ I^+:
if i∈ I^+ we
choose the solution μ_i^+, if i ∈ I^- we choose
the solution μ_i^-.
In this configuration, we put
in D_i, with i∈ I^+, a mass greater than μ, the mass in
D_1, and in D_i, with i∈ I^- a mass less than μ.
It is convenient to write all the thermodynamic quantities
as functions of the parameter μ∈ [0,1] as follows
β(μ) = -8π( μ + ∑_i∈ I^+ μ_i^+ + ∑_i∈ I^- μ_i^- ),
Z(μ) = -β(μ) a_1/( 8π(1-μ)μ ),
E(μ) = (8π/β^2)( μ^2 e(μ) +
∑_i∈ I^+ (μ_i^+)^2 e(μ_i^+) + ∑_i∈ I^-
(μ_i^-)^2 e(μ_i^-) ),
S(μ) = ln Z(μ) + 2β(μ) E(μ).
Note that
if a_i < 1 for all i≥ 2
all quantities are regular functions of μ∈ (0,1).
For |I^+| = k there are \binom{N-1}{k}
different branches of solutions of the MFE (<ref>),
which we call k-branches.
In particular there
exists only one 0-branch, corresponding to I^+ = ∅.
Note also that if a_i
are equal for some i, the functions S(μ),
E(μ), β(μ)
can be the same on different k-branches.
It is easy to show that on the k-branches with k≥ 1 we
always have β < -8π.
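The formulas above are straightforward to evaluate. The Python sketch below (illustrative; the areas and the chosen I^+ are arbitrary) computes β(μ), Z(μ), E(μ) and S(μ) on a k-branch for a union of N disks; comparing the entropies of different branches at equal energy is then a matter of post-processing.
```python
import numpy as np

def e_disk(m):
    return (-m - np.log(1.0 - m)) / (8.0 * np.pi * m**2)

def k_branch(mu, areas, I_plus):
    """beta, E, S at parameter mu on the branch selected by I_plus.

    areas : (a_1, ..., a_N) with a_1 = 1 >= a_2 >= ... ;
    I_plus: indices i (2 <= i <= N) where the root mu_i^+ is chosen.
    """
    gamma = mu * (1.0 - mu)                                  # a_1 * gamma with a_1 = 1
    disc = np.sqrt(1.0 - 4.0 * np.asarray(areas[1:]) * gamma)
    plus = np.array([i in I_plus for i in range(2, len(areas) + 1)])
    mu_others = np.where(plus, 0.5 * (1.0 + disc), 0.5 * (1.0 - disc))
    mus = np.concatenate(([mu], mu_others))
    beta = -8.0 * np.pi * np.sum(mus)
    Z = -beta * areas[0] / (8.0 * np.pi * (1.0 - mu) * mu)
    E = (8.0 * np.pi / beta**2) * np.sum(mus**2 * e_disk(mus))
    S = np.log(Z) + 2.0 * beta * E
    return beta, E, S

areas = (1.0, 0.95, 0.9)
for mu in (0.10, 0.30, 0.45):
    print(k_branch(mu, areas, I_plus=set()),      # the 0-branch
          k_branch(mu, areas, I_plus={2}))        # a 1-branch (mu_2^+ chosen)
```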
It is useful to extend the definition of domains of
first and second kind to disconnected sets.
In the case of a union of disjoint disks,
we say that
Λ is a first kind set if
the 0-branch entirely lies in the region β > -8π;
otherwise,
Λ is a second kind set.
If N=2 and a_2∈ (0,1), Λ
is of the first
kind, S
is a concave function, decreasing for E ≥ E_m,
and there is equivalence of ensembles.
On the 0-branch β > -8π and
E(β) is a decreasing function,
diverging when β→ -8π^+.
This fact can be used to easily extend
Theorem <ref> to this case.
Note that in the case of two identical disks
the set is of the second kind.
For N≥ 3 we can have both first or second kind sets, depending
on the values of a_i, as we show below.
Even if S, E, β are quite simple functions of the variable μ,
the precise
behavior of the branches
is not really easy to study except,
as we will see later, in the particular case a_i = 1,
for all i=1, … N. So that let us
give a numerical picture.
If μ→ E(μ) is invertible, we write S(E) instead of S(μ(E)),
as well as if μ→β(μ) is invertible we write E(β) instead
of E(μ(β)).
In fig. <ref> we show an example of the different branches
both in the (β,E) and (E,S) planes.
In particular, on the left, we see that E(β) is
convex on all the k-branches, with k≥ 0. Moreover,
since ∂_E S = β and ∂_E^2 S = 1/∂_β E, on
the increasing part of E(β) the entropy S(E) is convex, and on
the decreasing part it is concave. On the 0-branch S(E)
is concave, hence Λ is of the first kind.
On the right,
Λ is of the second kind,
S(E) is concave up to the value E
corresponding to the turning point of the 0-branch
in the plane (β,E), in which
β<-8π reaches its minimum; then S(E) becomes convex.
Finally, by using (<ref>),
it is not difficult to prove that
If a_i are sufficiently small for i=2, … N then Λ
is of the first kind.
The solution S(E) of the MVP
has a complex behavior when the a_i are near 1.
In order to show this fact,
we start by analyzing the degenerate case,
namely when all the disks have the same area.
§.§ k-merged-branches
We consider
a_i=1 for all i=1,… N.
As γ goes from 0 to 1/4,
all μ_i^- are equal and increase from
0 to 1/2,
and all μ_i^+ are equal and decrease from 1 to 1/2;
moreover all the k-branches coincide.
It is convenient to re-parametrize the branches as follows. We set
μ_i^- = μ and μ_i^+ = 1 - μ; extending the values of
the parameter from μ∈ [0,1/2] to μ∈ [0,1], we get that
the k-branch (described by μ∈ [0,1/2]) “merges” with the
(N-k)-branch (described by μ∈ [1/2,1]) into a unique branch of
solutions, which we call the k-merged-branch.
Namely, we get a branch of solutions of the MFE (<ref>)
for which we choose (N-k) disks with
μ_i = μ, k disks with μ_i = 1-μ, and μ∈ [0,1]. Clearly
the k- and the (N-k)-merged-branches coincide,
so that we consider only k ≤ N/2.
The thermodynamic quantities in terms of the variable μ∈ (0,1) are
now given by
β(μ) = -8π( (N-k)μ + k(1-μ) ),
Z(μ) = -β(μ)/( 8π(1-μ)μ ),
E(μ) = (8π/β^2)( (N-k)μ^2 e(μ) + k(1-μ)^2 e(1-μ) ),
S(μ) = ln Z(μ) + 2β(μ) E(μ).
Note that all the merged-branches have a common solution,
reached for μ = 1/2, for which
β = β_c ≔ -4πN, of energy E_c and
entropy S_c.
In the (β,E) plane the merged branches are regular,
while all the k-branches lose their regularity at
β = β_c. For example,
the 0-branch is given by the union of the restriction
to the region β ≥ β_c of
the 0-merged-branch and of the
1-merged-branch (see figure <ref>).
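In the degenerate case the formulas above are explicit enough to evaluate directly; the following sketch (illustrative; names are ours) computes the merged-branch quantities and shows the common point β_c = -4πN at μ = 1/2.
```python
import numpy as np

def e_disk(m):
    return (-m - np.log(1.0 - m)) / (8.0 * np.pi * m**2)

def merged_branch(mu, N, k):
    beta = -8.0 * np.pi * ((N - k) * mu + k * (1.0 - mu))
    Z = -beta / (8.0 * np.pi * (1.0 - mu) * mu)
    E = (8.0 * np.pi / beta**2) * ((N - k) * mu**2 * e_disk(mu)
                                   + k * (1.0 - mu)**2 * e_disk(1.0 - mu))
    return beta, E, np.log(Z) + 2.0 * beta * E

N = 3
print(merged_branch(0.5, N, 0))   # common point (beta_c, E_c, S_c), with beta_c = -4*pi*N ...
print(merged_branch(0.5, N, 1))   # ... and the same point reached on the 1-merged-branch
```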
§ PHASE TRANSITIONS
In this section we exhibit disconnected sets for which
the solution of the MVP jumps from one point to another
of the branches. As a consequence, ∂_E S(E) is discontinuous.
In this sense, S(E) has a first order phase transition.
We analyze two cases: in the first, Λ is a union of
disks, and a phase transition occurs at low energy;
in the second, Λ is a union of N deformed disks,
and S(E) has N phase transitions
for large values of the energy.
§.§ Low energy phase transitions
In this subsection, Λ is a union of disks, for which we
constructed the
branches of solutions in the previous section.
If a_i ∈ (0,1) for i=2,…,N,
the solution of the MVP lies on the 0-branch.
Note that, in general, if Λ̃= L^-1Λ, with L>0,
then
S_Λ̃ (M,E) - M log |Λ̃| =
S_Λ (M,E) - M log |Λ|.
Indeed,
if ρ is a density
with support in Λ, mass M and energy E,
then ρ̃(x) = L^2 ρ(Lx)
has the same mass M and energy E as ρ.
As a consequence, if ρ maximizes
S_Λ, then
ρ̃ maximizes
S_Λ̃, hence
S_Λ̃(M,E) =
- ∫_Λ̃ L^2 ρ(Lx) log ρ(Lx)
- ∫_Λ̃ ρ̃ log L^2
= S_Λ(M,E) - M log L^2,
where L^2 = |Λ|/|Λ̃|.
We now rewrite the entropy (<ref>)
using the
scaling properties (<ref>):
S_Λ(E) = ∑_i=1^N S_D_a_i(M_i,E_i) =
∑_i=1^N S_D_a_1(M_i,E_i) + ∑_i=1^N M_i log a_i,
hence we get that if a_i < a_j then M_i ≤ M_j. Indeed, if
M_i > M_j, by exchanging the mass and the energy between the disks
D_a_i and D_a_j the entropy increases and we get a
contradiction since S_Λ(E) is the maximum over all the
possible configurations. In particular, we conclude that
M_i ≤ M_1 for all i≥ 2, condition fulfilled only on the
0-branch.
If N≥ 3 and for all i = 2,…,N, a_i is
sufficiently close to 1, then
there exists E_* > E_m
such that
β^- ≔ (∂S/∂E^-)(E_*) < β^+ ≔
(∂S/∂E^+)(E_*).
In the case of a_i=1 for all i,
in a neighborhood of (β_c,E_c), the branches of solutions
look as in fig. <ref>, on the left,
(in which are represented
the 0 and the 1-merged-branches),
while when a_i<1, for i=2… N very close to 1,
the 0-branch
looks as in the figure on the right.
The only qualitative difference is that in the case of
identical disks the 0-branch becomes singular in
β = β_c.
We start the proof considering the case of a_i<1,
represented in fig. <ref>, on the right,
for which the solution of the MVP lies on the 0-branch
(see Theorem <ref>).
We denote by β_d(E) the function β(E)
on the 0-branch, until the local maximum
E=E̅_c, for which the entropy is S̅_c.
On the remaining part of the branch, we call E_0 the local minimum,
reached in β_0, with entropy S_0, and
we denote by β_l(E)≤β_0 and β_r(E)≥β_0
the two values of β(E) in function of E≥ E_0.
We denote by S_d, S_l, S_r
the corresponding values of the entropy, which are regular functions
of E.
First we note that the argument in Lemma <ref>-(i)
applies in general to S(ρ) when ρ solves the
MFE, provided it depends regularly on E. Since this is true
if E ≠ E̅_c,
we have
∂_E S(E) = β on all parts of the branch.
Then,
for E∈ (E_0,E̅_c) we have
S̅_c - S_d(E) = ∫_E^E̅_c β_d < ∫_E^E̅_c β_l
= S̅_c - S_l(E),
S_l(E) - S_0 = ∫_E_0^E β_l < ∫_E_0^E β_r = S_r(E) - S_0,
hence S_l(E) < S_d(E) and S_l(E) < S_r(E). In particular
S̅_c < S_2 ≔ S_r(E̅_c) and S_0 < S_1 ≔ S_d(E_0).
Since
∂_E( S_r(E) - S_d(E) ) = β_r(E) - β_d(E) > 0,
and
S_r(E_0) - S_d(E_0) = S_0 - S_1 < 0, S_r(E̅_c) - S_d(E̅_c) = S_2 - S̅_c > 0,
there exists only one value E_* ∈ (E_0,E̅_c) such that
[ S_d(E) > S_r(E) for E∈ (E_0,E_*); S_d(E) < S_r(E) for E∈ (E_*,E̅_c). ]
In the case of identical disks
the same proof applies,
but we also have to prove that
the entropy of any other branch is always less than S_r(E)
(Thm. <ref> does not apply in this case).
This can be done with the same idea, using the fact
that all the branches have a common solution for β = β_c.
Let us describe in more detail the case of
a_i = a < 1 for all i = 2,…,N, with the help of some figures.
When a increases from 0 to 1, the
0-branch changes as in fig. <ref>:
indeed, as a → 1^- it converges partially to the 0-merged-branch
and partially to the 1-merged-branch,
restricted to the region β ≥ β_c.
The entropy on the 0-branch changes as in fig.
<ref>.
At the point A
it becomes convex,
between B and C it is concave, after C it is convex, and
(E(μ),S(μ)) intersects itself.
Numerical evidence indicates that this happens in the first
concave piece, i.e. before the point A.
For given E, the solution S(E) of the MVP is the maximum value
of the entropy on the branches,
hence a phase transition appears.
We warn the reader that the numerical inspection is extremely delicate:
the distances between the three entropy branches in the last picture of
fig. <ref>
are very small compared to the distances between the points C and B.
For example, when N=3, a=1-10^-5, the ratio is about 2× 10^-3.
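A rough numerical implementation of the argument above is sketched below in Python (the value of a, the grid and the tolerances are arbitrary choices of ours, and how visible the jump is depends strongly on how close a is to 1, as just warned). It samples the 0-branch for N = 3 disks of areas (1, a, a), locates the turning points E̅_c and E_0, and, for a range of energies in between, records the β of the point of maximal entropy; the jump of β signals E_*.
```python
import numpy as np

def e_disk(m):
    return (-m - np.log(1.0 - m)) / (8.0 * np.pi * m**2)

a = 1.0 - 1e-4                                   # areas (1, a, a); an arbitrary test value
mu = np.linspace(1e-3, 1.0 - 1e-3, 400001)
gamma = mu * (1.0 - mu)
mu2 = 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * a * gamma))   # root mu_i^- of the two smaller disks
beta = -8.0 * np.pi * (mu + 2.0 * mu2)
Z = -beta / (8.0 * np.pi * gamma)                    # a_1 = 1
E = (8.0 * np.pi / beta**2) * (mu**2 * e_disk(mu) + 2.0 * mu2**2 * e_disk(mu2))
S = np.log(Z) + 2.0 * beta * E

sgn = np.sign(np.diff(E))
turns = np.where(sgn[1:] != sgn[:-1])[0] + 1         # fold of the 0-branch, if any
if turns.size < 2:
    print("no fold on the 0-branch for this value of a; take a closer to 1")
else:
    E_bar_c, E_0 = E[turns[0]], E[turns[1]]
    for E_t in np.linspace(E_0, E_bar_c, 22)[1:-1]:
        near = np.abs(E - E_t) < 3e-4 * E_t
        if near.any():
            best = np.argmax(np.where(near, S, -np.inf))
            print(f"E = {E_t:.6f}   -beta/(8*pi) = {-beta[best] / (8.0 * np.pi):.6f}")
```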
§.§ High energy phase transitions
In this subsection we show that for any N≥ 2
there exist disconnected sets for which
there are N first order phase transitions of the entropy,
in the large energy region.
We consider small deformations of disks as follows.
Let η>0 be a small parameter, set ε = η^{1/2},
and consider the domain Λ_a,η
obtained by transforming the unit
circle with the conformal map
ℂ ∋ z → z + ε z^3 ∈ ℂ, and then scaling so that
the area becomes a.
In the following proposition we give the thermodynamic quantities defined
in (<ref>) for the deformed domain Λ_a,η
(see the proof in Appendix <ref>).
For small η, Λ_a,η is a first kind domain,
with
e_a,η(μ) = e_η(μ) = (1/(8π))( (1/μ^2)(-μ - log(1-μ)) - ητ(μ) + o(η) ),
where τ(μ) = 2/(1-2μ/3)^2,
z_a,η(μ) = (a/(1-μ))( 1 + ηζ(μ) + o(η) ),
where ζ(μ) = 6(1-2μ+2μ^2/3)/(1-2μ/3)^2.
We consider Λ = ⋃_i=1^N Λ_a_i,η_i,
with area a_i near 1 and decreasing
with respect to i, and parameters η_i small.
We proceed with the construction of the branch of the MFE solutions
in Λ as in Proposition <ref>,
by choosing γ small.
For any i we have to solve
μ_i (1-μ_i) = γ a_i (1+η_i ζ(μ_i)+o(η_i))
which has two solutions
μ_i^- = γ a_i (1+η_i) + o(γ)+o(η_i),
μ_i^+ = 1- γ a_i (1-3η_i) +
o(γ)+o(η_i).
We consider the branch B_i of solutions
by choosing μ_i = μ_i^+ and
μ_j = μ_j^- for all j≠ i.
Since β = -8π∑_h μ_h we have
S(ρ)
= -logγ +log(∑_h μ_h) - 16π(∑_h μ_h)
E(ρ).
In order to compare the entropy on the different branches we need
to express S(ρ) in terms of E= E(ρ).
We observe that
μ_j = μ_j^- = o(γ) for j≠ i, and that
the dependence
of μ_i (1-μ_i)
in γ is exactly linear, hence
log(1-μ_i^+) = -log μ_i +
log γ + log a_i + log( 1 + η_i ζ(μ_i^+) ) + o(η_i).
We have
8π( ∑_h μ_h )^2 E = 8πμ_i^2 e_η_i(μ_i) + o(γ)
= -1+(1-μ_i^+) +
log (1-(1-μ_i^+))
-logγ
- log a_i - log (1+η_i ζ(μ_i^+))
-η_i τ(μ_i^+)
+ o(γ) + o(η_i)
= -1 -logγ
- log a_i - η_i (ζ(μ_i^+) + τ
(μ_i^+)) + o(γ)+o(η_i)
= -1 -logγ
- log a_i - 36γ a_i η_i + o(γ)+o(η_i)
Then the entropy in terms of the energy E for the branch B_i is
given by
S_i = - 8π (1 - (1-∑_h μ_h)^2) E +1+log a_i
+γ∑_j a_j (1+6η_j) - 2γ a_i (1- 24η_i) +o(γ)+
∑_j o(η_j).
Since -logγ is of the order of E, we can drop
the term (1-∑_h μ_h)^2 E which is o(γ).
To compare S_i, i=1… N, we set a_i = 1 + α_i η,
η_i = q_i η, then
S_i + 8π E - 1 -γ∑_j a_j (1+η_j) +2γ
= η ( α_i + 2γ (24q_i-α_i)) + o(γ) + o(η).
By choosing suitable sequences of α_i and q_i
we obtain that,
as E increases (i.e. γ decreases),
the maximum of the entropies S_i is reached, in the order,
for i=N, N-1,N-2, … 1.
When S(E) = S_i we have
∂_E S(E) = β = -8π∑_h μ_h = 1
-γ + γ∑_j a_j(1+η_j) + γη(24q_i - α_i)
+ o(γ) + o(η),
then ∂_E S(E)
is discontinuous when S(E) goes
from S_i to S_{i+1}.
For any N, there exists a disconnected domain Λ
for which S(E) has N first order phase transitions,
for large values of E.
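The selection mechanism behind the corollary can be visualized with a few lines of code. The sketch below is purely illustrative: the sequences α_i and q_i are one admissible choice of ours, not the ones of the construction above. It evaluates, up to terms common to all branches, the leading behavior s_i(γ) = α_i + 2γ(24 q_i - α_i) and shows the maximizing index stepping through i = N, N-1, …, 1 as γ decreases, i.e. as the energy increases.
```python
import numpy as np

# Sequences below are an arbitrary admissible choice made for this illustration only.
alpha = np.array([0.0, -1.0, -2.0, -3.0])     # a_i = 1 + alpha_i * eta, decreasing in i
q = np.array([0.0, 30.0, 40.0, 45.0])         # eta_i = q_i * eta

def maximizing_branch(gamma):
    s = alpha + 2.0 * gamma * (24.0 * q - alpha)   # leading behavior of S_i, common terms dropped
    return int(np.argmax(s)) + 1                   # 1-based index of the dominant branch

for gamma in np.geomspace(1e-2, 1e-4, 13):
    print(f"gamma = {gamma:.2e}   dominant branch i = {maximizing_branch(gamma)}")
```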
§ N-DUMBBELL DOMAINS
The main result of this section is the construction
of connected bounded sets Ω for which
the entropy S_Ω(E) has first order phase transitions,
similar to that in
theorems <ref> and <ref>.
We obtain the result in a perturbative setting, for which we have
to fix the notation.
We recall that if ρ
is a solution of the MVP in an open bounded connected set Ω then,
by Theorem <ref>,
the stream function Ψ,
the inverse temperature β and the normalization Z
are uniquely defined.
Moreover, the function U = -βΨ
solves on Ω
-ΔU = λ e^U
with homogeneous Dirichlet condition on ∂Ω,
where λ = -β/Z
is a positive parameter for β < 0.
Conversely,
if U solves (<ref>),
we can define Z = ∫_Ω e^U, β = -Zλ,
and then the stream function Ψ = -U/β
solves the MFE with inverse temperature β.
In the sequel we will take for granted the relation between
ρ, Ψ, U, β, Z and λ.
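The dictionary between U, λ and the mean-field quantities is purely algebraic and is summarized in the following Python sketch (illustrative; the quadrature and the helper name are ours). As a check we feed it the explicit radial solution of Section 3 on the unit disk.
```python
import numpy as np

def mfe_quantities(U, lam, w):
    """From U (values at quadrature nodes with weights w) and lambda, recover rho, Psi, beta, Z."""
    Z = np.sum(w * np.exp(U))          # Z = int_Omega e^U
    beta = -lam * Z                    # lambda = -beta/Z
    Psi = -U / beta                    # stream function of the MFE at inverse temperature beta
    rho = np.exp(U) / Z                # rho = e^{-beta*Psi}/Z = e^U/Z
    return rho, Psi, beta, Z

# Check against the explicit radial solution on the unit disk (mu = 1/4, beta = -2*pi):
mu = 0.25
beta_exact = -8.0 * np.pi * mu
r = np.linspace(0.0, 1.0, 4001)[1:-1]
w = 2.0 * np.pi * r * (r[1] - r[0])                   # area elements of thin annuli
Psi = (2.0 / beta_exact) * np.log(1.0 + (beta_exact / (8.0 * np.pi)) * (1.0 - r**2))
U = -beta_exact * Psi
lam = 8.0 * mu * (1.0 - mu)                           # lambda = (8*pi/a)*mu*(1-mu) with a = pi
rho, Psi_rec, beta, Z = mfe_quantities(U, lam, w)
print(Z, np.pi / (1.0 - mu))                          # Z ≈ pi/(1-mu), as in Section 3
print(beta, beta_exact)                               # beta recovered from (U, lambda)
```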
Let Λ = ⋃_i=1^N Λ_i, where Λ_i are
smooth open connected bounded sets of first kind which do
not intersect each other, and let C_ε be the union of
channels of width ε>0 connecting all the disjoint sets
Λ_i.
We consider ε_n ↘ 0 and
a sequence of
N-dumbbell domains
Ω_n, i.e. smooth connected open sets such that
Λ ⊂ Ω_n ⊂ (Λ ∪ C_ε_n).
If Ω_n+1 ⊂ Ω_n for any n,
we say
that Ω_n is a decreasing sequence of N-dumbbell domains
converging to Λ, and we write Ω_n ↘ Λ.
We also suppose that
the measure
on the curve ∂Λ of
Ω_n ∩ ∂Λ vanishes as n → +∞.
Consider Ω_n↘Λ.
For any E ∈ (0, + ∞) and any sequence E_n → E, we have
S_Ω_n(E_n) → S_Λ(E).
Let ρ_n be a corresponding solution of the MVP in Ω_n.
Up to subsequences,
ρ_n converges, weakly in the sense of measure, to ρ which
is a solution of MVP in Λ
of energy E.
Moreover Ψ_n→Ψ
strongly in H^1,
β_n→β, and Z_n → Z.
We use the following lemma (see the proof of Proposition 2.1 in <cit.>).
Let ρ_n be a sequence of probability densities on an
open bounded set Ω,
with S_Ω(ρ_n) bounded from below and
converging weakly in L^1(Ω)
to a probability density ρ. Then
E_Ω(ρ_n) → E_Ω(ρ).
By Theorem <ref> (MVP), for any n ∈ ℕ,
S_Ω_n(E_n) is attained at some ρ_n, which solves
the MFE (<ref>) with inverse temperature β_n. Since we
have
S_Λ(E) ← S_Λ(E_n)
≤ S_Ω_n(E_n) = S_Ω_n(ρ_n) ≤ ln |Ω_n| → ln |Λ|,
we get that,
up to
subsequences, ρ_n converges weakly in L^1
to a probability density ρ.
Moreover, we have
S_Λ(E) ≤lim sup_nS_Ω_n(ρ_n )
≤𝒮_Λ(ρ). We now prove that
E(ρ) = E, so that
S_Λ(E) = S_Λ(ρ).
Let G_n(x,y) and G(x,y) be the Green functions on
Ω_n and Λ,
respectively, extended to zero on the complementary sets
Given k ∈, for any n > k and (x,y) ∈Ω_n×Ω_n we have that
G_n(x,y) ≤ G_k(x,y),
as follows from the positivity of G_k(x,y) and the maximum principle
applied to G_n-G_k.
Then
2E_n ≤∬_Ω_k×Ω_kρ_n(x) G_k(x,y) ρ_n(y).
By the lemma <ref>, for any k
2E ≤∬_Λ×Λρ(x) G_k(x,y) ρ(y).
For any x ∈Λ, the function y↦ G_k(x,y) - G(x,y)
is harmonic in Λ and
G_k(x,y) - G(x,y) = G_k(x,y) for any y ∈∂Λ.
Hence
G_k(x,y) - G(x,y) = - ∫_∂Λ
G_k(x,z) ∂ G/∂ν (z,y) dσ(z)
∀ ( x,y) ∈Λ×Λ
where ν is the outer normal to ∂Λ.
For any (x,z) ∈ Λ × ∂Λ the sequence
G_k(x,z) is decreasing and, by the construction of Ω_n,
vanishes as k → +∞, a.e. with respect to σ.
Therefore
G_k(x,y) - G(x,y) → 0 for any
(x,y) ∈Λ×Λ. Finally, by dominated convergence
2E ≤∬_Λ×Λρ(x) G_k(x,y) ρ(y)→∬_Λ×Λρ(x) G(x,y) ρ(y)
= 2 E_Λ(ρ).
To prove that E_Λ(ρ) ≤ E, we set
Ψ_n = (- Δ)^-1ρ_n and Ψ = ( - Δ)^-1ρ∈ H^1_0(Λ).
We have that Ψ_n ⇀Ψ
weakly in H^1, hence
E_Λ (ρ) =1/2∫_Λ | ∇Ψ |^2 ≤lim inf_n E_Ω_n (ρ_n) = E.
Since E= E_Λ(ρ) and S_Λ(E) ≤
S_Λ(ρ), we have that S_Λ(ρ) =
S_Λ(E), and consequently S_Ω_n(E_n) → S_Λ(E).
The convergence of the energy, i.e. of ‖Ψ_n‖_H^1_0,
assures that Ψ_n → Ψ strongly in H^1.
We now prove the convergence of β_n to β,
which is the inverse temperature for
the MFE (<ref>) for Ψ.
For any subsequence,
there exists a subsequence such that Ψ_k(x) →Ψ(x)
a.e. in Λ. Since Ψ is continuous, positive on Λ
and 0 in Λ,
the closed set C= Ψ^-1 ([0, E/2] ) has positive measure.
For any n ∈, set C_n = {x
∈Λ̅:
0≤Ψ_k(x) ≤ E_k, ∀ k ≥ n }∩ C. We have
C_n ⊆ C_n+1 and ⋃_n ∈ C_n = C
up to a set of zero measure, therefore |C_n| → |C| as n → + ∞.
Since ∫_C_nρ_k →∫_C_nρ,
for n sufficiently large,
for any k≥ n, we have
0 < c ≤∫_C_nρ_k ≤ 1.
Now we prove that Z_k lies in a compact subset of (0,+∞).
Note
that if β_k ≤ 0, Z_k ≥ |Ω_k|,
while if β_k > 0, Z_k ∈ (0,|Ω_k|),
and recall that
𝒮_Ω_k(ρ_k) = 2 β_k E_k+ ln Z_k.
If β_k ≤ 0 we have
c ≤∫_C_nρ_k≤e^- β_k E_k /Z_k |C_n | =
e^-1/2𝒮_Ω_k
(ρ_k) /Z_k^1/2 |C_n |
then Z_k is bounded from above. If β_k> 0
1 ≥∫_C_nρ_k≥e^- β_k E /Z_k
|C_n | = e^-1/2𝒮_Ω_k(ρ_k) /Z_k^1/2 |C_n |
then Z_k is bounded from below by a positive constant.
Consequently, up to subsequence, Z_k →Z̃∈ (0,+∞), and then,
from the relation with the entropy,
β_k →β̃∈.
Moreover
ρ_k(x) →e^- β̃ψ (x)/Z̃ =
ρ(x) = e^- βψ (x)/Z a.e. in
Λ which implies β̃ = β and
Z̃ = Z.
Now we consider Λ the union of N disjoint disks,
as in Section <ref>.
We prove that for n large
the N-dumbbell
domains Ω_n have branches of
solutions of the MFE close to those obtained for the disconnected
set Λ. The proof can be easily adapted to the case of the
deformed disks considered in Theorem <ref>.
We first consider the solutions of (<ref>).
Note that
the inverse temperature β is a function of
λ and of the solution U,
hence different solutions of (<ref>) with the same λ provide
solutions of the MFE with different β.
Also the energy does not depend only on the parameter λ,
and its expression in terms of U is given by
E = (1/(2λ^2 Z^2)) ∫ |∇U|^2 =
(1/(2λ Z^2)) ∫ U e^U.
To clarify
what can happen,
recall that we have constructed the branches of solutions
in the parameter μ∈ [0,1]. Using the relation
between Z and μ in (<ref>)
(and in (<ref>) for the case of merged branches),
we have that
λ = (8π/a_1) μ(1-μ),
hence, in particular, λ
is proportional to γ
in (<ref>).
The parameter λ has the maximum value
λ_c = 2π/a_1. The values μ∈ (0,1/2)
describe the right part of the branches in the plane (β,E),
while μ∈ (1/2,1) describe the left part.
Therefore, for fixed λ∈ (0,λ_c),
there exist two solutions of
eq. (<ref>)
on any branch.
appear in the (λ,E) plane.
We consider N=3, and different values of a_2 and a_3,
(see figure <ref> and <ref>
for the analogous figures
in the (β,E) plane).
It is clear that it is convenient to parametrize
the solutions of
the MFE (<ref>) on Λ on a
k- branch (or on a k-merged-branch)
as {Ψ_μ}_μ∈ (0,1).
We set
λ(μ) = - β(μ) /Z(μ) =
8π/a_1 (1-μ)μ = - M_i β/Z_i,
and
U_μ = - β(μ) Ψ_μ.
We have the following result.
Let Ω_n↘Λ, as n → + ∞.
For any compact set W ⊂ (0, 1) ∖{1/2}
there exists r_W >0 such that
for any r∈(0,r_W) there exists n_r such that
for any n ≥ n_r and
μ∈ W there exists
U^n_μ∈
H^1_0 (Ω_n) which is the unique solution in
B_r(U_μ) of (<ref>) in Ω_n
with λ = λ(μ).
We set U^(i)_μ= U_μ|_Λ_i, for i =1, …, N.
We have
- Δ U^(i)_μ = λ(μ) e^U^(i)_μ
in Λ_i.
Consider the linear operator
T^(i)_μ: H^1_0(Λ_i ) → H^-1 (Λ_i )
given by
⟨ T^(i)_μ f, g ⟩ = ∫_Λ_i∇ f ·∇ g- λ(μ) ∫_Λ_ie^ U^(i)_μ f g , ∀ f, g ∈ H^1_0(Λ_i ).
By <cit.> we know that
if μ∈ W, Ker T^(i)_μ = {0},
and if μ_n ∈ W then U^(i)_μ_n→ U^(i)_μ_0
in C^2+α(Λ_i) ∩ C(Λ̅_i)
(up to subsequence) for some μ_0 ∈ W.
We extend U_μ to zero in the complementary of Λ,
and we define T^n_μ : H^1_0(Ω_n ) → H^-1 (Ω_n )
as follows
⟨ T_μ^n f, g ⟩ = ∫_Ω_n∇ f ·∇ g -λ(μ)
∫_Ω_ne^U_μ f g
= ∫_Ω_n∇ f ·∇ g - λ(μ) ∑_i=1^N ∫_Λ_ie^U^(i)_μ f g -
λ(μ) ∫_Ω_n∖Λ fg.
We now prove that
there exist n_W>0 and C_W>0 such that
for any n≥ n_W and any
μ∈ W
‖ T_μ^n f ‖_H^-1 ≥ C_W
‖ f ‖_H^1_0 ∀ f ∈ H^1_0(Ω_n).
If not, there exist a diverging
sequence n_k ∈ ℕ, a sequence μ_k → μ_0 ∈ W,
and h_k ∈ H^1_0(Ω_n_k) with
‖ h_k ‖_H^1_0 = 1, such that
‖ T^n_k_μ_k h_k ‖_H^-1 → 0.
Then, up to subsequences,
h_k ⇀ h_0
weakly in H^1 and h_k → h_0 strongly in L^2.
Moreover,
since Int(⋂_k=1^+∞ Ω_n_k) = Λ,
we have that
h_0 ∈ H^1_0(Λ) and
⟨ T_μ_k^n_k h_k, g ⟩ = ∫_Ω_n_k∇ h_k ·∇ g -
λ(μ_k)
∑_i=1^N ∫_Λ_ie^U^(i)_μ_k h_k g -
λ(μ_k) ∫_Ω_n_k∖Λ h_k g
→∑_i=1^N ⟨ T^(i)_μ_0 h_0, g ⟩
for any g ∈ H^1_0(Λ). Hence, setting
h^(i)_0 = h_0|_Λ_i∈ H^1_0 (Λ_i) we have
h^(i)_0 ∈Ker T^(i)_μ_0 = {0}.
We get a contradiction, since
1
= ‖ h_k ‖^2_H^1_0 = ⟨ T_μ_k^n_k h_k, h_k ⟩ +
λ(μ_k) ∫_Ω_n_k e^U_μ_k
|h_k|^2 ≤ ‖ T_μ_k^n_k h_k ‖_H^-1 +
C ‖ h_k ‖^2_L^2 → 0.
As a consequence, for all n> n_W, we have Ker T_μ^n ={0}
and T_μ^n is invertible,
since T_μ^n ^-1 is a Fredholm operator in H^-1.
In particular, the inverse S^n_μ is uniformly bounded by C_W^-1
for μ∈ W.
Now,
for any n>n_W we consider
the C^1-map F^n_μ : H^1_0 (Ω_n) → H^-1 (Ω_n)
given by
⟨ F^n_μ (v), h ⟩ =
∫_ Ω_n∇ ( U_μ + v) ·∇ h - λ(μ) ∫_ Ω_ne^U_μ + vh
∀ v,h ∈ H^1_0 (Ω_n).
Clearly
F^n_μ (0) = T^n_μ and
⟨ F^n_μ (0), h ⟩ = ∫_ Ω_n∇ U_μ·∇ h - λ(μ) ∫_ Ω_ne^U_μh
= ∫_ (Ω_n∖Λ) ∩∂Λ∂/∂ν U_μ h - λ(μ)
∫_ Ω_n∖Λh.
Since U_μ∈ H^2(Λ) we have in particular
∂/∂ν U_μ|_∂Λ∈
L^2(∂Λ) and since
|∂(Ω_n∖Λ) ∩∂Λ |
→ 0 we get F^n_μ(0) _H^-1→ 0 as n→ +∞,
uniformly in μ∈ W.
We are interested in the zeros of F_μ^n, or equivalently,
the fixed points of the smooth map G_μ^n : H^1_0 (Ω_n)
→ H^1_0 (Ω_n)
with
G_μ^n (v)= v - S_μ^n F_μ^n(v).
We have G_μ^n (0) _H^1_0≤ C_W^-1 F_μ^n (0) _H^-1→ 0,
as n →∞, and G_μ^n(0) = 0;
moreover, by the Moser-Trudinger inequality, for any v ∈ H^1_0
(Ω_n)
G_μ^n(v) = G_μ^n(v) - G_μ^n(0) =
S_μ^n ( F_μ^n (v) - F_μ^n (0))
≤ C_W^-1 F_μ^n (v) - F_μ^n (0)≤ C e^U_μ( e^v -1) _L^2
≤ C v e^ |v|_L^2≤
C v _L^4e^|v|_L^4≤ C v_H^1_0e^1/4 π v ^2_H^1_0.
for some constant C>0 (independent on n and μ∈ W).
Hence there exists r_W >0 such that G_μ^n(v) < 1
in B_r = {v_H^1_0≤ r } for any r≤ r_W.
Since ‖ G_μ^n(0) ‖_H^1 → 0 uniformly in W,
for any r∈ (0,r_W) there exists n_r, independent
of μ∈ W, such that for any n ≥ n_r the map G_μ^n is a strict
contraction in B_r.
Therefore there exists (unique)
v_μ^n ∈ B_r, fixed point for G_μ^n.
Namely U^n_μ = U_μ + v_μ^n solves
(<ref>), uniquely
in B_r(U_μ), and v_μ^n→ 0 in H^1_0
as n→ +∞, uniformly for μ∈ W.
The following lemma allow us to re-parametrize in terms of the energy
the branch of solutions
{ U_μ^n}_μ∈ W on Ω_n.
Let W be a closed interval in (0,1)∖{1/2},
and {U_μ}_μ∈ W a branch of solutions for the MFE on Λ.
Let
E(μ) be the energy of U_μ.
Assume that
E'(μ)≠ 0 for μ∈ W, and denote by μ(E) its inverse.
Let {U^n_μ}_μ∈ W be the solutions of
(<ref>) in Ω_n
given by Proposition <ref>.
Then, for any n sufficiently large
there exists a C^1 function μ^n(E) such
that the energy in (<ref>) associated to
U^n_μ^n(E) is E.
Moreover μ^n(E) converges uniformly with its derivative
to μ(E), the
entropy associated to U^n_μ^n(E)
is a C^2 function on E, and converges uniformly
with its derivatives to the entropy associated to U_μ(E).
We consider the C^1 map (μ,v) ↦ F_μ^n(v)
defined in (<ref>).
By Proposition <ref>, μ ↦ v_μ^n = U^n_μ - U_μ
is the curve of solutions of F_μ^n(v)=0 obtained via the
implicit function theorem, hence it is a C^1 function
from W to H^1.
We now show that ∂_μ v_μ^n → 0 in H^1_0, uniformly in W.
By the implicit function theorem
∂_μ v_μ^n = - (D_v F_μ^n(v_μ^n))^{-1}[ (∂_μ F_μ^n)(v_μ^n) ].
Since the operator (D_v F_μ^n(v_μ^n))^{-1} : H^-1(Ω_n) →
H^1_0(Ω_n)
is uniformly bounded, we have to prove that
‖ (∂_μ F_μ^n)(v_μ^n) ‖_H^-1 vanishes uniformly in W.
For h∈ H^1_0(Ω_n) we have
⟨ (_μ F^n_μ) (v_μ^n), h ⟩ =
∫_ Λ∇_μ U_μ·∇ h - λ'(μ) ∫_ Ω_ne^U_μ + v_μ^nh
-λ(μ) ∫_ Ω_ne^U_μ + v_μ^n_μ U_μ h
=
∫_Ω_n_μ (λ(μ)^U_μ)(1-^v_μ^n) h
- λ'(μ) ∫_Ω_n ∖Λ h +
∫_(Ω_n ∖Λ) ∩Λ_ν (_μ U_μ) h.
Proceeding as in the proof of Proposition <ref>
(see the L^2 estimate of ^v -1 in (<ref>),
and the proof that F^n_μ (0)_H^-1→ 0
after (<ref>))
we obtain that
(_μ F_μ^n)(v_μ^n)_H^-1 vanishes uniformly in W.
As a consequence, Z^n(μ) = ∫_Ω_n e^U_μ^n
is a regular function of μ,
and ∂_μ Z^n(μ) converges uniformly in W to
∂_μ Z(μ).
We also note that
_μ∫_Ω_n | U_μ^n|^2
= 2_μ Z^n(μ)
We denote with E^n(μ) the energy of U^n(μ).
Since λ(μ) and Z^n(μ) do not vanish in W,
from
(<ref>)
we get that _μ E^n(μ) converges uniformly in W
to _μ E(μ)≠ 0, hence for n sufficiently large
also E^n(μ)
is invertible, with inverse in C^1.
Recalling that
_E S = β,
we conclude the proof by noticing that
β^n(E) = - λ(μ^n(E)) Z^n(E)
is a C^1 function.
In the hypothesis of lemma <ref>
we can define U(E) = U_μ(E) and,
for n sufficiently large, also
U^n(E) U^n_μ^n(E).
The domain of definition of U(E) and U^n(E) are not
the same, but,
from Proposition <ref>, we get the following result.
In the hypothesis of lemma <ref>,
for any closed interval I
contained in the interior of K={E(μ): μ∈ W},
there exists r_I>0
such that for any r∈ (0,r_I) there exists n_r such
that for any n≥ n_r, and for any E∈ I,
U^n(E) is the unique solution
of (<ref>) with energy E in B_r(U(E)).
We now show that there are bounded connected sets
for which the entropy has a first order phase transition.
We use the perturbative construction in Proposition
<ref>,
applied to the case of disks of different area, in
the hypothesis of Theorem <ref>,
for which the transition occurs for some E_*.
The construction can be adapted also to the case of Theorem <ref>.
Consider Λ the
union of N ≥ 3 disjoint disks with areas 1 = a_1 > a_2 ≥ … ≥ a_N,
with a_i sufficiently close to 1 as in Thm. <ref>.
If Ω_n ↘ Λ,
for n sufficiently large
there exists E_*^n such that, for E in a neighborhood of E_*^n,
the MVP in Ω_n has a unique solution
if E ≠ E_*^n,
has two solutions for E = E_*^n,
and
S_Ω_n(E)
has a first order phase transition at E = E_*^n.
We fix a closed interval I⊂ (E_0,E̅_c),
where E_0 and E̅_c are defined in the proof of
Theorem <ref>
(see the graph on the right in Fig. <ref>).
We also assume that I contains
E_* in its interior.
We indicate with
{ρ^σ(E)}_E∈ I, σ=0,1,
the solutions of the MVP on the 0-branch,
with E<E_* and E>E_* respectively.
and we indicate with Ψ^σ(E),
β^σ(E), Z^σ(E), λ^σ(E),
U^σ(E) the related functions and parameters
in the MFE (<ref>) and (<ref>).
For any E∈ I,
let M̃_n(E) ⊂ P_Ω_n,E be
the set of the solutions of the MVP in Ω_n with energy E,
and let M_n(E) ⊂ H^1_0(Ω_n) be the corresponding subset of
the functions U.
We claim that
for any r >0 there exists n_I,r∈
such that for any n ≥ n_I,r and any E ∈ I
M_n(E) ⊂ B_r(U^0(E)) ∪ B_r(U^1(E)).
If not, there exist sequences
E_k∈ I and ρ_k∈ P_Ω_k,E_k,
whose corresponding U_k is not in
B_r(U^0(E_k)) ∪ B_r(U^1(E_k)).
There exists a subsequence such that E_k→E̅∈ I,
and,
by Theorem <ref>, ρ_k →ρ which
solves the MVP in Λ with energy E̅∈ I.
Since the solution of the MVP
with energy E̅ corresponds only to
U=U^0(E̅) or U=U^1(E̅), we get a contradiction.
Now we prove that for sufficiently large n, the functions
in M_n(E) are the solutions of MFE given by
Proposition <ref>.
Namely, choosing r<r_I, for n≥ n_r as in Corollary <ref>,
for any E in I there exists a unique solution
U^n,σ(E) of the MFE on Ω_n
with energy E in B_r(U^σ(E)).
Therefore
M_n(E)⊂{U^n,0(E), U^n,1(E)}.
The existence of the (unique) value E_*^n for which the
MVP has two solutions easily follows from the fact that,
as proved in Lemma <ref>,
the entropy associated to U^n,σ(E) is a C^2 function of
E and converges uniformly with its derivative to
the entropy of U^σ(E).
§ DEFORMED CIRCLES
We prove eq.s (<ref>), recalling
that the function e(μ) is invariant
under scaling, while z(μ) is linear in the area.
So we have to consider the deformation Λ_η
of the circle D of radius 1
under the conformal map
z → z + ε z^3, with ε^2 = η.
The determinant of the Jacobian of this map is
J_ε = 1 + ε a + ε^2 b, with a = 6(x^2-y^2)
and b = 9(x^2+y^2)^2.
In order to find e(μ) and z(μ),
we first calculate the free energy,
noting that, for β<0,
F(β) = sup_ρ G(β,ρ),
where
G(β,ρ) =
-E(ρ) - (1/β) log Z(β,ρ), Z(β,ρ) =
∫_Λ_η e^{-βΨ},
(see Section 8 of <cit.>).
In fact the Euler-Lagrange equation for the
variational principle is the MFE equation,
and, if ρ is a solution of the MFE equation then
G(β,ρ) = - E(ρ) - 1/βlog Z(β,ρ) =
- E(ρ) -1/β( S(ρ) - 2β E(ρ))
= E(ρ) -1/β S(ρ)
= F(β,ρ)= F(β).
The advantage of this formulation is that
the expression of the energy is invariant under conformal transformations,
hence we maximize G over ρ supported on Λ_η
if we maximize, in Φ defined on D, the functional
I_ε(β,Φ) = -(1/2)∫_D |∇Φ|^2
- (1/β) log ∫_D J_ε e^{-βΦ}.
The corresponding
Euler-Lagrange equation is
-ΔΦ_ε = (J_ε/Z_ε) e^{-βΦ_ε}, Z_ε = ∫_D J_ε e^{-βΦ_ε},
with homogeneous Dirichlet boundary conditions on ∂D.
For ε = 0 we clearly get the solution of the MFE on D,
which
we call Φ, with associated density ρ and normalization
Z.
We denote
Φ' = d/dε|_ε=0 Φ_ε,
Φ'' =
d^2/dε^2|_ε=0 Φ_ε.
The equation for Φ' is
-ΔΦ' = ρ(a - βΦ') - ρ C,
where
C = ∫_D ρ(a - βΦ') = -β∫_D ρΦ',
since ρ is radial and a = 6r^2 cos(2ϑ) in polar coordinates.
Then, by eq. (<ref>) we have
d/dε|_ε=0 I_ε(β,Φ_ε)
= ∫_D ( ∇Φ·∇Φ' + ρΦ' + (1/β)ρ a
) = 0.
The second derivative at ε = 0 is given by
I'' ≔ d^2/dε^2|_ε=0 I_ε(β,Φ_ε)
= -∫_D |∇Φ'|^2 - ∫_D ∇Φ·∇Φ''
- (1/β)∫_D ρ( 2b - βΦ'' - β a Φ' - βΦ'(a - βΦ') )
+ (1/β) C^2.
The terms in Φ'' vanish, since Φ solves the MFE.
Moreover
-∫_D |∇Φ'|^2 = ∫_D ΔΦ' Φ' =
-∫_D ρΦ'(a - βΦ') + C ∫_D ρΦ',
and C = -β∫_D ρΦ'. Therefore
I'' = -(1/β)∫_D ρ(2b - β aΦ') +
C∫_D ρΦ' +
(1/β) C^2 = -(1/β)∫_D ρ
(2b - β a Φ').
In order to compute I'' we have to find Φ'.
By noticing that a is harmonic,
we can solve the equation for Φ' by searching for a solution
of the form
Φ' = a/β + ξ(r) cos(2ϑ),
where (r,ϑ) are polar coordinates,
with the boundary condition ξ(1) = -6/β =
3/(4πμ).
We
set p = μ/(1-μ), and we note that
ρ = (1/(π(1-μ))) (1/(1+pr^2)^2).
We search for ξ(r) = c α(pr^2).
The equation for α in the variable
s = pr^2
is
(1+s)^2 ( (s∂_s)^2 α - α ) = -2sα.
There exists only one solution, bounded at 0 and with α(1) = 1,
given by
α(s) = s/(1+s) + s/2.
Then
ξ(r) = (3/(4πμ α(p))) α(pr^2).
Computing Φ' and inserting its expression in (<ref>)
we finally get
F_Λ_η(β) = F(β) - η (1/β) g(β) +
o(η),
where
F(β) is the free energy for D, and the correction
g is given by
g(β) = 6(1-μ)/(1-2μ/3)
(we scaled η by the constant 6, for simplicity of notation).
Now we have to find the relation between E, β, S.
We first define
f_η(β) = β F_η(β) = f(β) - η g(β)
+ o(η).
The entropy is given by
S_η(E) = inf_β ( βE - f_η(β) ),
where the infimum is attained at
E_η(β) = f'_η(β) = f'(β) - η g'(β) + o(η),
from which we get the expression of e_η(μ)
in (<ref>), where τ = 8π ∂_β g = -∂_μ g.
The entropy as a function of β is
S_η(β) = S(β) + η( g - β∂_β g ) + o(η),
where S(β) is the value for η = 0, i.e. the case
of the disk.
Denoting by Z(β) the normalization factor for the disk,
we have
log Z_η(β) = S_η(β) - 2βE_η(β)
= log Z + η( g + β∂_β g ) + o(η).
Since β∂_β g = μ∂_μ g,
we obtain the expression of
z_1,η in (<ref>), where ζ(μ) = g + μ∂_μ g.
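The closed forms of τ and ζ can be double-checked numerically; the short Python sketch below (a consistency check of ours, not part of the proof) differentiates g(μ) = 6(1-μ)/(1-2μ/3) by finite differences and compares -∂_μ g and g + μ∂_μ g with the expressions used in Section 4.2.
```python
import numpy as np

# Consistency check: with g(mu) = 6*(1-mu)/(1-2*mu/3) one should have
# tau = -dg/dmu = 2/(1-2*mu/3)^2  and  zeta = g + mu*dg/dmu = 6*(1-2*mu+2*mu^2/3)/(1-2*mu/3)^2.
g = lambda m: 6.0 * (1.0 - m) / (1.0 - 2.0 * m / 3.0)
tau = lambda m: 2.0 / (1.0 - 2.0 * m / 3.0) ** 2
zeta = lambda m: 6.0 * (1.0 - 2.0 * m + 2.0 * m**2 / 3.0) / (1.0 - 2.0 * m / 3.0) ** 2

mu = np.linspace(0.05, 0.95, 19)
h = 1e-6
dg = (g(mu + h) - g(mu - h)) / (2.0 * h)            # numerical derivative of g
print(np.max(np.abs(-dg - tau(mu))))                # ~ 0
print(np.max(np.abs(g(mu) + mu * dg - zeta(mu))))   # ~ 0
```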
§ REFERENCES
B D. Bartolucci,
Global bifurcation analysis of mean field
equations and the Onsager microcanonical description
of two-dimensional turbulence
Calculus of Variations and PDE 58, 1, 18 (2019).
BaLin D. Bartolucci, C.S. Lin
Existence and uniqueness for Mean Field Equations on
multiply connected domains at the critical
parameter
Mathematische Annalen (2013) DOI: 10.1007/s00208-013-0990-6
BDM D. Bartolucci, F. De Marchis,
Supercritical Mean Field Equations on convex domains and
the Onsager's statistical description of two-dimensional turbulence
Archive for Rational Mechanics and Analysis, 217, 2, 525–570
(2015).
BJLY D. Bartolucci, A. Jevnikar, Y. Lee & W. Yang Non-degeneracy, Mean Field Equations and the Onsager Theory of 2D Turbulence Archive for Rational Mechanics and Analysis 230, 397–426 (2018).
BaMa D. Bartolucci, A. Malchiodi
Mean field equations and domains of first kind
Rev. Mat. Iberoam. 38, no. 4, pp. 1067–1086 (2022).
CLMP1 E. Caglioti, P.L. Lions, C. Marchioro, M. Pulvirenti,
A Special Class of Stationary Flows for Two-Dimensional Euler Equations:
A Statistical Mechanics Description. Part I,
Comm. Math. Phys. 143, 501–525 (1992)
CLMP2 E. Caglioti, P.L. Lions, C. Marchioro, M. Pulvirenti,
A Special Class of Stationary Flows for Two-Dimensional Euler Equations:
A Statistical Mechanics Description. Part II,
Comm. Math. Phys. 174, 229–260 (1995)
CCLin S.Y.A. Chang, C.C. Chen, C.S. Lin
Extremal functions for a mean field equation in two dimension
Lecture on Partial Differential Equations, New Stud. Adv. Math., 2,
61–93. Int. Press, Somerville (2003)
EGP P. Esposito, M. Grossi, A. Pistoia
On the existence of blowing- up solutions for a mean field equation
Ann. I. H. Poincaré AN 22
(2005) 227–257.
ES G. Eyink, and H. Spohn, Negative temperature states and large-scale, long-lived vortices in
two-dimensional turbulence,
J. Statist. Phys. 70, 833–886 (1993).
K Kiessling, M.K.-H.: Statistical mechanics of classical
particles with logarithmic interactions. Comm. Pure Appl. Math.46(1),
27–56 (1993)
KL Kiessling, M.K.-H., Lebowitz, J.L.: The micro-canonical
point vortex ensemble: beyond equivalence. Lett. Math. Phys. 42(1),
43–58 (1997)
MJ Montgomery, D., Joyce, G.: Statistical mechanics
of negative temperature states. Phys. Fluids17, 1139–1145 (1971)
LP Lundgren, T.S., Pointin, Y.B.: Statistical
mechanics of two-dimensional vortices in a bounded container.
Phys. Fluids19, 1459–1470 (1976)
O L. Onsager, Statistical hydrodynamics,
Nuovo Cimento 6(2), 279–287 (1949).
S T. Suzuki, Global analysis for a two-dimensional
elliptic eigenvalue problem with the exponential nonlinearity,
Ann. Inst. H. Poincaré
Anal. Non Linéaire 9(4), 367–398 (1992).
|
http://arxiv.org/abs/2307.10808v2
|
20230705072423
|
Claim Reserving via Inverse Probability Weighting: A Micro-Level Chain-Ladder Method
|
[
"Sebastian Calcetero-Vanegas",
"Andrei L. Badescu",
"X. Sheldon Lin"
] |
econ.EM
|
[
"econ.EM",
"stat.AP"
] |
Claim reserving is primarily accomplished using macro-level models, with the Chain-Ladder method being the most widely adopted method. These methods are usually constructed heuristically and rely on oversimplified data assumptions, neglecting the heterogeneity of policyholders, and frequently leading to modest reserve predictions. In contrast, micro-level reserving leverages on stochastic modeling with granular information for improved predictions, but usually comes at the cost of more complex models that are unattractive to practitioners. In this paper, we introduce a simple macro-level type approach that can incorporate granular information from the individual level. To do so, we imply a novel framework in which we view the claim reserving problem as a population sampling problem and propose a reserve estimator based on inverse probability weighting techniques, with weights driven by policyholders' attributes. The framework provides a statistically sound method for aggregate claim reserving in a frequency and severity distribution-free fashion, while also incorporating the capability to utilize granular information via a regression-type framework. The resulting reserve estimator has the attractiveness of resembling the Chain-Ladder claim development principle, but applied at the individual claim level, so it is easy to interpret and more appealing to practitioners.
Claim reserving, Survey Sampling, Inverse Probability Weighting, Chain-Ladder, Survival modeling
§ INTRODUCTION
Claim reserving is a crucial aspect of insurance and risk management, and is vital for ensuring solvency, assessing risk, and setting appropriate premiums. The insurance industry employs several types of reserves but is primarily interested in the reserve for outstanding claims, which cover the estimated costs of unsettled and non-reported claims, representing the insurer's liability for future payments related to already occurred accidents. This reserve can be split into subcomponents depending on the source of the claim i.e. whether it is from a reported claim or not, and it is of interest to create this distinction for accounting purposes. These reserves play a vital role in maintaining financial stability and ensuring the availability of funds for future claim payments.
Reserving in general insurance is one of the most studied problems in actuarial research. See for e.g. <cit.> for an extended list of most of the research covering this topic. <cit.> and references therein provide a detailed overview of methods used in insurance reserving. Briefly, two primary approaches to reserving, namely micro-level and macro-level approaches, have been widely studied in the actuarial literature. The theoretical foundations of these methods can be found in the literature of stochastic claim reserving, see <cit.>.
On one hand, the macro-level approach to reserving focuses on estimating claim payments at an aggregate level. Among the macro-level approaches, Chain-Ladder-based techniques are widely employed in the insurance industry due to their ease of implementation, interpretation, and reliance on intuitive assumptions, see for e.g <cit.>, <cit.>, <cit.>. These methods avoid the use of very complex mathematical concepts, such as predictive models or stochastic processes, instead relying on simple operations that can be implemented using simple spreadsheets. Additionally, they only require estimating development factors, which can be easily obtained from aggregate data without the need for specialized software. Consequently, the Chain-Ladder method and it's variants are favored by insurance companies and regulators, with more than 90% of insurers relying on them as their primary reserving methods (<cit.>).
However, these aggregate methods overlook the actual composition and heterogeneity of the insurance portfolio. The Chain-Ladder assumes homogeneity among claims within a given group, disregarding valuable insights that can be gained by considering factors, such as attributes associated with the risk of each policyholder ( <cit.> ). In fact, the most recent literature on claim reserving (e.g <cit.> and literature therein ) highlights the importance of using all the information available (i.e the granular data) for the estimation of accurate reserves, and how ineffective is ignoring it. Consequently, the Chain-Ladder and similar macro-level models exhibit clear limitations and modest accuracy of estimation of the reserves when compared to models that do account for granular information i.e micro-level models.
On the other hand, the micro-level approach to reserving involves estimating individual claim payments by considering detailed characteristics, such as policyholder information, claim type, severity, and other relevant factors ( <cit.> ). Micro-level reserving methods utilize probabilistic models that directly capture the behavior of policyholders and their impact on reserves, resulting in accurate forecasts. See for e.g <cit.>, <cit.>, <cit.>, <cit.> for various modeling examples.
The micro-level models pose challenges in terms of complexity, portfolio heterogeneity, and size, making them difficult to implement in practice. These models incorporate both stochastic and predictive modeling, adding layers of complexity that may hinder their use in practice. Moreover, micro-level models often require assumptions about model components, such as distributions and simplifications of reality, which raise concerns about their validity. Consequently, these models are not widely adopted by actuarial practitioners due to the additional implementation effort required, and the lack of consensus on modeling practices from a regulatory standpoint, even though these provide more reliable estimations. Indeed, according to <cit.> and related studies, micro-level reserving methods are virtually absent among insurance companies worldwide, with almost no one utilizing them, either as their primary method or for internal check-up purposes.
A main obstacle to the consideration of micro-level reserving by practitioners and regulators is the significant disparity in methodologies with respect to macro-level models, in addition to the associated effort required for their construction. Macro-level models, such as the Chain-Ladder, differ significantly from micro-level models in terms of how the reserve estimation is derived. Consequently, the transition from a macro-level to a micro-level model represents a substantial and challenging undertaking for any insurance company. Furthermore, regulators face difficulties in validating and accepting a micro-level model when its underlying principles deviate significantly from the familiar idea of Chain-Ladder and the construction of the reserves via development factors. Therefore, the substantial gap between these two key reserving methodologies hinders the adoption of micro-level modeling in the insurance industry.
In this paper, we focus on bridging the gap between macro-level and micro-level models by providing a methodology that enables the use of individual information in a macro-level model, and therefore improve its performance while retaining most of its simplicity and interpretability. To do so, in this paper, we consider a novel approach to claim reserving by viewing the problem as a survey sampling problem. By treating the reported claims as a sample from a larger population of claims, we develop a statistically sound macro-level approach based on an inverse probability weighting (IPW) method. In this way, the newly proposed methodology accommodates for the introduction of individual claim information via a regression-like model on the sampling probabilities, similar to how it is achieved to propensity scores.
One of the main strengths of the IPW method for claim reserving is its distribution-free approach to the estimation of the reserve, as there is no need to specify a model for either the claim arrival process (frequency) or the claim amounts (severity). Indeed, the IPW estimator only requires the modeling of the development of the claims (i.e. reporting and payment delays), just as is the case for traditional aggregate models. As a result, the modeling efforts are focused only on estimating claim-specific inclusion probabilities based on the observed distribution of the delays, which simplifies the modeling when compared to other reserving techniques.
Another attractive feature of the IPW estimator is that it exhibits a functional form reminiscent of the Chain-Ladder method and its development factors. However, it distinguishes itself by having claim-specific factors that depend on the attributes of the claims. As a result, our methodology can be viewed as a “micro-level version" of the Chain-Ladder, where the development of each claim up to its ultimate value is performed at the individual level. Hence, our proposed approach can be seen as an extension of traditional aggregate methods, tailored to incorporate individual claims information in a statistically justified manner and in a friendlier fashion than other methods. It is important to note that our approach is motivated independently of the Chain-Ladder method and differs from other attempts to account for heterogeneity in macro-level models, such as <cit.> or <cit.>. In these approaches, a classification of claims into homogeneous classes is conducted, followed by the application of the Chain-Ladder method within each class. In contrast, our methodology seeks to integrate individual claims information without relying on such classification procedures or applying the run-off triangle development principle.
The IPW method represents an improvement over aggregate claim reserving models based on the Chain-Ladder, while providing a cost-effective alternative to traditional micro-level reserving models. It maintains the desirable practicality and interpretability of macro-level models, making it a more appealing choice for both practitioners and regulators. This approach may serve as an initial step to encourage practitioners, who typically rely on macro-level models, to explore the potential benefits and insights obtained from incorporating individual information in the reserving process. Ultimately, it paves the way for practitioners and regulators to consider tailored-made models based on micro-level techniques.
This paper is structured as follows: Section <ref> introduces the reserving problem as a sampling problem, and shows the derivation of IPW estimator for the outstanding claims. Section <ref> extends the methodology to consider other types of reserves, such as the incurred but not reported (IBNR) and the reported but not settled (RBNS) reserves. Section <ref> discusses how to estimate the required inputs of the model. Section <ref> provides a numerical study on a real insurance dataset. Lastly, Section <ref> provides the conclusion and future research directions.
§ CLAIM RESERVING VIA INVERSE PROBABILITY WEIGHTING
In this section, we present the claim reserving problem and demonstrate how it can be effectively tackled using inverse probability weighting methods. Since there are various types of reserves in general insurance, in this section we provide the overall idea of the methodology for the total reserve of outstanding claims only. Section <ref> will delve into the specific details of the methodology for the most prevalent and significant reserves in general insurance, namely RBNS and IBNR reserves.
§.§ The claim reserving problem
Suppose an insurance company is analyzing its total liabilities associated with claims whose accident times occur between t=0 and t=τ, where τ is the valuation time of analysis as defined by the actuary. In general insurance, accidents are often not immediately reported to the insurance company for various reasons, resulting in a significant delay between the occurrence of a claimable accident and the time the insurance company is notified. Therefore, at a given valuation time τ, the insurance company only has information on the claims reported by τ and is unaware of the unreported claims. Furthermore, the complexity of the problem increases due to another delay in the payment process. When a claim is reported, it is common for it to be paid in several sub-payments over time rather than as a lump sum. This is because the impact of an accident can evolve, requiring additional payments until it is fully settled. Therefore, at a given valuation time τ, the insurance company is only aware of the claims that were reported on time, and for each one, it may have paid only a partial amount of the associated claim size, rather than the entire amount.
As a result, the insurance company is interested in estimating the total claim amount of these unreported claims, as well as the remaining payments of the reported claims, to construct the overall reserve of outstanding claims. This reserve is also known in the insurance jargon as the Incurred But Not Settled (IBNS), and it is usually decomposed into further subcomponents depending on whether the payment is associated with a reported or not reported claim. For simplicity, here we consider the estimation of the overall reserve of outstanding claims without referring to the components.
That said, let's describe the payment process as follows:
* Let N(τ) represent the total number of different payments associated with all the claims whose accident time is before the valuation time τ.
* Let Y_i, i= 1, …, N(τ) denote the sequence of payments. Note that some payments may belong to the same claim/accident, but we will not make any distinction.
* Let T_i, i= 1, …, N(τ) denote the sequence of accident times associated with the claim underlying each payment; let R_i, i= 1, …, N(τ) denote the sequence of the associated reporting times; let S_i, i= 1, …, N(τ) denote the sequence of the associated times in which the payments take place. Clearly, T_i < R_i < S_i and note that the values T_i, R_i would be the same for payments associated with the same claim, but the S_i would differ.
* Let U_i = R_i-T_i, i= 1, …, N(τ) be the sequence of the reporting delay times associated with the claim underlying each payment, and V_i = S_i-R_i, i= 1, …, N(τ) be the sequence of the associated payment delay time of each payment. Note that U_i is the same for all the payments associated with the same claim.
* Let X_i, i= 1, …, N(τ) be the sequence of information/attributes of relevance, that is associated with the accident, the type of claim, the policyholder attributes, or the characteristics of the payment itself.
* Let N^P(τ) be the number of payments made by valuation time τ out of the total N(τ), i.e., the number of payments made for the claims reported by τ.
Along those lines, the total liability of the insurance company associated with accidents occurring before the valuation time τ, which we will denote as L(τ), is given by
L(τ) = ∑_i=1^N(τ) Y_i.
Similarly, the portion of liability that is known to the insurance company (i.e., the so-called paid amount) by valuation time τ, which we will denote as L^P(τ), is
L^P(τ) = ∑_j=1^N^P(τ) Y_j.
We note that the indices of the payments made might not have the same order as all the payments, but we write it this way using the index j for the sake of simplicity of the notation.
Along these lines, an actuary is interested in estimating the remaining liability, i.e., the outstanding claims. We will denote this quantity as L^O(τ), and it is given by the difference
L^O(τ) = L(τ)-L^P(τ).
This value is what the insurance company requires to set up the reserve for outstanding claims, either non-reported, non-settled, or both, and is our goal for estimation. For further details of the claim reserving problem, we refer the reader to <cit.>.
§.§ A survey sampling framework for claim reserving
Our proposal in this paper is based on a simple yet novel idea that allows us to frame the reserving problem in the context of survey sampling, enabling us to leverage techniques from this field to our advantage. Survey sampling is a statistical technique used to estimate population totals based on a smaller sample, especially in contexts where data collection from the entire population is impractical. The sampling design is the systematic process of selecting individuals or units from the population to be included in the sample. Different sampling methods are used depending on the research objectives and resources available. By using statistical techniques based on the sampling design, researchers can make reliable inferences about the population based on the sample.
Applying this concept to our reserving problem, we can consider all N(τ) payments as the population under study, while the current N^P(τ) payments made by the valuation date serve as the selected sample for understanding this population. It is important to note that the sampling design and the actual sampling process are not determined or performed by the investigator, but are purely driven by the randomness associated with whether a payment is made or not by the valuation date. Thus, the sample is given rather than being selected by the actuary. This is one of the distinctions between our setup and the typical survey sampling situations.
The sampling mechanism based on the payment data can be conceptualized as a two-stage sampling process (<cit.>). In the first stage, a Poisson sampling without replacement is employed to sample the reported claims. This means, for each of the claims in the population, a Bernoulli experiment is conducted, where success is defined as the claim being reported by the valuation time, and failure occurs if it is not reported. Refer to <cit.> for more details on the Poisson sampling.
Moving to the second stage, we focus on the payments associated with each of the sampled claims from the previous stage (i.e., the reported claims). In this case, another sampling procedure is carried out to determine which payments of a claim are made before the valuation time and which are not. This is also achieved by Bernoulli-like experiments, however, do note that these are not independent because of the ordering of the payments e.g. a second payment of a claim can be sampled as long as the first payment is sampled.
As a result of the sampling, we can assign a dichotomic random variable 1_i(τ), i=1, …, N(τ) with success probability π_i(τ), to each payment in the population. Such a variable takes the value of 1 or 0, indicating whether the payment Y_i belongs to the sample of payments made or not, respectively, by a given valuation time τ. These variables are referred to as the membership indicators of the payments and are determined based on the delay in reporting (for the first stage of sampling) and the delay in payment (for the second stage) by the valuation time. Mathematically,
1_i(τ) = 1_{ S_i ≤τ} = 1_{ T_i+ U_i + V_i ≤τ} =1_{ U_i ≤τ-T_i }1_{ V_i ≤τ-R_i }
where the indicators in the product on the right-hand side are the indicators of the first and second stages of sampling, respectively. The probabilities π_i(τ) are known as inclusion probabilities and can be interpreted as the likelihood of payment Y_i belonging to the sample or, equivalently, being paid by the valuation time τ. Mathematically, these are given by
π_i(τ) = P( U_i ≤τ-T_i ) × P( V_i ≤τ-R_i ) = π^U_i(τ) ×π^V_i(τ)
where π^U_i(τ)=P( U_i ≤τ-T_i ) and π^V_i(τ)=P( V_i ≤τ-R_i ) are the inclusion probabilities of the first and second stage of sampling, respectively. Note that the value of the second probability depends on the outcome of the first stage of sampling, and so π^V_i(τ) is in principle a conditional probability given the realization of U_i. However, we will omit this in the notation for simplicity. These probabilities are dependent on the valuation time and are likely to vary across payments due to the different attributes X_i associated with each payment. While a more formal notation would be π(τ; Y_i, X_i) to highlight this dependency, we simplify it as π_i(τ) to streamline the notation and emphasize that the indexation on i corresponds to the probabilities being specific for each payment, and determined based on their attributes. In the literature on survey sampling, this is known as the sampling being informative as the actual values of the payments may be associated with the sampling design. It is important to note that these probabilities are not predefined and are therefore unknown to the investigator. We will delve into this matter further in Section <ref>.
Finally, note that the sample size in the design is not a fixed quantity. The sample size, which in our case is equivalent to the number of payments currently made N^P(τ), is a random variable defined as N^P(τ) = ∑_i=1^N(τ)1_i(τ), which we can identify as the thinning of the counting process of the total number of payments.
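To make the sampling mechanism concrete, the following minimal Python sketch (our own illustration; the exponential delay distributions, the valuation time, and the independence between payments of the same claim are all simplifying assumptions, not part of the framework) simulates the two-stage design and produces the membership indicators 1_i(τ):

```python
import numpy as np

rng = np.random.default_rng(42)
tau = 4.0                                  # valuation time, in illustrative time units

# Hypothetical population of payments for accidents occurring before tau.
n = 10_000
T = rng.uniform(0.0, tau, size=n)          # accident times T_i
U = rng.exponential(scale=0.1, size=n)     # reporting delays U_i (assumed distribution)
V = rng.exponential(scale=0.5, size=n)     # payment delays V_i (assumed distribution)

# Two-stage membership indicator 1_i(tau): claim reported by tau AND payment made by tau.
reported = U <= tau - T                    # first stage: claim reported by tau
member = reported & (V <= tau - (T + U))   # second stage: payment made by tau

print("share of payments observed at tau:", member.mean())
```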
§.§ Point estimation using the Horvitz-Thompson estimator
As motivated by the population sampling literature (<cit.> or <cit.>), a well-established estimator of the population total of payments (i.e the ultimate L(τ)) is provided by the Horvitz-Thompson (HT) estimator described as follows
L̂ (τ) = ∑_j=1^N^P(τ)Y_j/π_j(τ),
and therefore an unbiased estimator of the outstanding claims is the difference between the estimated total and the currently paid amount,
L̂^O(τ) = L̂(τ)-L^P(τ) = ∑_j=1^N^P(τ)Y_j/π_j(τ) - ∑_j=1^N^P(τ) Y_j = ∑_j=1^N^P(τ)1-π_j(τ)/π_j(τ) Y_j.
The intuition behind the HT estimator lies in the fact that only a portion of all payments Y_j is reported, proportionally to π_j(τ), and so each payment in the sample is “augmented" by a factor of 1/π_j(τ) to approximate the actual total amount. It is important to note that the HT estimator is non-parametric, meaning that it leads to an estimation of the reserve that does not require any assumptions on the underlying distribution of the number of claims (frequency) or the distribution of claim sizes (severity), which is a remarkable property. Additionally, we emphasize the fact that even though the estimator is based on the population level (i.e a macro-level scale), the inclusion probabilities are dependent on the individual attributes of policyholders, claims, and payments. Therefore the estimator incorporates granular information as part of the estimation.
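To illustrate how little machinery the estimator requires, a minimal sketch (the function name and the toy numbers are ours; the inclusion probabilities are assumed to have been estimated already, as discussed later) is:

```python
import numpy as np

def ipw_reserve(y_paid, pi_paid):
    """HT/IPW estimates of the ultimate liability and of the outstanding claims.

    y_paid  : payment amounts observed (paid) by the valuation date
    pi_paid : estimated inclusion probabilities pi_j(tau) of those payments
    """
    y = np.asarray(y_paid, dtype=float)
    pi = np.asarray(pi_paid, dtype=float)
    ultimate = np.sum(y / pi)                   # hat L(tau)
    outstanding = np.sum((1.0 - pi) / pi * y)   # hat L^O(tau)
    return ultimate, outstanding

# Toy usage with made-up figures.
ultimate, outstanding = ipw_reserve([1000.0, 250.0, 400.0], [0.9, 0.6, 0.3])
print(ultimate, outstanding)
```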
The HT estimator is widely recognized as one of the most influential estimators in the statistics literature, having been extensively studied for over 70 years in the field of population sampling (e.g <cit.>). Consequently, the HT estimator has a solid theoretical foundation and possesses numerous desirable properties that carry over directly to the claim-reserving problem, including consistency, unbiasedness, sufficiency, among others. More recently, it has also been applied in inverse probability weighting (IPW) methods for estimation in causal inference (<cit.>), including applications in fairness in insurance. The terminology “IPW estimator" is more widespread within and outside the statistics literature, and so we will mostly refer to the estimator of the reserve as the IPW estimator, and reserve the naming of HT estimator when referring to the general concept.
A specific case of interest arises when we set Y_i = 1. In this scenario, all the sums above simplify to counts of payments, allowing us to obtain an unbiased estimator for the number of payments yet to be made, as
N̂^O(τ) := N̂(τ)-N^P(τ) = ∑_j=1^N^P(τ)1/π_j(τ) - ∑_j=1^N^P(τ) 1 = ∑_j=1^N^P(τ)1-π_j(τ)/π_j(τ).
It is worth noting that this particular expression coincides with the one utilized by <cit.> for the specific case of the number of incurred but not reported (IBNR) claims. In their work, they derived this expression under the assumption that the number of unreported claims follows a geometric distribution and demonstrated its unbiasedness when the number of claims is driven by a Poisson process. However, it is important to emphasize that within the framework of the HT estimator, this result is immediate and does not require the geometric distribution assumption.
§.§.§ A “Micro-level" Chain-Ladder method
From an actuarial standpoint, the IPW estimator for the ultimate can be perceived as an individual-level adaptation of the Chain-Ladder method. By expressing the estimator in Equation (<ref>) as
L̂(τ) = ∑_j=1^N^P(τ) f_j (τ) Y_j,
we can interpret f_j (τ) := 1/π_j(τ) as an individual development factor assigned to each payment Y_j. These factors serve to project the payment to its ultimate value which aligns with the fundamental principle of the Chain-Ladder method. As the factors f_j(τ) are influenced by the policyholder's attributes, we can think of this methodology as a “micro-level" version of the Chain-Ladder method, as it applies the development on an individual level while retaining the essential characteristics of the Chain-Ladder. Indeed, we note that if no information about attributes is incorporated in the inclusion probabilities, then the development factors would be uniform across all claims. Consequently, the ultimate liability would be determined solely by multiplying the current paid amount by the development factor, which is nothing but the Chain-Ladder method.
This analogy provides the IPW estimator with an intuitive and interpretable estimation of the reserve that is already well-established in the actuarial community, and makes it more appealing for practitioners. We would like to note that the IPW estimator and the Chain-Ladder method have a deeply entangled connection, a discussion we deepen in <cit.>.
§.§ Confidence interval of the estimation
Non-parametric confidence intervals for the reserve can be constructed based on the sampling distribution of the HT estimator, as discussed by <cit.>. In summary, under minimal regularity conditions, the HT estimator follows approximately a normal distribution under the two-stage sampling design for large populations (<cit.>). Thus, an approximate 1-α confidence interval can be constructed using normal quantiles. However, as explained by <cit.>, the accuracy of the normal approximation relies on the sample size and the distribution of Y_i, which tends to exhibit skewness and heavy-tails. Consequently, the normal distribution might provide a suboptimal approximation for our reserving application.
Alternatively, one can construct a confidence interval by applying a log transformation to the liability. This approach utilizes the delta method to construct an interval for the logarithm of the liability, which tends to exhibit behavior closer to normality. Subsequently, the interval is transformed back to the original scale using the reverse transformation. This log-transformed confidence interval can be a more appropriate choice, considering the distribution of the data and its potential skewness and heavy-tailed characteristics.
Therefore, an approximate 1-α confidence interval for L^O(τ) can be constructed as
( exp( log( L̂^O(τ) ) - Z_α/2√( Var(L̂^O(τ) ) )/L̂^O(τ) ) , exp( log( L̂^O(τ) ) + Z_α/2√( Var(L̂^O(τ) ) )/L̂^O(τ) ) ),
where Z_α/2 is the α/2 quantile of the standard normal distribution.
The variance of the estimator is traditionally estimated using the expression
V̂âr̂_1( L̂^O(τ) ) = ∑_j=1^N^P(τ) (1-π_j(τ))^3/π_j^2(τ) Y_j^2 + ∑_j=1^N^P(τ)∑_k=1,k≠ j^N^P(τ)χ_jkπ_max(j,k)- π_j π_k /π_max(j,k)1-π_j(τ)/π_j(τ)1-π_k(τ)/π_k(τ) Y_j Y_k,
where χ_jk=1 if the j-th and k-th payment in the data are generated from the same claim, and χ_jk=0 otherwise.
As noted by <cit.>, the computation of the expression for the variance above can be quite laborious. This is because, in the second term, there is a combinatorial component associated with the covariance between payments belonging to the same claim. To address this challenge, alternative estimators of the variance can be employed. <cit.> propose the use of a simpler estimator, given by:
V̂âr̂( L̂^O(τ) ) = ∑_j=1^N^P(τ)( N^P(τ)1-π_j(τ)/π_j(τ) Y_j- L̂^O(τ) )^2 /N^P(τ)(N^P(τ)-1)
The expression above can be viewed as the jackknife estimator of the variance of the HT estimator (see, e.g., <cit.>). This formulation is computationally simple to obtain and tends to provide a more conservative estimate (<cit.>), which is desirable for the claim reserving problem.
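A minimal sketch combining this simpler variance estimator with the log-transformed interval above is given next; the function name is ours and the inclusion probabilities are again assumed to be given:

```python
import numpy as np
from scipy.stats import norm

def ipw_log_ci(y_paid, pi_paid, alpha=0.05):
    # Outstanding-claims estimate and its jackknife-type variance estimate.
    y = np.asarray(y_paid, dtype=float)
    pi = np.asarray(pi_paid, dtype=float)
    terms = (1.0 - pi) / pi * y
    n = terms.size
    reserve = terms.sum()
    var_hat = np.sum((n * terms - reserve) ** 2) / (n * (n - 1))
    # Log-transformed (delta-method) confidence interval.
    z = norm.ppf(1.0 - alpha / 2.0)
    half_width = z * np.sqrt(var_hat) / reserve
    return reserve * np.exp(-half_width), reserve * np.exp(half_width)
```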
Lastly, we note that another approach for the construction of confidence intervals can be obtained using the bootstrap as described in <cit.>. This approach, although computationally more expensive, is data-driven and could be a desirable alternative.
The following steps can be performed to construct confidence intervals:
* Construct an artificial population that mimics the total population of payments. To achieve this, augment the current payments data by repeating each payment observation, Y_j, a total of 1/π_j(τ) times. This results in N̂(τ) observations in the artificial population, where each observation has the same inclusion probability as the original observation it was derived from.
* Draw a series of B independent samples, referred to as "bootstrap samples," from the artificially created population, using the same sample design as the data. This is done by sampling observations Y_j with the associated inclusion probabilities π_j(τ), while ensuring that payments from the same claim are sampled together.
* For each bootstrap sample, calculate the estimator of non-reported payments, L̂_1^*(τ), …, L̂_B^*(τ), using Equation <ref>. The distribution of these values approximates the true sampling distribution of the HT estimator.
* Construct a confidence interval at a specified confidence level 1-α using the percentile method. Find the α/2 and 1-α/2 percentiles from the calculated values, denoted as L̂_l^*(τ) and L̂_u^*(τ), respectively. The resulting confidence interval is reported as (L̂_l^*(τ),L̂_u^*(τ) ). A code sketch of these steps is given after this list.
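The following is a minimal sketch of these steps. For brevity it resamples payments independently with probability π_j(τ) (plain Poisson sampling), ignoring the joint resampling of payments from the same claim, and rounds 1/π_j(τ) to the nearest integer when building the artificial population; both are simplifications of the procedure described above.

```python
import numpy as np

def bootstrap_reserve_ci(y_paid, pi_paid, B=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y_paid, dtype=float)
    pi = np.asarray(pi_paid, dtype=float)

    # Step 1: artificial population, each payment repeated about 1/pi_j times.
    reps = np.maximum(1, np.round(1.0 / pi)).astype(int)
    y_pop, pi_pop = np.repeat(y, reps), np.repeat(pi, reps)

    # Steps 2-3: resample with the (simplified) design and recompute the estimator.
    boots = np.empty(B)
    for b in range(B):
        take = rng.random(y_pop.size) < pi_pop
        boots[b] = np.sum((1.0 - pi_pop[take]) / pi_pop[take] * y_pop[take])

    # Step 4: percentile interval.
    return tuple(np.quantile(boots, [alpha / 2.0, 1.0 - alpha / 2.0]))
```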
One of the advantages of the bootstrap approach is its ability to account for parameter uncertainty and its influence on interval construction. Let's consider estimating π_j(τ) using a consistent estimator π̂_j(τ) obtained through some estimation method. We denote the associated sampling distribution of π̂_j(τ) as F_π̂_j(τ). To incorporate parameter uncertainty, an additional step, denoted as Step 0, can be introduced in the bootstrap generation process:
0. For each j, draw a random realization from the sampling distribution F_π̂_j(τ). Let π^*_j(τ) represent the set of generated inclusion probabilities. These probabilities will be used in the subsequent steps of the bootstrap procedure.
Without going into further detail, it is worth mentioning that the bootstrap approach can be combined with the interval based on the normal distribution mentioned earlier to account for parameter estimation error. In this case, Step 0 can be applied in the same manner, but bootstrap samples are generated for the confidence interval in Equation (<ref>) itself. The resulting confidence interval would then be determined by the α/2 and 1-α/2 quantiles of the bootstrapped lower and upper limits, respectively.
§ CALCULATION OF RBNS, IBNR, INCREMENTAL CLAIMS AND OTHER RESERVES
The reserve for outstanding claims, as discussed earlier, accounts for unreported and partially paid claims. While Equation (<ref>) provides an estimator for the total reserve, it doesn't specify the allocation of the reserve to different types of payments. However, for accounting purposes, cash management, and risk assessment, actuaries need to specify the components of the overall reserve, commonly known as the IBNR (Incurred But Not Reported) reserve, the RBNS (Reported But Not Settled) reserve, and incremental payments over specific time periods.
In this section, we present how the survey sampling framework can be adapted to decompose the estimation of the total reserve (Equation (<ref>)) into these sub-components as per the actuary's requirements. To do so, we introduce the “change of population principle" as a general approach to accomplish this decomposition within the IPW framework, and then demonstrate its application in deriving the RBNS, IBNR, incremental claims, and potentially other relevant calculations.
§.§ The change of population principle
In Section <ref>, we used the fact that the currently paid amount can be considered as a sample of the total amount of payments. As a result, we defined a sampling design within the total amount of payments along with its corresponding inclusion probabilities. However, it is important to recognize a simple yet crucial fact: the currently paid amount can also be regarded as a sample from various sub-populations within the total amount of payments.
Figure <ref> demonstrates a method of partitioning the total liability at a given valuation time τ into sub-populations associated with specific reserves of interest. This figure provides a visual representation, akin to a run-off triangle, distinguishing reported and non-reported payments at τ. The x-axis represents the development time, which goes from t=T (i.e. the accident time) up to time t= T+ω, being ω the maximum settlement time of a claim. This figure can be thought of as a screenshot of the classification of all the payments at a given valuation date τ.
To illustrate the concept, let's examine Figure <ref>. Combining all regions (A to G) yields the population discussed in Section <ref>, representing the total liabilities. The lower half of the figure, regions A to D, encompasses the payments associated with claims reported by τ. Additionally, by narrowing our focus to the lower half of the figure and considering payments up to a specific time, such as t=t_2 (the union of regions A, B, and C), we obtain a truncated version of the payments; specifically, the total payments made for currently reported claims, excluding those made after t=t_2. Note that the current paid amount is a sample from each of these subpopulations.
Adopting this approach, estimating the total liability for a specific subpopulation involves treating it as the main population from which current payments are sampled. Consequently, the IPW estimator, under a different sampling design, can be employed to estimate the liability. We refer to this approach as the “change of population principle”.
The different sample design under the change of population principle leads to different inclusion probabilities. Nevertheless, these probabilities can be easily determined using elementary conditional probability arguments. Specifically, when we limit the analysis to a subpopulation 𝒮, we denote the inclusion probability under this restriction as π^𝒮_j(τ). This probability represents the likelihood of payment Y_j being reported at τ, given its membership in subpopulation 𝒮. Note that these probabilities differ in meaning and in value to those defined in Section <ref>. Bayes' rule allows expressing this probability as
π^𝒮_j(τ) = π_j(τ)/P_j(𝒮),
where P_j(𝒮) denotes the probability of payment j being sampled in subpopulation 𝒮 according to the original sampling design, which is influenced by factors like reporting delay and claim evolution. It is important to note that in this context, we assume the subpopulation 𝒮 is a subpopulation encompassing the current payments (region A in Figure <ref>).
We will observe that for the reserves of interest, these probabilities can be straightforwardly expressed in terms of the probabilities associated with the previously defined delay time random variables U_j and V_j. Therefore, no additional estimations are necessary.
§.§ Calculation of the RBNS reserve
The Reported But Not Settled (RBNS) reserve represents payments that are yet to be made for claims already reported at valuation time τ. This reserve corresponds to the combined regions B, C, and D in Figure <ref>.
To estimate the reserve, we apply the change of population principle and define the population 𝒮 as the total payments associated with reported claims at τ. This population corresponds to the lower half of Figure <ref>, specifically 𝒮=A ∪ B ∪ C ∪ D. In reserving terminology, this corresponds to the ultimate of incurred losses for claims reported prior to τ.
Next, we determine the inclusion probabilities. The selection of the subpopulation depends on claim reporting, which occurs with probability P_j(𝒮)=P(U_j ≤τ -T_j). Utilizing the previously mentioned result derived from Bayes' rule, the new inclusion probabilities for this population are
π_j(τ)/P_j(𝒮) = P(U_j ≤τ -T_j) × P(V_j ≤τ -R_j)/P(U_j ≤τ -T_j) = P(V_j ≤τ -R_j) = π_j^V(τ).
This result is intuitive since the region only considers claims already reported at τ, and the remaining randomness pertains to the evolution of payment occurrences only. Consequently, we can utilize the IPW estimator to obtain an unbiased estimator for the total payments of reported claims as
∑_j=1^N^P (τ) Y_j/π_j^V(τ).
Hence, the RBNS reserve of interest can be obtained by subtracting this quantity from the current paid amount:
L̂^RBNS(τ) = ∑_j=1^N^P (τ) Y_j/π_j^V(τ) -L^P(τ) = ∑_j=1^N^P (τ)1-π_j^V(τ)/π_j^V(τ) Y_j.
§.§ Calculation of the pure IBNR reserve
To estimate the Incurred But Not Reported (IBNR) reserve, we cannot directly apply the change of population principle, since the current paid amount is not a sample from the population of payments associated with non-reported claims (Figure <ref>). However, we can easily overcome this by expressing the IBNR liability as the difference between two population totals: the total payments (all regions in Figure <ref>) and the total payments of currently reported claims (lower half of Figure <ref>). Estimations for the liabilities associated with these populations have been discussed in Sections <ref> and <ref>, respectively. Therefore, the IBNR liability can be estimated as the difference between these two estimations.
L̂^IBNR(τ) = L̂^O(τ) - L̂^RBNS(τ) = ∑_j=1^N^P (τ)( 1/π_j(τ) - 1/π_j^V(τ))Y_j = ∑_j=1^N^P (τ)( 1-π_j^U(τ)/π_j^U(τ)) Y_j/π_j^V(τ).
We would like to highlight that this approach to estimating the IBNR is analogous to the conventional actuarial method using run-off triangles, where the total reserve is estimated using the incurred claims triangle and subtracting the reserve obtained from the paid claims triangle. Unlike aggregate approaches that may yield negative reserve estimates, our method ensures non-negative estimations.
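Since all components are driven by the same two probability factors, the whole decomposition can be computed in a few lines. In the sketch below (our own illustration, with π^U and π^V assumed to be already estimated), the assertion simply checks that the RBNS and IBNR pieces add up exactly to the total outstanding estimate above.

```python
import numpy as np

def reserve_decomposition(y_paid, pi_u, pi_v):
    y = np.asarray(y_paid, dtype=float)
    pi_u = np.asarray(pi_u, dtype=float)   # reporting-stage inclusion probabilities
    pi_v = np.asarray(pi_v, dtype=float)   # payment-stage inclusion probabilities
    pi = pi_u * pi_v                       # overall inclusion probability pi_j(tau)

    rbns = np.sum((1.0 - pi_v) / pi_v * y)
    ibnr = np.sum((1.0 - pi_u) / pi_u * y / pi_v)
    outstanding = np.sum((1.0 - pi) / pi * y)
    assert np.isclose(outstanding, rbns + ibnr)  # the decomposition is exact
    return rbns, ibnr, outstanding
```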
§.§ Calculation of cumulative and incremental payments
The estimator we have presented so far provides the ultimate amount of liabilities, but insurance companies require projections of the reserve payments over specific periods. These payments, known as incremental claims, can be estimated within our framework as follows: we utilize the change of population principle to estimate cumulative claims for different periods and then calculate the incremental claims as the difference between these cumulative claims. Notably, our model is continuous rather than discrete, allowing for the accommodation of any desired periodicity for incremental claims. We will illustrate this process for the total reserve only; however, it can be similarly applied to the RBNS.
Let's consider an insurance company assessing claims incurred before the valuation time τ and interested in estimating the incremental claims associated with a future period between t_1 and t_2 (τ < t_1 < t_2), denoted as L(τ, t_1, t_2). Visually, L(τ, t_1, t_2) corresponds to C and F in Figure <ref>.
We start by considering the population of cumulative claims up to time t_1 ≥τ, where only payments made up to t_1 are included, i.e., L(τ, 0, t_1) (regions A, B, and E in Figure <ref>). Using the change of population principle, a payment belongs to this population if its payment time is before t_1, which occurs with probability P_j(𝒮) = π_j(t_1). Thus, the inclusion probability is:
π_j(τ)/P_j(𝒮) = π_j(τ)/π_j(t_1),
and so the IPW estimator for cumulative claims is given by
L̂(τ, 0, t_1)= ∑_j=1^N^P(τ)π_j(t_1)/π_j(τ)Y_j.
The incremental claims between t_1 and t_2 are then given by L(τ, t_1, t_2) = L(τ, 0, t_2) - L(τ, 0, t_1), and so an unbiased estimator for incremental claims is
L̂(τ, t_1, t_2)= L̂(τ, 0, t_2)- L̂(τ, 0, t_1)= ∑_j=1^N^P(τ)π_j(t_2)-π_j(t_1)/π_j(τ)Y_j.
This is a very intuitive expression: The denominator, π_j(τ), scales the observed claims Y_j to the total amount, while the difference in probabilities in the numerator, π_j(t_2) - π_j(t_1), captures the proportion of the total observed between t_1 and t_2.
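A short sketch of this calculation (our own; the inclusion probabilities evaluated at τ, t_1 and t_2 are assumed to come from the fitted model) is:

```python
import numpy as np

def incremental_claims(y_paid, pi_tau, pi_t1, pi_t2):
    # IPW estimate of the payments expected to fall between t1 and t2 (both > tau).
    y = np.asarray(y_paid, dtype=float)
    pi_tau = np.asarray(pi_tau, dtype=float)
    increment = np.asarray(pi_t2, dtype=float) - np.asarray(pi_t1, dtype=float)
    return np.sum(increment / pi_tau * y)
```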
The same construction applies to the RBNS component: restricting the population to payments of reported claims and replacing π_j by π_j^V, the RBNS payments falling between t_1 and t_2 are estimated as
L̂^RBNS(t_1,t_2)(τ) = ∑_j=1^N^P (τ)π_j^V(t_2)-π_j^V(t_1)/π_j^V(τ) Y_j,
and the corresponding IBNR payments follow by difference,
L̂^IBNR(t_1,t_2)(τ) = L̂(τ, t_1, t_2) - L̂^RBNS(t_1,t_2)(τ) = ∑_j=1^N^P (τ)( π_j(t_2)-π_j(t_1)/π_j(τ) - π_j^V(t_2)-π_j^V(t_1)/π_j^V(τ)) Y_j.
The variance of the incremental claims estimator would be:
Var( L̂(τ, t_1, t_2) ) = ∑_j=1^N(τ)( π_j(t_2)-π_j(t_1)/π_j(τ))^2 π_j(τ)(1-π_j(τ))Y^2_j,
which can be estimated from the observed payments as:
V̂âr̂( L̂(τ, t_1, t_2) ) = ∑_j=1^N^P(τ)( π_j(t_2)-π_j(t_1)/π_j(τ))^2 (1-π_j(τ))Y^2_j.
Similarly, the covariance between the RBNS estimators on disjoint intervals is given by:
Cov(L̂^RBNS(t_1,t_2)(τ), L̂^RBNS(t_3,t_4)(τ) ) = ∑_j=1^N(τ)∑_k=1^N(τ)( π_j^V(t_2)-π_j^V(t_1))/π_j^V(τ)( π_k^V(t_4)-π_k^V(t_3))/π_k^V(τ)( π_jk^V(τ) - π_j^V(τ)π_k^V(τ) )Y_jY_k,
where π_jk^V(τ) denotes the joint inclusion probability of payments j and k (with π_jj^V(τ)=π_j^V(τ)), which can be estimated by
Ĉôv̂(L̂^RBNS(t_1,t_2)(τ), L̂^RBNS(t_3,t_4)(τ) ) = ∑_j=1^N^P(τ)∑_k=1^N^P(τ)( π_j^V(t_2)-π_j^V(t_1))/π_j^V(τ)( π_k^V(t_4)-π_k^V(t_3))/π_k^V(τ)(π_jk^V(τ) - π_j^V(τ)π_k^V(τ)/π_jk^V(τ) ) Y_jY_k.
§.§ Other applications
The IPW framework and the change of population principle extend beyond the reserves discussed thus far, allowing for the estimation of other types of reserves based on the specific needs of the actuary. For instance, the incurred but not paid (IBNP) reserve can be estimated as a portion of the current RBNS estimation. In this case, the payments themselves serve as the population for analysis using the change of population principle, with an inclusion probability linked to the occurrence of the first payment in the sequence.
Another example, though less explored, is the unearned premium reserve (UPR). This reserve pertains to payments for claims where the accident occurs after the valuation time τ, but only for policies in force at τ. To estimate the UPR, the change of population principle can be applied by defining a larger superpopulation consisting of all payments associated with claims from policies in force at τ, regardless of when the accidents occurred.
Finally, it is important to note that the IPW framework provides estimations without assuming a specific meaning for Y_i. The quantity of interest represented by Y_i can be as diverse as the actuary requires. For example, setting Y_i = 1 provides an estimation of the number of payments. Alternatively, the actuary can define Y_i as fees, commissions, policy management costs, etc., enabling a cost decomposition analysis of the reserve estimates.
A particular construction that we believe practitioners may find interesting is the use of case estimates. Here, we describe it as follows:
§.§.§ Calculation of IBNR and RBNS reserves under case estimates
On certain occasions, insurance companies make estimates of the total amount that will be paid for specific claims based on a case-by-case analysis. These estimates are commonly referred to as case estimates. In such situations, the insurance company can use the case estimate information to construct the RBNS without relying on a model. For example, in a car crash resulting in total damage to the vehicle, the insurance company already knows that it will have to pay for the entire vehicle, regardless of the number or timing of the payments. In such a case, the RBNS is just set as the difference between the case estimate and the paid amount. This can be utilized to obtain a better estimation of the total liability and, consequently, a more accurate estimation of the IBNR reserve.
In this case, within the two-stage sampling mechanism described in Section 2, we can assume that only the first stage is performed. This means that either a claim is reported or not, and the ultimate value of the claim based on the case estimate, which we will denote as Ŷ_i, is observed. Alternatively, this can be viewed as if there were only one payment per claim, where that payment represents the full amount of the claim.
Following this analogy, with this "single payment" per claim, we can estimate the total outstanding claims and the IBNR reserve using the IPW framework as follows:
L̂(τ) = ∑_j=1^N^P (τ)Ŷ_j/π_j^U(τ)
L̂^IBNR(τ) = ∑_j=1^N^P(τ)1-π_j^U(τ)/π_j^U(τ)Ŷ_j
where in this case N^P(τ) denotes the number of these "single payments" made by valuation time τ, which coincides with just the number of reported claims.
The resulting estimator for the IBNR also provides an intuitive reading of the estimator of the previous section: the quantity Y_j/π_j^V(τ) appearing there can informally be interpreted as an HT estimate of the ultimate amount of the claim, Ŷ_j, which leads back to the expression above.
§ ESTIMATION OF THE MODEL
In order to implement the IPW estimator, the key input required is the unknown inclusion probabilities π_i(τ). These probabilities are associated with the evolution of a claim, including reporting and settlement delays and depend on various attributes of the payment, the claim, and the policyholder, denoted as X_i, as well as the claim amount Y_i itself. In this section, we outline a data-driven approach to estimating these values. As explained in Section <ref>, the inclusion probabilities consist of two separate components: the probability of reporting and the probability of settlement. Each one is estimated separately, so we discuss different strategies in Sections <ref> and <ref>.
§.§ Estimation of the reporting delay times probabilities P(U_j ≤τ -T_j)
To estimate these probabilities, it is common to assume that the reporting delay times, conditional on the claim attributes X_i, follow a common distribution function (see, for example, <cit.>), which we denote by F_U | X_i(u) and which is the target of estimation within a regression framework that incorporates the dependence on covariates.
The variable of interest, U_i, is a time-to-event random variable commonly studied in survival modeling. Therefore, existing approaches in survival modeling can be utilized to estimate the overall distribution function and the desired probabilities. Proportional hazards models, also known as Cox regression models, have been widely studied for this purpose in the statistical literature (e.g <cit.>). These methods aim to directly model the log-hazard function of the random time variable, while accounting for the attributes in the modeling using a linear regression-like model
log( λ_U | X(u) ) = log(λ_0(u)) + ⟨ X, β⟩ +ε,
where, λ_0(u) represents the baseline hazard function, which can be chosen from a parametric family or modeled nonparametrically e.g. using B-spline representation. ⟨ X, β⟩ represents the regression formula involving the covariates X with corresponding regression parameters β, and ε captures unobserved effects as an error term, also known as a random effect or frailty. Depending on the analysis, various structures for the random effect can be considered, such as autoregressive structure to capture trends and dependencies over time, or correlated effects to account for dependencies between claims that evolve together. Further guidance on specifying models for the hazard function can be found in <cit.>.
Consequently, the desired probability is derived using the relationship
π_i^U(τ) = Pr(U_i ≤τ-T_i)=F_U | X_i(τ-T_i) = 1-exp( -∫_0^τ-T_iλ_U | X_i(u) du).
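In practice this integral is evaluated numerically from the fitted conditional hazard. A minimal sketch (the quadrature rule and the constant-hazard toy example are our own choices, not the model of this section) is:

```python
import numpy as np

def reporting_inclusion_prob(hazard_fn, tau, t_accident, n_grid=1000):
    # pi_i^U(tau) = 1 - exp( - integral_0^{tau - T_i} lambda_{U|X_i}(u) du ),
    # evaluated with a midpoint rule on the fitted conditional hazard.
    horizon = tau - t_accident
    if horizon <= 0:
        return 0.0
    du = horizon / n_grid
    u_mid = (np.arange(n_grid) + 0.5) * du
    cum_hazard = np.sum(hazard_fn(u_mid)) * du
    return 1.0 - np.exp(-cum_hazard)

# Toy usage: a constant reporting hazard of 12 per year, accident 0.1 years before tau.
print(reporting_inclusion_prob(lambda u: np.full_like(u, 12.0), tau=2.0, t_accident=1.9))
```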
A crucial aspect in the estimation of the model above is accounting for the right truncation of the data. Indeed, due to the delay in the reporting times, our observations are limited to the conditional random variables U_i | U_i ≤τ - T_i, and ignoring this fact would result in a downward bias in the estimated distribution. Fortunately, the literature on survival analysis has widely explored this issue and provided solutions that can be adopted for the estimation of the model; see, e.g., <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
It is worth noting that not all survival models use linear regression structures or aim to describe the hazard function, and alternative approaches can offer different and flexible structures inspired by the machine learning literature. For instance, <cit.> consider non-linear regression on covariates via deep learning approaches, <cit.> propose a flexible model based on mixture of experts, and other approaches utilize survival trees such as <cit.>. These alternatives provide increased flexibility compared to proportional hazard models, but may require additional expertise for model fitting and interpretation. The choice of the model must be achieved in a data-driven fashion aiming for the best fit to the data. Regardless of the methodology, careful consideration should be given to estimation under the right truncation of the data.
Lastly, due to the popularity of survival analysis in statistical applications, most of the methods described above have already been implemented in standard statistical software and are readily available for use in our application. These include, among others, implementations of Cox models as in <cit.>, mixtures of experts as in <cit.>, and deep survival models, survival trees, forests, and more as in <cit.>.
§.§ Estimation of the payment times probabilities P(V_j ≤τ -R_j)
Similarly to the previous case, we assume that the payment times, conditional on the claim attributes X_i (including the reporting delay time U_i), follow a common distribution function, denoted as F_V | X_i(v), that we aim to estimate via a regression framework. We denote by ω the maximum settlement time of a claim. While estimating this probability might seem similar to the previous case, a significant difference arises due to the recurrent nature of payment events for a given claim, as opposed to the one-time event of claim reporting. This recurrent event process (e.g. <cit.>) necessitates an appropriately adapted modeling approach. This section discusses two closely related, yet distinct, approaches to address this estimation.
§.§.§ Counting processes
Recurrent events are closely related to counting processes, where the former focus on event times and the latter on the number of events. In our case, we can consider the number of payments over time to be governed by a stochastically defined point process. Numerous works in insurance have explored modeling such processes in the context of reserving (e.g <cit.>, <cit.>, <cit.>).
Let's define M(t), where t ∈ (0, ω), as the counting process associated with the number of payments for a single claim. This is the counting process associated with the payment times V_j for a given claim. For the sake of readability, we will omit the dependence on covariates in the notation, although it is important to acknowledge that all these quantities depend on them.
Since our objective is to determine probabilities of the form P(V ≤ t), we need to express the desired probability in terms of the process M(t). To do so, we work with the reversed-time version of the counting process, where the new time is defined as Ṽ = ω - V. The reversed-time process can be seen as a mortality process (see, e.g., <cit.>), where the initial number of lives is M(ω) and the lifetime random variable of a newborn follows the same distribution as Ṽ. Then
P(V ≤ t) = P( Ṽ≥ω - t) = E ( P( Ṽ≥ω - t | M(ω) ) ) = E ( E( M(t) | M(ω) )/M(ω)) = E ( M(t)/M(ω))
where the second-to-last equality holds by the traditional life table relationship _tp_0 = l_t/l_0, and the last equality is the tower property of conditional expectations.
Equation <ref> reveals that the desired probability possesses an intuitive expression associated with the evolution of a claim up to settlement. In essence, the right-hand side of Equation (<ref>) represents the expected proportion of payments made by time t out of the total of payments. Another equivalent interpretation of this quantity is as the inverse of a development factor for the number of payments from time t to the ultimate value at time ω. It is worth noting that this expression can be analytically computed only for certain processes. One such example is the widely used Poisson process (and some extensions), as illustrated in Example <ref>.
Suppose that M(t) is a non-homogeneous Poisson process with intensity rate μ(t); then
P(V ≤ t) = E ( M(t)/M(ω)) = E ( E( M(t) | M(ω) )/M(ω)) = E ( M(ω) ∫_0^t μ(s)ds/∫_0^ωμ(s)ds/M(ω)) = ∫_0^t μ(s)ds/∫_0^ωμ(s)ds
where we use the fact that M(t) | M(ω) ∼Binom( n=M(ω), p=∫_0^t μ(s)ds/∫_0^ωμ(s)ds) in the third equality.
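The ratio in the example can be evaluated numerically for any fitted intensity; a minimal sketch, with an arbitrary decaying intensity chosen purely for illustration, is:

```python
import numpy as np

def payment_time_prob(intensity_fn, t, omega, n_grid=2000):
    # P(V <= t) = integral_0^t mu(s) ds / integral_0^omega mu(s) ds,
    # approximated with a midpoint rule.
    ds = omega / n_grid
    s = (np.arange(n_grid) + 0.5) * ds
    mu = intensity_fn(s)
    return np.sum(mu[s <= t]) * ds / (np.sum(mu) * ds)

# Toy usage: payments concentrated shortly after reporting, omega = 24 months.
print(payment_time_prob(lambda s: np.exp(-s / 3.0), t=6.0, omega=24.0))
```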
Along those lines, the actuary must select in a data-driven fashion an appropriate counting process (that incorporates the use of attributes) to model the number of payments per claim, and then proceed to compute the desired probability using Equation (<ref>). Fitting counting processes could be a complex task and can vary depending on the approach used. For a comprehensive discussion on this matter, we refer to <cit.>.
§.§.§ Reversed-time counting process
By reversing the time of the counting process for the number of payments using the transformation Ṽ = ω - V, we can interpret the resulting process as a mortality process, as discussed by <cit.>. This analogy allows us to describe Ṽ using a mortality model, which simplifies the fitting of the counting process. Reversing the time is a well-studied approach in the survival modeling literature <cit.>.
Most mortality models belong to the class of survival models, and so can be embedded into the framework described in Section <ref>. The advantage of working with the reversed time process and mortality models over counting processes is the wider range of options available in terms of statistical modeling, as explored in the literature.
In this case, we assume that the reversed hazard function ( e.g <cit.>) of the time random variable Ṽ = ω - V, denoted as λ̃_V | X(t), is described using a Cox regression-like model that incorporates attribute information
log( λ̃_V | X(t) ) = log(λ̃_0(t)) + ⟨ X, α⟩+ε.
As before, λ̃_0(t) represents a baseline reversed hazard function, ⟨ X, α⟩ represents a regression formula involving the covariates X with parameters α, and ε represents a random effect. This modeling approach is analogous to the one described in Section <ref>, so we refer the reader to it.
As a result, the desired probability can be derived as:
π_i^V(τ) = P(V_i ≤τ-R_i)= P(Ṽ_i ≥ω-(τ-R_i)) = exp( -∫_0^ω-(τ-R_i)λ̃_V | X_i(t) dt)
Similar to the case of the reporting delay time, we face a right truncation problem when considering payments occurring after the valuation date. However, when reversing the time, this issue transforms into a left truncation problem and observations are only available if V_i ≥ω-(τ-R_i). Therefore, it is important to estimate the survival model for the reversed time random variable using an algorithm that allows for the left truncation of the data. Fortunately, modern implementations of survival models often include this capability as discussed in Section <ref>.
§.§ Goodness of fit and other considerations
Since the inclusion probabilities are the sole inputs for the IPW estimator, it is essential to have a well-fitted model for optimal performance. In this section, we discuss the assessment of the model's goodness of fit using pseudo residuals. Additionally, we comment on the possible instability of the resulting IPW estimator and discuss ways of addressing such an issue.
§.§.§ Pseudo-residuals
One approach to assess the accuracy of the fitted distributions is to use uniform pseudo-residuals based on the probability integral transform (<cit.>). These pseudo-residuals are constructed by evaluating the fitted distribution function at the observed values. They are widely used for goodness of fit assessment in various model families (e.g <cit.>). Considering that the observations come from a truncated distribution, the truncated version of the distribution should be taken into account. These pseudo residuals can be expressed as:
r_i^U= F̂_U | X (U_i)/F̂_U | X (τ-T_i) r_i^V= F̂_V | X (V_i)/F̂_V | X (τ-R_i)
The uniform pseudo-residuals should exhibit approximate uniformity if the fitted model adequately represents the data. The uniform pseudo-residuals can be transformed to the normal scale using the quantile function of the standard normal distribution, denoted as Φ^-1:
r̃_i^U= Φ^-1(r_i^U) r̃_i^V= Φ^-1(r_i^V)
These transformed normal pseudo-residuals allow for easier visualization and detection of deviations from the expected distribution than the uniform scale, although the two are equivalent. Note that the Cox-Snell residuals, commonly used in survival analysis, are obtained by employing the quantile function of an exponential distribution instead of the normal distribution; see, e.g., <cit.>.
The normal pseudo-residuals can be utilized to assess the goodness of fit of the model through graphical analysis, such as scatter plots, etc, in the same fashion as with ordinary residuals in linear regression. The focus of the assessment is to determine whether the distribution of these residuals resembles a normal distribution, which can be achieved through QQ and PP plots, or hypothesis testing techniques.
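As a small illustration, the truncation-adjusted residuals and their normal-scale transform can be computed as follows, given the fitted conditional distribution functions evaluated at the observed delays and at the corresponding truncation bounds (both assumed to be available from the fitted models):

```python
import numpy as np
from scipy.stats import norm

def normal_pseudo_residuals(cdf_at_obs, cdf_at_trunc):
    # Uniform pseudo-residuals of the truncated fit, mapped to the normal scale.
    u = np.asarray(cdf_at_obs, dtype=float) / np.asarray(cdf_at_trunc, dtype=float)
    u = np.clip(u, 1e-12, 1.0 - 1e-12)   # guard against 0/1 before the quantile map
    return norm.ppf(u)
```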
§.§.§ Adjustments to the IPW estimate
A key issue that makes the claim reserving estimation process particularly challenging is that the inclusion probabilities can vary significantly, impacting the stability of the estimator. An extreme case is when the estimated inclusion probability of a claim is close to 0, which mostly occurs for recently reported claims; these represent the majority of the claims included in the IBNR reserve. Such circumstances can lead to instability in the estimator, potentially resulting in abnormally high values of the reserve when compared with the experience of previous reserving exercises. This behavior has been widely documented in the survey sampling literature on the HT estimator; see, e.g., <cit.>, <cit.> and references therein.
Trimming the inclusion probabilities is a method proposed to address such extreme values. In this approach, if an inclusion probability is too small, it is replaced with a larger value to remove the instability (<cit.>). For reserving applications, the probabilities can be trimmed by artificially assuming a slightly later valuation date, which increases the inclusion probabilities. Alternatively, data-driven methods, such as Algorithm <ref> proposed by <cit.>, offer a more systematic approach; they showed that, with such an adjustment, the mean squared error of the IPW estimator is less than or equal to that of its unadjusted counterpart.
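As a simple illustration (a fixed-floor rule, not the data-driven algorithm referenced above; the floor value is arbitrary), trimming could be sketched as:

```python
import numpy as np

def trim_probabilities(pi, floor=0.05):
    # Replace inclusion probabilities below a fixed floor to stabilize the weights.
    return np.maximum(np.asarray(pi, dtype=float), floor)
```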
It is important to note that replacing the inclusion probabilities with larger values may introduce a downward bias in the reserve estimation. Therefore, it is recommended to perform any adjustment only if the estimation displays sensitivity to the changes in the inclusion probabilities. Indeed, if the estimation remains nearly unchanged after adjustments, retaining the original estimation would be preferred over the trimmed one.
§ NUMERICAL STUDY WITH REAL DATA
In this section, we showcase the application of the IPW estimator using a real dataset obtained from a European automobile insurance company. The dataset comprises information on Body Injury (BI) claims from January 2009 to December 2012.
In line with our methodology's primary objective of serving as an alternative to traditional macro-level models, while being simpler than fitting a micro-level model, we maintain a simplified approach to emphasize the practicality of the method in real-world applications.
§.§ Description of the data
The dataset contains detailed information about claim settlements, policyholder attributes, and automobile characteristics within the aforementioned time period. This information encompasses factors such as car weight, engine displacement, engine power, fuel type (gasoline or diesel), car age, policyholder age, and region (a total of five regions). Furthermore, the dataset includes details related to the accidents themselves, such as the time of occurrence and the type of accident (type 1 and type 2). Additionally, information regarding the progression of claim payments is available, including reporting time, settlement amounts, and the corresponding occurrence times.
Our statistical modeling study focuses on the evolution of claims, specifically related to reporting delay time and payment times. We illustrate some characteristics of these quantities, such as the distributions in Figure <ref> and summarize key statistics in Table <ref>.
Upon reviewing the information presented in Figure <ref> and Table <ref>, it is evident that the reporting delay tends to be relatively short, with an average duration of less than a month. However, there is notable variability in the tail behavior of this variable. In contrast, the progression of claim payments typically spans a few months on average, but there are instances where settlement times can extend over several years. It is important to note that the distributions of these variables exhibit complexities that are challenging to capture using simple parametric models. Specifically, they display significant temporal fluctuations, indicating that historical data may not adequately represent future events. To account for this, we define the maximum development time for future analysis as ω=24 months, with approximately 99.95% of claims being settled within this timeframe.
§.§ Fitting of survival models
To model the reporting delay time, we employ a Cox regression-type model, as outlined in Section <ref>. Likewise, we adopt the reversed-time approach discussed in Section <ref> to develop a model for claim evolution. As the maximum development time is 24 months, the time window from 2009 to 2010 can only be used for training purposes, while the time window from 2011 to 2012 will be used for testing.
For the sake of illustration, we calculate reserves on a monthly basis, i.e., at several valuation dates, each one month apart. To ensure that the estimation captures the most recent evolution of claims as much as possible, we re-fit the models employing a rolling window approach, where only the last two years of data preceding a valuation date are used for the fitting. This approach aligns with the practices employed by industry professionals in their daily work. We do not consider a time-series model via correlated frailties due to the short time window of the training sets, i.e., only two years. Although there may be some variations in model parameters across different dates, the overall fit behaves similarly across time. We proceed to present the fitting process for the first valuation date in the testing period only.
§.§.§ Reporting delay time
For the reporting delay time, U, we fit a Cox regression model as in Equation (<ref>) using the attributes of the policyholder as covariates:
log( λ_U | X(u) ) = log(λ_0(u)) + β_1 Car-Weight + β_2 Engine-Power + β_3 Fuel-Type + β_4 Age + β_5 Car-Age
+ β_6 Accident-type + β_7 (Claim-Amount) + S_8(Accident-day) + β_9 Region
For the sake of interpretability, we work with the standardized version of the continuous covariates. The baseline hazard function, denoted as λ_0(u), is estimated using a B-Spline representation. Additionally, we incorporate a non-linear effect of the covariate “Accident-day" with the term S_8(Accident-day), which is also estimated using a B-Spline representation. This inclusion of a non-linear effect associated with calendar time provides the model with a dynamic alike structure, allowing it to account for some temporal changes.
To estimate the parameters while accounting for the right truncation of the data, we utilize a generalized additive model implementation via the piece-wise exponential modeling approach, as described by <cit.>, which is available in existing statistical software packages. The results of the estimation are presented in Table <ref> and Figure <ref>.
Table <ref> shows that all the policyholder attributes included in the model are statistically relevant for describing the behaviour of the reporting delay time. The same conclusion applies to the categorical variable “Region", although the detailed results are not presented in the table due to its numerous categories. The right panel of Figure <ref> illustrates the non-linear effect associated with the accident day within a year. Time 0 is the beginning of the year, i.e., January 1, and time 1 indicates the end of the year, i.e., December 31. A quarterly seasonal pattern is apparent: the hazard rate for the reporting delay time is higher in the second and fourth quarters than in the other two quarters. Furthermore, the left panel of Figure <ref> displays the baseline hazard function, indicating a large hazard rate during the initial months, suggesting a concentration of reporting delays within this period. The hazard rate then decreases rapidly but remains nonzero for large delay times, indicating a heavy tail.
To assess the adequacy of the model fit, we employ normal pseudo residuals, as introduced in Section <ref>. Figure <ref> presents QQ and PP plots, comparing these normal pseudo residuals against the theoretical normal distribution. From both plots, it is evident that the normal pseudo residuals exhibit no significant deviations from their expected theoretical counterparts. Hence, there is no evidence suggesting a lack of fit in the fitted distribution function.
§.§.§ Payment delay time
Next, we present the model for claim evolution utilizing the reversed-time counting process estimation approach, in conjunction with a Cox regression model similar to the one previously described. In this case, we consider a maximum settlement time of ω=24 months, as mentioned earlier, and proceed to model the reversed-time random variable Ṽ = ω - V. The Cox regression model for the reversed hazard function is defined as follows:
log( λ̃_V | X(v) ) = log(λ̃_0(v)) + α_1 Car-Weight + α_2 Engine-Power + α_3 Fuel-Type + α_4 Age + α_5 Car-Age
+ α_6 Accident-type + α_7 (Payment-Amount) + S_8(Reporting-day) + α_9 Region + α_10 Reporting-delay-time
Once again, we work with the standardized version of the continuous covariates for interpretability. The baseline reversed hazard function, denoted as λ̃_0(v), is estimated using a B-Spline representation. The non-linear effects of the covariate “Reporting-day" are captured through the term S_8(Reporting-day), also estimated using a B-Spline representation.
This modeling approach mirrors the methodology presented in the previous section, and thus, we refrain from delving into further details. However, we highlight two key distinctions in this regression model. Firstly, we incorporate the reporting day as a non-linear effect aiming to capture time-related effects. Secondly, we include the observed reporting delay time as a covariate. We remark that these probabilities are based on all the information available at the payment time, which encompasses any additional information obtained during the reporting process.
The fitted model is presented in Table <ref> and Figure <ref>. The results exhibit similarities to the previous case, as shown in Table <ref>, where all policyholder attributes included in the model are statistically significant in describing the behavior of the reversed payment time. The right panel of Figure <ref> displays the non-linear effect associated with the reporting day, indicating a quarterly seasonal pattern; however, the pattern is not as distinct as the one observed for the reporting delay time. The left panel of Figure <ref> illustrates the baseline reversed hazard function, which should be interpreted in reverse: in this plot, time 0 and 24 months correspond to time 24 and time 0 months, respectively, in the original scale. It is evident that the hazard function is initially high during the first couple of months (in the original scale), implying a significant number of payments occurring within this period. Subsequently, the hazard rate decreases rapidly, approaching zero, indicating the occurrence of some payments several months after reporting.
To assess the goodness of fit, we employ again the normal pseudo residuals and validate them accordingly. Figure <ref> displays QQ and PP plots, comparing these normal pseudo residuals against the theoretical normal distribution. Similar to the previous analysis, there is no significant deviation observed between the normal pseudo residuals and their expected theoretical counterparts in both plots. Hence, there is no evidence indicating a lack of fit in the fitted distribution function.
§.§ Estimation of the reserve for a single date
Here we show the estimation of the outstanding claims (IBNS), RBNS and IBNR reserves for the first date of the testing period. To ease visualization and comparison, we present the estimation of the reserves in the classical run-off triangle format in Tables <ref> and <ref>; recall, however, that our method does not rely on a given periodicity for its calculation or on the construction of a triangle.
The inclusion probabilities π_i(τ) and π_i^V(τ) for the outstanding (not yet settled) claims are estimated directly from the models of the previous section using Equations (<ref>), (<ref>) and (<ref>). Figure <ref> displays the histogram of these probabilities and Table <ref> displays some summary statistics. Briefly, we observe that the inclusion probabilities vary drastically from one claim to another due to the heterogeneity of the claims. Note that the probabilities tend to be closer to 1 than to 0 due to the low average reporting delay and payment time, and therefore only the most recently reported claims have a small probability.
Tables <ref> and <ref> present cumulative run-off triangles for total outstanding claims and reported but not settled claims, respectively, as of the valuation date. The incurred but not reported claims reserve estimation is derived from the difference between these triangles, which is not shown to avoid redundancy. We completed the lower half of the triangles using the true reserve values, the IPW estimator (using Equation (<ref>)), and the Chain-Ladder method, on a monthly basis, for comparison purposes. To maintain readability and practicality, the table is limited to 13 months, representing approximately 98% of settled claims within this period. To differentiate between the RBNS and IBNR claims components in the Chain-Ladder method, we utilize the double Chain-Ladder method (<cit.>).
Analyzing the lower half of the triangles in Tables <ref> and <ref>, we observe that the IPW estimator provides cumulative payment estimations that exhibit similar trends and magnitudes as the actual cumulative claims. No evident patterns of under or over-estimation are observed. Additionally, the IPW estimator exhibits different behavior to the Chain-Ladder method, indicating the impact of using the individual level information in the estimation.
The overall findings indicate that, at the cell level, the IPW estimator performs similarly to the Chain-Ladder method for the given valuation date.
We do note that the IPW tends to better capture the changes in the reserve on the most recent dates as a result of accounting for the composition of the portfolio in the estimation.
Table <ref> provides error metrics of the estimation at the cell level. The overall findings indicate that, for the given valuation date, the IPW estimator generally offers a superior approximation of the reserves compared to the traditional Chain-Ladder method. However, the IPW estimator does not consistently outperform the Chain-Ladder method in all cells of the triangle. We emphasize that, even though the IPW estimator can provide estimations at the cell level, it may not possess the same precision as the estimation of the reserve as a whole. This limitation arises from the reliance on population sampling, which necessitates a large and representative sample for accurate estimation. Consequently, the more granular the desired estimation (i.e., the smaller the subpopulation of interest), the lower the level of accuracy.
Along those lines, instead of focusing on cell-level comparisons, our emphasis lies now on the aggregation of cells to determine the actual reserve value, which is the ultimate objective of estimation. It is noteworthy that the IPW estimator directly provides an estimation of the total reserves using Equations (<ref>), (<ref>) and (<ref>), eliminating the need for constructing the run-off triangle in comparison to the Chain-Ladder method. Table <ref> presents the aggregated reserve values obtained by summing the ultimate values for each accident date, along with the corresponding estimation errors. Our findings reveal that the IPW estimator yields reserve values that closely align with their true counterparts for all reserve types, exhibiting significantly lower estimation errors compared to the Chain-Ladder method.
Furthermore, to evaluate the predictive quality of these estimates from a probabilistic standpoint, Figure <ref> illustrates the predictive distribution of the reserves based on the sampling distribution of the IPW estimators, juxtaposed with the actual observed values. Notably, we observe that the true values consistently fall within the central region of the distribution, closely aligning with the corresponding modes, which represent the predicted reserve values. Consequently, the IPW-based predictions exhibit consistency with the observed reality.
§.§ Estimation of the reserve for several dates
Here we present the estimation of reserves for all 24 months in the testing period. Figure <ref> illustrates the estimations for the outstanding claims compared to the true value of the reserve at the corresponding month. Additionally, Figures <ref> and <ref> depict the estimation for RBNS and IBNR, respectively. To provide a comprehensive analysis, we include 95% confidence intervals for the estimations and include Chain-Ladder (CL) method estimates for comparison. Furthermore, Table <ref> presents error metrics to assess the disparities between the estimations across all dates. We note that this kind of temporal analysis is often overlooked in the claim-reserving literature due to the inherent difficulty of obtaining consistent estimations over time. Our aim is not to showcase the complexity of the analysis but rather to illustrate the behavior of the IPW method in distinct scenarios.
With respect to the total reserve, Figure <ref> demonstrates that the IPW estimator produces predictions that closely align with the true value of the reserve for the majority of the observed periods, exhibiting no discernible pattern of under or overestimations. Additionally, the actual reserve value consistently falls within the associated confidence intervals, indicating a consistent fit with the predicted value. Notably, the IPW prediction proves to be more accurate than the traditional Chain-Ladder method during the considered period. This observation is further supported by the results in Table <ref>, where the error metrics for the IPW over the 24-month period outperform those of the Chain-Ladder method. Therefore, the IPW along with the use of individual information has more predictive power than the macro reserving method.
With respect to the RBNS, we observe in Figure <ref> that the IPW estimator provides accurate predictions for the majority of the first year within the time window and for a portion of the second half of the second year. However, during the intermediate period (8th month to 17th month), the IPW underestimates the reserve, although some data points in this range still fall within the confidence band. It is worth noting that this period exhibits relatively higher reserve levels compared to the rest of the considered time window, which may be attributed to management-related actions of the insurance company that lead to larger reserves. In such cases, the IPW estimation takes longer to capture these changes, as the distribution estimation relies on the preceding two years of data. Consequently, it takes several months for the most recent data to have a significant impact on the estimation. On the other hand, the Chain-Ladder method appears to be more adept at capturing this particular change. However, outside of this specific period, the Chain-Ladder method demonstrates considerable underperformance. Despite this behavior, the IPW consistently outperforms the Chain-Ladder method on average throughout the entire period, as evidenced by the lower error metrics in Table <ref>.
Finally, regarding the IBNR reserve, we observe in Figure <ref> that the IPW estimator provides a reasonable estimation for almost the entire considered period, fluctuating around the true reserve. It is worth noting that the IPW estimator exhibits a more variable behavior compared to previous scenarios. This variability is expected because the IBNR, in our case, represents a smaller proportion of the subpopulation due to the relatively low reporting delay time. Generally, as the size of the subpopulation decreases, the estimation variance of the IPW increases. Despite this variability, the IPW estimator consistently outperforms the traditional Chain-Ladder method over the entire 24-month period, as indicated in Table <ref>. It is interesting to note that the IPW tends to follow the same trends observed in the true value of the reserve i.e. it has congruent patterns of fluctuations and variations in a comparable manner with the true value. We would like to note the fact that the Chain-Ladder estimation is stable throughout the period, while the IPW estimation is not. We attribute this behavior to the fact that the Chain-Ladder assumes a homogeneous portfolio while the IPW does not.
We encountered instability in the behavior of the IPW estimator (i.e., abnormally large reserves) when performing the estimation on some dates, along the same lines as the behavior described in Section <ref>. To address this issue, we implemented the adjusted version of the IPW estimator, as described in Algorithm <ref>, and compared it to the raw IPW estimator. If the percentage difference between the two estimations exceeded a certain threshold (e.g., more than 3%), we retained the adjusted estimation. For cases where the difference was not significant, we kept the original estimation. As a result, the estimations become more stable across dates.
Note that one may work directly with the adjusted estimation, but it may introduce a systematic downward bias. Therefore, in practical applications, it is advisable to apply the adjustment only when the actuary considers it necessary.
§ CONCLUSIONS
Macro-level reserving models, particularly the Chain-Ladder method, overlook the underlying heterogeneity within the portfolio of policyholders, treating all claims equally and often providing only coarse estimations. As a result, the estimation of the reserve does not benefit from the individual attributes of the policyholders, which the micro-level reserving literature has shown to yield significant improvements in accuracy.
In this paper, we address the limitation of macro-level reserving models by proposing a statistically justified macro-level reserve estimator based on Inverse Probability Weighting (IPW). Unlike traditional macro-level models, our method incorporates individual-level information in the weights to improve the accuracy of reserve estimation. Moreover, such incorporation is achieved within a less complex framework compared to micro-level models, in the sense that no explicit assumptions on claim frequency or severity are made.
The IPW estimator serves as a hybrid approach that bridges the gap between macro and micro-level methods. It assigns attribute-driven weights to each claim, allowing for a development factor specific to each claim's settlement, similar to the familiar principles of the Chain-Ladder method when applied at the granular level. This method represents an initial step towards obtaining more precise reserves from macro-level models and serves as an intermediate stage in the development of a customized micro-level reserving model.
We believe that the IPW estimator offers a possibly seamless transition from macro to micro-level reserving for insurance companies. We hope practitioners find this method appealing as it is a natural extension of the traditional Chain-Ladder method, accounting for portfolio heterogeneity in a statistically justified fashion.
Future research should explore alternative approaches for estimation, potentially through the development of tailored models specifically designed for inclusion probabilities as in the development factor in Equation (<ref>), which is simpler to interpret. Additionally, investigating the connection between claim reserving and population sampling techniques holds promise for further advancements in estimating reserves. We are currently engaged in related research, <cit.>, delving deeper into the implications of survey sampling theory on the claim reserving problem.
§ ACKNOWLEDGMENTS
This work was partly supported by Natural Sciences and Engineering Research Council of Canada [RGPIN 284246, RGPIN-2017-06684]. Sebastián acknowledges the Mountain Pygmy Possum, an endangered species in Australia, for inspiring research on population sampling and also this project.
|
http://arxiv.org/abs/2307.03074v1
|
20230706154205
|
Consistent Causal Inference for High-Dimensional Time Series
|
[
"Francesco Cordoni",
"Alessio Sancetta"
] |
stat.ME
|
[
"stat.ME",
"math.ST",
"stat.TH"
] |
Consistent Causal Inference for High-Dimensional Time Series

Francesco Cordoni (Department of Economics, Royal Holloway University of London, Egham TW20 0EX, UK; email: [email protected]) and Alessio Sancetta (corresponding author; Department of Economics, Royal Holloway University of London, Egham TW20 0EX, UK; email: [email protected])

We are grateful to the Editor Serena Ng and the Referees for comments that have led to corrections and improvements in content and presentation. We are also grateful to Yanqin Fan for having shared the latest version of Fan et al. (2022) and useful discussions. We thank the participants at the Model Evaluation and Causal Search workshop at the University of Pisa, the Lancaster Financial Econometrics Conference in honour of Stephen Taylor, and the 2023 SoFiE Conference at Sungkyunkwan University. The first author acknowledges financial support from MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017. Both authors acknowledge financial support from the Leverhulme Trust Grant Award RPG-2021-359.
A methodology for high dimensional causal inference in a time series
context is introduced. It is assumed that there is a monotonic transformation
of the data such that the dynamics of the transformed variables are
described by a Gaussian vector autoregressive process. This is tantamount
to assuming that the dynamics are captured by a Gaussian copula. No
knowledge or estimation of the marginal distribution of the data is
required. The procedure consistently identifies the parameters that
describe the dynamics of the process and the conditional causal relations
among the possibly high dimensional variables under sparsity conditions.
The methodology allows us to identify such causal relations in the
form of a directed acyclic graph. As illustrative applications we
consider the impact of supply side oil shocks on the economy, and
the causal relations between aggregated variables constructed from
the limit order book on four stock constituents of the S&P500.
Key Words: high dimensional model, identification, nonlinear
model, structural model, vector autoregressive process.
JEL Codes: C14, G10.
§ INTRODUCTION
Identifying and estimating causal relations is a problem that has
received much interest in economics. In the last two decades the statistical
and machine learning literature has made a number of advances on the
front of identification and estimation within the framework of causal
graphs (Comon, 1994, Hyvärinen and Oja, 2000, Pearl, 2000, Spirtes
et al., 2000, Hyvärinen et al., 2001, Shimizu et al., 2006, Meinshausen
and Bühlmann, 2006, Kalisch and Bühlmann, 2007, Cai et al.,
2011, Bühlmann et al., 2014, Peters et al., 2014), where the
data generating process can be characterized as a system of structural
equations. This complex system of causal relations can be represented through a causal graph, which conveys essential topological information for estimating causal effects.
However, the true data generating process is often latent to researchers, who can only rely on finite-sample observations to infer the causal structure and mechanism of the true system. A
causal model entails a probabilistic model from which a researcher
can learn from observations and outcomes about changes and interventions
of the system variables (Pearl, 2000, Peters et al., 2014). Thus,
causality can be formally defined using the do-notation of Pearl (2000)
in terms of intervention distributions. This definition of causality
is quite different from the well known concept of Granger causality.
However, causal relations in economics and finance require accounting
for time series dependence.
In this paper we develop a methodology to extract the causal relations
of time series data, conditioning on the past in a flexible way. We
assume that there is a monotone transformation of the data that maps
the original variables into a Gaussian vector autoregressive (VAR)
model (see also Fan et al., 2022). There are a number of advantages
to this approach. First, we are able to retain the interpretability
of VAR models building on the rich econometrics literature on structural
VAR models. Second, we do not need any assumptions on the marginal
distribution of the data. This means that the procedure is robust
to fat tails, as we do not make any assumption on the existence of
any moments. For instance, given that the existence of a second moment
for financial data has been a much debated topic in the past (Mandelbrot,
1963, Clarke, 1973, for some of the earliest references), dispensing altogether with this unverifiable condition should be welcomed. Third,
we can model variables that take values in some subset of the real
line, for example variables that only take positive values or are
truncated. This is not possible using a standard VAR model.
The estimation of the contemporaneous causal structure of a time series
is equivalent to solving the identification problem of a structural
VAR model. The latter can be achieved by finding a unique Choleski
type decomposition of the covariance matrix of the VAR innovations
(Rigobon, 2003, Moneta et al., 2013, Gouriéroux et al., 2017,
Lanne et al., 2017). Recent advances in the identification problem
under general conditions and linearity exploit the use of internal
and external instruments and the method of local projections (Stock
and Watson, 2018, Plagborg-Møller and Wolf, 2021). However, the
time series dynamics of economic and financial data may not be captured
well by a linear VAR model when the data is not Gaussian. For example,
some variables may only be positive. The problem of estimation is
exacerbated if the data have fat tails. This may distort the estimates.
Such problems reflect negatively on the estimation of causal relations
for time series data. Furthermore, due to the curse of dimensionality,
SVAR analysis is only feasible in a low-dimensional context. Restricting
the VAR model only to a few variables may lead to unreasonable adverse
effects such as `price-puzzles' in impulse responses (Sims, 1992,
Christiano et al., 1999, Hanson, 2004). Moreover, models of the global
economy tend to be high dimensional. To avoid the curse of dimensionality,
factor augmented VAR (FAVAR) models (Bernanke et al., 2005) and dynamic factor models (Forni et al., 2000, Forni et al., 2009) are often employed.
However, the interpretation of the causal relations with factor models
is not always straightforward. Alongside these approaches, we also mention
the GVAR methodology, originally proposed by Pesaran et al. (2004),
where country specific VAR models are stacked together in a way that
maintains ease of interpretation at the cost of some assumptions.
Our methodology does not require the machinery of factor models or
assumptions on how to join lower dimensional models into a higher
dimensional one. However, this is achievable at the cost of certain
restrictions. We envisage that our methodology could work in conjunction
with the existing ones to shed further light on structural relations
in high dimensional VAR models. We also point out that high dimensional
VAR models may even arise in practice as a result of a large number
of lags.
This paper builds on a number of previous contributions and develops
a methodology to address the aforementioned problems. Our approach
is tantamount to the assumption that the cross-sectional and transition
distribution of the variables can be represented using a Gaussian
copula. The procedure builds on the work of Liu et al. (2012) and
does not require us to estimate any transformation of the variables
or the marginal distribution of the data, as commonly done when estimating
a copula. In fact, our procedure bypasses the estimation of the innovations
of the model altogether. Our methodology is built for high dimensional
time series, as commonly found in some economics and financial applications.
What we require is some form of sparsity in the partial dependence
of the data. This is different from assuming that the covariance matrix
of innovations or the matrix of autoregressive coefficients is sparse. Either of those restrictions can be strong. We shall make this clear in the text when we discuss our assumptions. Finally, even when not all causal relations are identified, we are able to identify the largest possible number of causal relations. This statement is formalized by the concept of a complete partially directed acyclic graph obtained via the PC algorithm (Spirtes
et al., 2000, Kalisch and Bühlmann, 2007). These concepts are
reviewed in the main body of the paper (Section <ref>).
We conclude this introduction with a few remarks whose aim is to put
the goals of this paper into a wider perspective. The process of scientific
discovery is usually based on 1. the observation of reality, 2. the
formulation of a theory, and 3. tests of that theory. The plethora
of data available allows the researcher to observe different aspects
of reality that might have been precluded in the past. High dimensional
estimation methods are particularly suited to explore the present
data-centric reality. However, the next step forward requires formulation
of a theory or hypothesis. Such theory needs to be able to explain
rather than predict in order to enhance our understanding. This very
process requires the identification of a relatively small number of
explanatory causes for the phenomenon that we are trying to understand.
The problem's solution, in a complex and rather random environment,
should then be a simple approximation. This approximation can then
be tested in a variety of situations in order to verify its applicability.
The program of this paper is to follow this process of scientific
discovery. We start from possibly high dimensional dynamic datasets.
We aim to provide a reduced set of possible contemporaneous causes
conditioning on the past.
§.§ Relation to Other Work
One of the main empirical econometric tools for the study of policy
intervention effects is the VAR approach (Sims, 1980, Kilian and Lütkepohl,
2017). In the first step, the so called reduced form model is estimated.
Then, the structural counterpart needs to be recovered. This gives
rise to an identification problem, which is equivalent to finding
the contemporaneous causal relations among the variables.
Traditionally, the identification of Structural Vector Autoregressive
(SVAR) models was achieved by imposing model restrictions. Such restrictions
can be derived from an underlying economic model, such as short and
long-run restrictions on the shocks impact (Bernanke, 1986, Blanchard
and Quah, 1989, Faust and Leeper, 1997), or imposing sign restrictions
on impulse response functions (Uhlig, 2005, Chari et al. 2008).
The success of the VAR approach is its reliance on data characteristics,
thus allowing the validation of economic models under reasonably weak
assumptions. However, the standard restrictions necessary for identification undermine the data-driven nature of SVAR analysis. Recently, researchers
have explored alternative methods to achieve identification in SVAR
models by exploiting different statistical features of the data. For
instance, identification can be obtained by relying on heteroskedasticity
(Sentana and Fiorentini, 2001, Rigobon, 2003, Lütkepohl and Netšunajev,
2017) or non-Gaussianity of the residuals (Moneta et al., 2013, Gouriéroux
et al., 2017, Lanne et al., 2017). Another popular identification method, which does not exploit specific statistical properties of the data distribution like those just mentioned, is the instrumental variables approach (Mertens and Ravn,
2013, Stock and Watson, 2018, Plagborg-Møller and Wolf, 2021).
Our method is related to approaches that rely on the graphical causal
model literature (Swanson and Granger, 1997, Demiralp and Hoover,
2003, Moneta, 2008), where identification can be achieved by exploiting
the set of conditional and unconditional independence relations in
the data. Our work is also related to the statistical and machine
learning literature for the identification of causal graph structures
in a high dimensional setting (Meinshausen and Bühlmann, 2006,
Kalisch and Bühlmann, 2007, Liu et al., 2009, Zhou et al., 2011,
Bühlmann et al., 2014, Harris and Drton, 2013). In particular
the latter reference combines the use of rank correlations with the
PC algorithm, as we do in the present paper. However, none of these
approaches accounts for contemporaneous causal inference conditioning
on the past, as required for time series problems.
To account for the time series dependence, we employ a modelling assumption
that can be viewed as a Gaussian copula VAR model, a definition that
will be made clear in the text. We recently discovered that Fan et
al. (2022) have used the same time series assumption for the analysis
of high dimensional Granger causality. The present paper is concerned
with conditional causal relations and identification of the Gaussian
copula VAR. Moreover, some basic assumptions are also different. For
example, Fan et al. (2022) assume that the autoregressive matrix of
the Gaussian copula VAR is sparse. We instead assume that the inverse
of the scaling matrix of the Gaussian copula that leads to a VAR representation
is sparse. This is a very different assumption. Hence, the contributions
are related, but complementary.
§.§ Outline of the Paper
The plan for the paper is as follows. In Section <ref>,
we introduce the model and briefly discuss its statistical properties.
In Section <ref> we discuss identification
of the model and the causal relations. In Section <ref>
we describe algorithms to find estimators for the population quantities,
including the complete partially directed acyclic graph. In Section <ref>
we state conditions and results for the consistency of the quantities
derived from the algorithms. Section <ref> provides
two empirical illustrations. First, we investigate the identification
of the effect of supply side shocks on economic activity. Then, we
analyze the causal relations of order book variables in electronic
trading. Section <ref> concludes. Additional explanatory
material can be found in the Appendix. There, we provide more details
on the model and its identification under possibly mixed data types.
We also discuss calculation of impulse response functions for our
nonlinear model. All the proofs and other additional details can be
found in the Electronic Supplement to this paper. There we also present
the main conclusions from a simulation study as evidence of the finite
sample properties of our methodology (Section <ref>
in the Electronic Supplement).
Software.
The algorithms presented in this paper are implemented in the R scripting
language. The code is available from the URL <https://github.com/asancetta/CausalTimeSeries>.
Most of the code is based on existing R packages and also includes
a cross-validation procedure to choose tuning parameters.
§ THE MODEL
Let X:=(X_t)_t∈ℤ be a sequence of stationary
random variables taking values in ℝ^K or some subset
of it. For each k=1,2,...,K, we suppose that there is a monotone
function f_k such that Z_t,k=f_k(X_t,k) is
a standard Gaussian random variable and Z_t=(Z_t,1,Z_t,2,...,Z_t,K)' satisfies
Z_t=AZ_t-1+ε_t
where A has singular values in (0,1) and (ε_t)_t∈ℤ
is a sequence of independent identically distributed random variables
with values in ℝ^K and covariance matrix Σ_ε.
Throughout, the prime symbol ' denotes transposition. All vectors
in the paper are arranged as column vectors. We do not require knowledge
of the functions f_k. We also note that there is always a monotone
transformation that maps any univariate random variable into a standard
Gaussian. We provide details about this in Section <ref>
in the Appendix. Here, the main assumption is that such transformed
variables satisfy the VAR dynamics in (<ref>). Under
stationarity assumptions, all the information of the model can be
obtained from the covariance matrix of the 2K-dimensional vector
(Z_t',Z_t-1')', which we denote by Σ. We
can then partition Σ as
Σ=([ Σ_11 Σ_12; Σ_21 Σ_22 ])=([ Γ AΓ; Γ A' Γ ])
with obvious notation, once we note that A is as in (<ref>)
and Γ:=𝔼Z_tZ_t'. Clearly, Σ_ε:=Γ-AΓ A'
(recall Σ_ε:=𝔼ε_tε_t').
The above setup can be recast into a formal probabilistic framework
using the copula function to model Markov processes (Darsow et al.,
1992). The copula transition density would be the ratio of two Gaussian
copulae: one with scaling matrix Σ and one with scaling matrix
Γ. Given that we shall not use this in the rest of the paper,
we omit the details. However, given this fact, for short, we refer
to our model as a Gaussian copula VAR. We note that when X_t
has an invariant distribution with marginals that are continuous,
the functions f_k are necessarily determined by the unconditional distribution of X_t,k, by Sklar's Theorem (Joe, 1997).
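For concreteness, the following is a minimal Python sketch of this data generating process (the parameter values and the exponential marginals are illustrative choices of ours, not tied to the empirical applications): the latent Gaussian VAR(1) is simulated, each coordinate is mapped to a positive-valued observable through a monotone transform, and the latent series is recovered exactly by inverting that transform, without any knowledge of A or Σ_ε.

import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
K, n = 2, 5000
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])                 # singular values inside (0, 1)
Gamma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])             # stationary correlation of Z_t
Sigma_eps = Gamma - A @ Gamma @ A.T        # so each Z_{t,k} is standard Gaussian

# simulate the latent Gaussian VAR(1): Z_t = A Z_{t-1} + eps_t
L = np.linalg.cholesky(Sigma_eps)
Z = np.zeros((n, K))
Z[0] = np.linalg.cholesky(Gamma) @ rng.standard_normal(K)
for t in range(1, n):
    Z[t] = A @ Z[t - 1] + L @ rng.standard_normal(K)

# monotone map to positive observables: X_{t,k} = F_k^{-1}(Phi(Z_{t,k}))
scales = np.array([1.0, 2.0])
X = expon.ppf(norm.cdf(Z), scale=scales)

# the map is invertible, so Z is recovered without knowing A or Sigma_eps
Z_back = norm.ppf(expon.cdf(X, scale=scales))
print(np.allclose(Z, Z_back))              # True up to floating point error

Any strictly increasing marginal transform could replace the exponential one; only the latent VAR structure matters for what follows.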
We consider a high dimensional framework, where K can go to infinity
with the sample size. Formally, this would require us to consider
a family of models (<ref>) indexed by the sample size
n to allow for increasing dimension K (Han and Wu, 2019, for
more details). We do not make explicit this in the notation. Next,
we summarise the main properties of the model under the possibility
that K→∞.
Define Z_t,k=f_k(X_t,k)
for some increasing monotonic transformation f_k:ℝ→ℝ
, k=1,2,...,K, such that (Z_t)_t∈ℤ
follows a Gaussian VAR as described in (<ref>). Furthermore,
suppose that the singular values of A are in a compact interval
inside (0,1) and the eigenvalues of Σ_ε
are in a compact interval inside (0,∞), uniformly
in K. Then, (X_t)_t∈ℤ is a stationary
Markov chain with strong mixing coefficients that decay exponentially
fast, uniformly in K even for K→∞.
Recall that the singular values of a matrix A are the square root
of the eigenvalues of A'A. Hence, the condition means that A
is full rank with eigenvalues inside the unit circle. We note that
for fixed K the model is not only strong mixing, but also absolutely
regular (beta mixing), with exponentially decaying coefficients (Doukhan,
1995, Theorem 5, p.97). However, when K is allowed to increase,
this is not the case anymore (Han and Wu, 2019, Theorem 3.2). Nevertheless,
allowing for increasing dimension K, it is still strong mixing
with exponentially decaying coefficients.
The extension of (<ref>) to a VAR(p), for fixed
finite p, has been considered by Fan et al. (2022, Appendix B).
The process remains geometrically strong mixing if the singular values
of the autoregressive matrices are all in a compact interval inside
(0,1). For simplicity, we shall restrict attention to
the VAR(1) case. The methodological implementation for a higher order
VAR is simple, but we will still provide some remarks on this as it
is relevant to the high dimensional framework.
§ IDENTIFICATION
In the next section, we briefly review causal graph terminology. While
these concepts are not widely used in econometrics, they do simplify
some discussion when stating assumptions and contemporaneous relations
(Section <ref> for an empirical illustration
to oil price shocks). In Section <ref>
we show how these concepts relate to the more familiar language and
setup of structural vector autoregressive models.
§.§ Preliminary Concepts
A graph G=(𝒱,ℰ) consists of a set
of vertices 𝒱={ 1,2,...,p}, where p
is the number of vertices, and edges ℰ⊆𝒱×𝒱.
The edges are a set of ordered pairs of distinct vertices. The edges
are directed if the order matters, (k,l)∈ℰ
but (l,k)∉ℰ, otherwise it is undirected.
Arrows are commonly used to define the direction when there is one.
In our context, 𝒱 is the set of indices of W_t=(X_t',X_t-1')'
, i.e. p=2K, while ℰ contains the direction in the
causal relations if any. For example, we know that we cannot have
X_t,i→ X_t-1,i while the other way around is possible
if X_t-1,i Granger causes X_t,i. In the language of graphs
we say that X_t-1,i is a parent of X_t,i. In this paper
we focus on the causal relations of X_t conditioning on X_t-1.
This is different from Granger causality. Given that the statistical
relations of the elements in X_t conditioning on X_t-1 are
defined by ε_t, we focus on finding the set of parents
of each ε_t,i. For example, ε_t,1 is
a parent of ε_t,2 if ε_t,1 causes ε_t,2
and not the other way around. We write ε_t,1→ε_t,2.
When the variables ε_t,k are jointly Gaussian, it is
well known that conditional independence is not enough to identify
the direction of the relation (Moneta et al., 2013, Peters et al.,
2014).
In the case when all causal relations are identified with no cycles,
the causal graph is a directed acyclic graph (DAG): all edges are
directed and there are no cycles. There are no cycles if no descendant
can be a parent of their ancestor. When the direction cannot be fully
identified, we shall be content with obtaining some undirected edges. It is
possible that no directed edge can be identified. The graph where
we do not consider the directions is called the skeleton. When we
use observational data, we work with their distribution, possibly
under model assumptions as in (<ref>). We say that
the distribution of the data is faithful to the graph if the set of
all (possibly conditional) independence relations of the distribution
of the data and the graph coincide. The (possibly conditional) independence
relations of the graph are defined as the pairs of vertices with no edge between them. Such relations only require identifying the skeleton. Unfortunately, a given distribution of the data can be compatible with several different DAGs. In the case of a VAR this is equivalent to saying that the structural VAR cannot be identified. This means that
we cannot draw arrows for all edges. Hence, we may need to content
ourselves with a complete partially directed acyclic graph (CPDAG),
which is a graph where some edges are undirected because they cannot
be identified. In summary, in the more familiar language of econometrics,
identification of the DAG of the K-dimensional innovations ε_t
means that the system of simultaneous equations for ε_t
is recursive. This is equivalent to finding a permutation of the variables
such that the covariance matrix of the permuted innovations is the
product of a lower triangular matrix times its transpose (Lemma <ref>
in Section <ref>). We shall
use a sample based version of the PC algorithm (Kalisch and Bühlmann,
2007) to identify the CPDAG under the assumption that the underlying
causal structure is recursive. For high dimensional time series data,
we require special tools as devised in the present paper.
§.§.§ Remarks on the PC Algorithm
A full description of the PC algorithm can be found in (Spirtes et
al., 2000). Here, we provide a short overview assuming knowledge of
Σ_ε. The PC algorithm identifies as many causal
relations as possible and its output is a CPDAG. In the present case,
it exploits the assumption that the system of simultaneous equations
of the innovations is recursive (i.e. the causal graph is a DAG).
It then proceeds into two steps. The first step exploits the set of
all conditional independence relations in the data as follows. It
identifies the so called moral graph, which is the set of all edges
implied by the nonzero entries in Θ_11:=Σ_ε^-1.
Note that the (i,j) entry in Θ_11 is zero
if and only if ε_t,i and ε_t,j are independent
when conditioning on all other variables (Proposition 5.2 in Lauritzen,
1996). Using the zero entries in Σ_ε, it removes
all those edges in the moral graph that correspond to variables that
are unconditionally independent, i.e. independent when conditioning
on the empty set. This produces the skeleton. It then uses a set of
logical rules to direct as many arrows as possible.
We give a straightforward example of identification strategy used
by the PC algorithm. Suppose that we only have a set of three variables
{ε_t,1,ε_t,2,ε_t,3}.
Suppose that any pair of variables from this set is dependent when
conditioning on the third one. According to the aforementioned remarks
on Θ_11, we have that this matrix has no zero entries. However,
suppose that when we condition on the empty set, ε_t,1
and ε_t,3 are independent. This means that these two
variables are unconditionally independent. This is tantamount to saying
that (1,3) and (3,1) entries in Σ_ε=Θ_11^-1
are zero. In this case, we must have that ε_t,1 and
ε_t,3 are related to each other only through a common
effect ε_t,2. The PC algorithm would then produce the
following DAG ε_t,1→ε_t,2←ε_t,3.
This conclusion does not assume that the underlying causal structure
be representable by a DAG. Other logical rules used by the PC algorithm
assume that the causal relations between the variables be representable
by a DAG (Algorithm 2 in Kalisch and Bühlmann, 2007, for the
full list of rules).
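The logic of this example is easy to verify numerically. The short sketch below (an illustration of ours, with arbitrary coefficient values) builds the recursive system ε_t,1→ε_t,2←ε_t,3 and checks that Σ_ε has zero (1,3) and (3,1) entries while Θ_11=Σ_ε^-1 has no zero entries, which is precisely the pattern that lets the PC algorithm orient the collider.

import numpy as np

# structural (recursive) system: eps1 = xi1, eps3 = xi3,
# eps2 = a*eps1 + b*eps3 + xi2, with independent shocks xi
a, b = 0.7, -0.5
s1, s2, s3 = 1.0, 1.0, 1.0                  # variances of xi1, xi2, xi3

Sigma_eps = np.array([
    [s1,     a * s1,                      0.0   ],
    [a * s1, a**2 * s1 + b**2 * s3 + s2,  b * s3],
    [0.0,    b * s3,                      s3    ],
])

Theta11 = np.linalg.inv(Sigma_eps)
print(np.round(Sigma_eps, 3))   # (1,3) entry is zero: eps1 and eps3 independent
print(np.round(Theta11, 3))     # no zero entries: dependent given eps2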
In the next section, we relate these concepts to SVAR identification
and existing methods based on instruments. We do so to show how our
methodology adds to the arsenal of already existing methods.
§.§ Identification of the Gaussian Copula VAR
We conclude with two results that show the identification strategy
in our methodology. We define the precision matrix Θ=Σ^-1.
As we did for Σ in (<ref>), we partition
it with the same dimensions as in (<ref>):
Θ=([ Θ_11 Θ_12; Θ_21 Θ_22 ]).
The parameters in (<ref>) are identified from the
precision matrix (<ref>). The following
is a consequence of the classical result on graphical Gaussian models
(Lauritzen, 1996, eq. C3 and C4).
Suppose that the
conditions of Proposition <ref> hold. Then,
A= -Θ_11^-1Θ_12 and Σ_ε=Θ_11^-1.
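The identities in the lemma are easily checked numerically. In the sketch below (A and Σ_ε are arbitrary illustrative values of ours), Γ is obtained from the discrete Lyapunov equation Γ=AΓA'+Σ_ε, Σ is assembled as in (<ref>), and the two blocks of Θ=Σ^-1 return A and Σ_ε.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
Sigma_eps = np.array([[1.0, 0.3],
                      [0.3, 1.0]])

# stationary covariance of Z_t: Gamma = A Gamma A' + Sigma_eps
Gamma = solve_discrete_lyapunov(A, Sigma_eps)

# covariance of the stacked vector (Z_t', Z_{t-1}')'
Sigma = np.block([[Gamma,        A @ Gamma],
                  [Gamma @ A.T,  Gamma    ]])
Theta = np.linalg.inv(Sigma)
K = A.shape[0]
Theta11, Theta12 = Theta[:K, :K], Theta[:K, K:]

print(np.allclose(-np.linalg.inv(Theta11) @ Theta12, A))    # True
print(np.allclose(np.linalg.inv(Theta11), Sigma_eps))       # True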
When the DAG is identified, we can identify the SVAR. In the more
common language used in econometrics, this is the same as saying that
the structural equation system of the innovations is recursive, as
it will be formally defined in (<ref>).
To this end, we introduce some notation. Let Π be a K× K
matrix that can be transformed into the identity by simple permutation
of its rows. We call Π a permutation matrix as it permutes the
rows of the conformable matrix that it premultiplies. We have the
following result for identification of the SVAR.
Suppose that the conditions
of Proposition <ref> hold and that the causal
graph for ε_t in (<ref>) is a DAG.
Then, we can find a permutation matrix Π such that
Π Z_t=DΠ Z_t+(I-D)Π AZ_t-1+ξ_t
where D is lower triangular with diagonal elements equal to zero,
and ξ_t is a vector of independent Gaussian random variables
such that 𝔼ξ_tξ_t' is a diagonal full rank matrix.
In particular, the innovation in (<ref>) satisfies Πε_t=Hξ_t
where H:=(I-D)^-1 is a full rank lower triangular
matrix with diagonal elements equal to one. Furthermore, the process
admits the infinite moving average representation
Z_t=∑_s=0^∞Υ_sξ_t-s, where Υ_s=A^sΠ'H.
From the causal DAG we can derive the permutation matrix Π, where
each row describes the recursive order of the nonzero entry in such
row. The ordering is often nonunique. In what follows, we shall always
refer to the Π matrix as the one that is obtained from the least
number of row permutations of the identity matrix. In this case Π
is unique. Hence, estimation of the DAG is equivalent to estimation
of the permutation matrix Π. From Lemma <ref>
we deduce that
Πε_t=DΠε_t+ξ_t
where the above is a structural equation system for the innovations
ε_t. The ε_t variables on the right
hand side are the cause of the left hand side variables.
From the structural model in (<ref>) it is clear that the
shock specific to Z_t,l is the l^th entry in Π'ξ_t,
using the fact that Π'=Π^-1. By this remark and (<ref>),
the impact on Z_t+s,k of intervening on Z_t,l (via the l^th
entry in Π'ξ_t) is computed as Υ_sΠ e_l where
e_l is the K×1 vector of zeros, but for the l^th
entry, which is one. Given that the structural shock ξ_t has
diagonal matrix with possibly different diagonal elements, we may
use Υ_sΣ_ξ^1/2Π e_l in place of Υ_sΠ e_l,
where Σ_ξ:=𝔼ξ_tξ_t'. It is clear that
the representation in (<ref>) in terms of the shocks
ξ_t-s is not sufficient to carrying out causal inference in
the sense of the structural equation system (<ref>).
Knowledge of the permutation matrix Π is necessary. Working with
observational data, we start from a reduced form model (<ref>)
and obtain (<ref>) when identification is possible. In turn,
identification is only possible if Π can be identified.
When interest lies on the impulse response functions, we need to account
for nonlinearity. The model in (<ref>) is linear only
after applying a transformation to each variable. Koop et al. (1996)
address such problem focusing on generalized impulse response functions
for reduced form models (Kilian and Lütkepohl, 2017, Ch.18 for
a discussion on structural models). An explicit discussion on the
calculation within our framework can be found in Section <ref>
of the Appendix. However, by linearization, the impulse response function
is approximately equal to a constant multiple of Υ_sΠ
(Lemma <ref> in the Appendix, and discussion
therein).
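As an illustration of the linearised calculation on the latent scale, the sketch below evaluates Υ_sΣ_ξ^1/2Π e_l for a two-variable recursive system; the values of A, D, Π and Σ_ξ are placeholders of ours, whereas in practice they come from the estimation steps described below.

import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
D = np.array([[0.0, 0.0],
              [0.6, 0.0]])                  # eps_{t,1} causes eps_{t,2}
Pi = np.eye(2)                              # variables already in recursive order
H = np.linalg.inv(np.eye(2) - D)
Sigma_xi_sqrt = np.diag(np.sqrt([1.0, 0.5]))

def irf(s, l):
    # response of Z_{t+s} to a one-standard-deviation structural shock to variable l
    Upsilon_s = np.linalg.matrix_power(A, s) @ Pi.T @ H
    e_l = np.zeros(2)
    e_l[l] = 1.0
    return Upsilon_s @ Sigma_xi_sqrt @ Pi @ e_l

for s in range(4):
    print(s, np.round(irf(s, l=0), 3))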
§.§.§ Identification Using External Instruments
The identification strategy based on the PC algorithm (Section <ref>),
is one additional method to be added to the arsenal of existing strategies
based on internal and external instruments, possibly using local projections
(Stock and Watson, 2018, Plagborg-Møller and Wolf, 2021). This
follows from the fact that the latent VAR in (<ref>)
is Gaussian. Hence, expectations and projections are just functions
of Θ in (<ref>). The latter is
one of the quantities of interest in this paper.
We note that the methodology based on external instruments can have
nontrivial implications for a recursive system, when projections and
conditional expectations coincide, as in the Gaussian case. Suppose
an augmented VAR so that an external instrument is included in the
VAR as first variable Z_t,1 to identify the effect of a shock
of Z_t,l on Z_t,k. Being an instrument, Z_t,1 satisfies
the usual instrumental variable exclusion assumption for a SVAR (Assumption
LP-IV in Stock and Watson, 2018, Assumption 4 in Plagborg-Møller
and Wolf, 2021). Adapting Assumption 4 in Plagborg-Møller and
Wolf (2021) to our notation and using the Markov assumption implied
by (<ref>), this means that Z_t,1 conditional
on { Z_t-s:s≥1} takes the form ε_t,1=α e_l'Π'ξ_t+e_1'Π'ξ_t
for some constant α (Plagborg-Møller and Wolf, 2021, Eq.17).
Note that e_l'Π'ξ_t is the structural shock of variables
ε_t,l. Then, from (<ref>)
we know that ξ_t=(I-D)Πε_t. Substituting
the latter in the former equation, we have that
ε_t,1=α e_l'Π'(I-D)Πε_t+e_1'Π'ξ_t.
Given that for Gaussian random variables zero correlation is equivalent
to independence, the above is a structural equation. In particular
it means that ε_t,1 is caused by all the variables
in ε_t for which the 1× K vector e_l'Π'(I-D)Π
has nonzero entries. The simplest case is when ε_t,l
is not caused by any other entry in ε_t. In graph language,
this means that ε_t,l is a source node and in structural
equation notation it means that ε_t,l=e_l'Π'ξ_t.
From (<ref>), this can only
be the case if e_l'Π'DΠ is a zero row vector.
The above shows that the standard representation (<ref>)
for the instrumental variable exclusion assumption for a SVAR has
non trivial implications in empirical work. In fact, given that (<ref>)
is a structural equation, α≠0 means that ε_t,1,
the instrument conditioning on the past, must be caused by ε_t,l
and possibly by other variables. This is contradictory to the empirical
interpretation of an instrument. In one of our empirical illustrations,
we consider the oil supply shock identification methodology discussed
in Känzig (2021). There the instrument is based on price changes
around OPEC announcements. The variable of interest for which we want
to measure the effect of a shock is real oil prices. When projections
and conditional expectations coincide, (<ref>)
essentially implies that OPEC announcements (ε_t,1)
are contemporaneously caused by real oil price (ε_t,l)
and possibly other variables. This is contrary to what is usually
put forward as a justification for the use of this instrument. Of
course, projections and conditional expectations may be unrelated,
and more importantly the system may not be recursive. Nevertheless,
we shall show that an approach based on structural equations (and
equivalently causal graphs) can help us understanding the underlying
assumptions.
Suppose that ε_t satisfies (<ref>).
The exclusion restriction using an instrument X_t,K+1=f_K+1(Z_t,K+1)
where Z_t,K+1 is standard normal can instead be formulated as
e_l'Π'ξ_t,l=ν_t,l+ε_t,K+1 where ε_t,K+1
is Z_t,K+1 conditioning on the past of (Z_t,1,Z_t,2,...,Z_t,K+1)
and ν_t,l is a structural shock independent of ε_t,K+1.
Then, Z_t,K+1 is a valid instrument if ε_t,K+1=ξ_t,K+1
is a structural shock. This means that ξ_t,l is a structural
shock when we omit Z_t,K+1. To see this, note that ε_t,K+1
satisfies the IV exclusion restriction for the impact of ε_t,l
on the other variables, and it is compatible with a recursive structural
equation system. Assuming that (<ref>) holds for the
augmented (K+1)×1 vector that also includes Z_t,K+1,
we can recover the joint distribution of (ε_t,1,ε_t,2,...,ε_t,K+1)
and (Z_t,1,Z_t,2,...,Z_t,K+1), and apply any of
the projection methods used in the literature. Our methodology allows
us to do this. Moreover, relying on a sample version of the PC algorithm,
we can also estimate whether this exclusion restriction holds for
the augmented dataset. We shall illustrate this in Section <ref>
with the dataset in Känzig (2021). In summary, our framework
not only puts forward an alternative identification approach, but
also allows us to use existing methodologies. Relying on causal graphs
and structural equations systems can allow us to precisely define
assumptions, and its visual aspect may help our intuition.
Next, we introduce algorithms that will be shown to produce consistent
estimators, under assumptions stated in Section <ref>.
§ ESTIMATION ALGORITHMS
For any positive integer p, [p]:={ 1,2,...,p}.
For any matrix A of dimensions p× q and sets 𝒜⊆[p]
and ℬ⊆[q], A_𝒜,ℬ
is the submatrix with rows in 𝒜 and columns in ℬ.
In A_𝒜,ℬ, when 𝒜=[p]
we write A_·,ℬ and similarly if ℬ=[q].
When 𝒜=[p]∖{ i} for
some i∈[p], we write A_-i,ℬ and similarly
for ℬ. When A is a vector, it is always assumed that
it is a column vector and we shall use the same notation, but with
one single subscript. This notation will be used throughout the paper
with no further mention.
The estimation methodology is based on a number of steps which extend
the methodology in Liu et al. (2012). First, we find an estimator
of the matrix Σ in (<ref>), which
is the Gaussian copula scaling matrix of the vector W_t=(X_t',X_t-1')'.
This is achieved using Algorithm <ref>. Once,
the estimator for Σ is available, we identify the set of zero
entries in the precision matrix, i.e., the inverse of Σ. This
can be achieved using Lasso, as described in Algorithm <ref>.
This algorithm follows the approach of Meinshausen and Bühlmann
(2006) to find the zeros in the inverse of (<ref>).
However, the algorithm also thresholds the resulting Lasso estimators
in order to achieve sign consistency. In this form, the algorithm
is equivalent to Gelato (Zhou et al., 2011).
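To fix ideas, the following sketch implements the rank-based step in one concrete way, namely through the Kendall's tau transform sin(πτ̂/2) of Liu et al. (2012); the lag-stacking and the function name are our own illustrative choices, and a projection onto the nearest positive definite matrix may be needed before inversion-based steps.

import numpy as np
from scipy.stats import kendalltau

def copula_scaling_matrix(X):
    # rank-based estimate of the Gaussian copula scaling matrix of
    # W_t = (X_t', X_{t-1}')'; no marginal distributions are estimated
    W = np.hstack([X[1:], X[:-1]])          # stack current and first lagged values
    p = W.shape[1]
    S = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(W[:, i], W[:, j])
            S[i, j] = S[j, i] = np.sin(0.5 * np.pi * tau)
    return S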
In Algorithm <ref>, (<ref>) is solved
by the x that satisfies the first order conditions in a Lasso minimization
problem. The constraint x_i=0 is needed to avoid running the
regression of the i^th variable on all the other covariates and
itself. We need the estimator to be in this form for later use. A
competing algorithm to find the zeros of the precision matrix is the
CLIME estimation algorithm with thresholding (Cai et al., 2011). The
procedure is described in Algorithm <ref>. The minimization
problem in Algorithm <ref> can be solved for one
column of Ω at the time, with Ω as defined there,
due to the use of the uniform norm. We shall show the validity of
both algorithms within the time series context of this paper.
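The nodewise Lasso can be run entirely in terms of the estimated scaling matrix, without forming residuals. The following coordinate-descent sketch is a simplified illustration of ours of this idea (the choice of λ, the thresholding step and convergence checks are omitted):

import numpy as np

def nodewise_lasso(S, i, lam, n_sweeps=200):
    # Lasso regression of variable i on all the others, written only in terms
    # of the scaling matrix S: minimise 0.5*b'S_{-i,-i}b - S_{-i,i}'b + lam*|b|_1
    idx = [k for k in range(S.shape[0]) if k != i]
    G = S[np.ix_(idx, idx)]                 # Gram matrix of the regressors
    c = S[idx, i]                           # cross products with the response
    b = np.zeros(len(idx))
    for _ in range(n_sweeps):
        for j in range(len(idx)):
            r = c[j] - G[j] @ b + G[j, j] * b[j]      # partial residual
            b[j] = np.sign(r) * max(abs(r) - lam, 0.0) / G[j, j]
    beta = np.zeros(S.shape[0])
    beta[idx] = b
    return beta                              # nonzeros flag the neighbours of i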
Algorithm <ref> allows us to
estimate the parameters in (<ref>). In particular,
it uses the information on the zeros of the estimator for the precision
matrix Θ to construct a sparse estimator (Le and Zhong, 2021).
Using Lemma <ref>, such sparse estimator
of the precision matrix is used to estimate the autoregressive matrix
A and the covariance matrix of the innovations ε_t
in (<ref>).
Finally, using Algorithm <ref>, we identify the CPDAG.
Algorithm <ref> makes reference to the PC algorithm.
We do not report the details in Algorithm <ref>, as the
number of steps is relatively large and can be found in Spirtes et
al. (2000) among many other places. The aim of the PC algorithm is
to start with a dense graph with undirected edges for all variables.
It then aims at removing edges to obtain the skeleton of the graph.
Finally, it uses a set of rules to direct all possible edges based
on deterministic rules. It is not guaranteed that all edges can be
directed, of course.
In order to delete edges, the PC algorithm uses the correlation coefficients
between two variables, conditional on subsets of other variables.
Note that the innovations in the latent model (<ref>)
are Gaussian so that zero correlation implies independence. As soon
as we find a set of conditioning variables such that the two variables
are conditionally uncorrelated, we remove an edge between these two
variables. Given that the conditional correlations are unknown, Kalisch
and Bühlmann (2007) suggest to replace these with sample versions
as in Algorithm <ref>. They define a parameter α,
as in Algorithm <ref>, and show that for α→0
at a certain speed we can obtain a consistent estimator of the CPDAG,
as if we knew the true conditional correlations. For this reason,
Algorithm <ref> only gives details on the sample estimator
leaving out the deterministic steps, to avoid distracting details.
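For completeness, the conditional-independence test used at each step can be written as a partial correlation computed from Σ̂_ε followed by the Fisher z cut-off of Kalisch and Bühlmann (2007); the sketch below is a simplified illustration of ours (the effective sample size n is treated as given):

import numpy as np
from scipy.stats import norm

def partial_corr(S, i, j, cond):
    # partial correlation of variables i and j given the set cond,
    # computed from the covariance (or correlation) matrix S
    idx = [i, j] + list(cond)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

def keep_edge(S, i, j, cond, n, alpha):
    # the edge i -- j survives if the Fisher z statistic of the partial
    # correlation exceeds the Gaussian quantile at level alpha
    r = np.clip(partial_corr(S, i, j, cond), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    return np.sqrt(n - len(cond) - 3) * abs(z) > norm.ppf(1 - alpha / 2)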
Identification of the SVAR requires that all edges are directed. Assuming
that Algorithm <ref> can direct all the edges, for each
i∈[K], we obtain estimators 𝒱̂(i)
for the set of parents of ε_t,i, using the notation
in Algorithm <ref>. According to Lemma <ref>,
to find the matrix D, we need to find the regression coefficients
of the innovation ε_t,i on ε_t,𝒱̂(i),
i∈[K]. Algorithm <ref> finds
such regression coefficients and collects them into a K× K
matrix Δ̂, i=1,2,...,K. In particular, the i^th
row of Δ̂ has entries 𝒱̂(i)
equal to the coefficients found regressing ε_t,i on
ε_t,𝒱̂(i) and zeros elsewhere.
By the fact that the graph is a DAG, there is a permutation matrix
Π̂ such that Π̂Δ̂Π̂^-1 is an
estimator for D and is a lower triangular matrix with zeros along
the diagonal. The regression coefficients are obtained relying on
Σ̂_ε:=Θ̂_11^-1. This is because
Θ̂_11 is a sparse estimator with good asymptotic properties.
Such properties are inherited by Σ̂_ε even
though Σ_ε is not sparse. The estimator Σ̂_ε
is not necessarily sparse. Moreover, regression coefficients are found
directly from Σ̂_ε with no need to estimate
the innovations.
The tuning parameters for Algorithms <ref> and <ref>
are chosen using cross-validation (Section <ref>
in the Electronic Supplement, for details).
In the Electronic Supplement, we also use simulations to investigate
the finite sample properties of the estimators in our algorithms (see
Section <ref> in the Electronic Supplement). The simulation analysis shows that our approach produces more reliable
results than methods that do not account for either sparsity or time
series dependence, i.e. setting λ=0 in Algorithms <ref>
and <ref> or assuming A=0 in (<ref>).
Even when the persistence of the time series is reduced, our methodology
produces the best results for estimation of the causal structure and
the VAR parameters (for details, see Tables <ref>-<ref>
in Section <ref> in the Electronic Supplement).
Although our approach is designed for a high dimensional setting,
it provides competitive results even in the low dimensional case.
§ ASYMPTOTIC ANALYSIS OF THE ALGORITHMS
The consistency of the algorithms relies on a set of conditions. Before
introducing our conditions, we introduce some additional notation.
§.§ Additional Notation
For any vector, the ℓ_p norm is denoted by |·|_p,
p∈[0,∞]. For any I× J dimensional matrix
A, |A|_p,q=(∑_j=1^J(∑_i=1^I|A_i,j|^p)^q/p)^1/q
is the elementwise norm. When q=∞ we define |A|_p,∞=max_j≤ J(∑_i=1^I|A_i,j|^p)^1/p.
When both p=q=∞ we simply write |A|_∞=max_i≤ I,j≤ J|A_i,j|,
and this should not cause confusion with the ℓ_∞ norm.
For p=0, |A|_0,∞=max_j≤ J∑_i=1^I1_{|A_i,j|>0}.
When p=q=0, this is just the total number of non-zero elements
in A. Finally, |·|_ op is used to define
the following operator norm: |A|_ op=max_x:x'x≤1|Ax|_2.
Then, |A|_ op is the largest singular value of
A. For ease of reference, we call this norm the operator's norm.
Let
𝒰(ω,s)={Ω∈ℝ^2K×2K:Ω≻0,|Ω|_1,∞≤ω,|Ω|_0,∞≤ s}
The symbol Ω≻0 is used to mean that Ω is a symmetric
strictly positive definite matrix. Then, 𝒰(ω,s)
is the set of symmetric strictly positive definite matrices whose
absolute sum of column entries is at most ω, and whose maximum
number of non-zero entries in each row is at most s.
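The norms above are straightforward to compute; the following minimal sketch (Python, with our own function names) makes the definitions concrete, including a check of membership in 𝒰(ω,s).

```python
import numpy as np

def norm_p_q(A, p, q):
    """Elementwise norm |A|_{p,q}: the l_q norm of the column-wise l_p norms."""
    col = np.sum(np.abs(A) ** p, axis=0) ** (1.0 / p)
    return np.sum(col ** q) ** (1.0 / q)

def norm_p_inf(A, p):
    """|A|_{p,inf}; for p = 0 this is the maximum number of nonzeros per column."""
    if p == 0:
        return int(np.max(np.sum(A != 0, axis=0)))
    return np.max(np.sum(np.abs(A) ** p, axis=0) ** (1.0 / p))

def operator_norm(A):
    """|A|_op, the largest singular value of A."""
    return np.linalg.norm(A, 2)

def in_U(Omega, omega, s):
    """Check membership in U(omega, s): symmetric, strictly positive definite,
    |Omega|_{1,inf} <= omega and |Omega|_{0,inf} <= s."""
    symmetric = np.allclose(Omega, Omega.T)
    pos_def = np.all(np.linalg.eigvalsh(Omega) > 0)
    return symmetric and pos_def and norm_p_inf(Omega, 1) <= omega and norm_p_inf(Omega, 0) <= s
```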
We shorten left and right hand side to l.h.s. and r.h.s., respectively.
The symbol ≲ is used when the l.h.s. is bounded above by a
constant times the r.h.s.; ≳ when it is bounded below by a constant
times the r.h.s.; and ≍ when the l.h.s. is bounded below
and above by constants times the r.h.s. Finally, to avoid notational
trivialities, we assume that K≥2.
§.§ Assumptions
(Model) There are monotone
functions f_k such that Z_t,k=f_k(X_t,k)
is a standard Gaussian random variable such that (<ref>)
holds. Moreover, X_t has continuous marginal distributions.
(Dimension) The state
space is a subset of ℝ^K, where K=O(n^η_K)
for some η_K<∞.
(Precision
matrix sparsity) The precision matrix Θ=Σ^-1 is an
element of 𝒰(ω,s) for s=O(n^η_s)
for some η_s<1/2.
(Identifiability) θ_min≳ n^-η_θ,
η_θ<1/2, where θ_min is the smallest absolute
value of the nonzero elements in Θ.
(Eigenvalues) The singular
values of A are in a compact interval inside (0,1)
and the eigenvalues of Σ_ε are in a compact interval
inside (0,∞), uniformly in K.
Strictly speaking, if K→∞ as n→∞,
we should index both the process X and its law by n and think
in terms of a sequence of processes. We refrain from doing so for notational
simplicity. No part of the proofs implicitly uses assumptions
that contradict this.
§.§ Remarks on the Assumptions
Assumption <ref>.
The modelling assumption includes a Gaussian linear vector autoregressive
model as special case. However, it is clearly more general than that.
Once we assume that the data satisfy a VAR model after a monotone
transformation, we do not need to impose any moment conditions on the
original data. Hence, the procedure is robust to fat tails. As discussed
in Section <ref>, we can view this assumption as a
Gaussian copula assumption for the cross-sectional and time series
dependence. Assumption <ref> can be viewed as a generalization
of the framework of Liu et al. (2012) in the time series direction
and has been recently exploited by Fan et al. (2022) to test for Granger
causality in high dimensional models.
The continuity of the marginal distribution of X_t is not needed.
As shown in Fan et al. (2017) we can recover the parameters of the
latent Gaussian process even for mixed data types (see also Section
<ref> in the Appendix). In this
case, we would modify Algorithm <ref> accordingly.
Our results apply to a VAR(p) for fixed and finite p, if we
redefine W_t:=(X_t',X_t-1')' to be W_t:=(X_t',X_t-1',...,X_t-p')';
here p is defined locally and not related to the same symbol in
other parts of the paper. Then, we just need to change the dimension
of the set of matrices in (<ref>) from 2K×2K
to K_p× K_p where K_p=(p+1)K. The conditions,
will then apply to these new quantities. Clearly, the dimension of
the matrix Θ is K_p× K_p while the dimension of
the submatrix Θ_11 in (<ref>)
is still K× K.
Assumption <ref>.
The precision matrix is supposed to have maximum absolute sum of each
column bounded by a constant ω. Our bounds make explicit the
dependence on ω so that we can have ω→∞
if needed. This constant is only used in Algorithms <ref>
and <ref>. The total number of nonzero elements
in each row is supposed to be bounded by a constant s. This is
allowed to grow to infinity with the sample size at a certain rate.
This assumption is different from Fan et al. (2022) who assumes that
the autoregressive matrix A in (<ref>) is sparse.
This is not the case here. By Lemma <ref>,
sparsity of Θ does not imply sparsity of either A or Σ_ε.
In order to see this, we recall that Θ_i,j=0 if and only
if W_t,i and W_t,j are independent, conditioning on all
the other remaining variables Lauritzen (1996, Proposition 5.2), where
W_t:=(X_t',X_t-1')'.
For random variables
Y_1,Y_2,Y_3, let Y_1⊥ Y_2|Y_3 mean that Y_1
and Y_2 are independent given Y_3. Now, suppose that for
all k∈[K] and l≠ k,
X_t,k⊥ X_t,l|{ X_t,k+1,X_t,k-1}∩{ X_t,i:i∈[K]}
X_t,k⊥ X_t,l|{ X_t-1,k+1,X_t-1,k,X_t-1,k-1}∩{ X_t-1,i:i∈[K]}
and
X_t-1,k⊥ X_t-1,l|{ X_t,k+1,X_t,k,X_t,k-1}∩{ X_t,i:i∈[K]}
and such that Θ_11=Θ_22. Intuitively, this means
that variables that are not close to each other in terms of index
are conditionally independent. The intersection with { X_t,i:i∈[K]}
is to avoid conditioning on X_t,K+1 for example, as we only have
K variables. Given our modelling assumption (<ref>),
and the previous remarks about Θ, this means that Θ_11
and Θ_12 are tridiagonal. Moreover, Θ_11=Θ_22
means that the partial correlation between Z_t,k and Z_t,k+i
given all other covariates (including Z_t-1) is the same as the
partial correlation between Z_t-1,k and Z_t-1,k+i given
all other covariates (including Z_t). From Lemma
<ref> and the fact that the inverse
of a tridiagonal matrix is not sparse, we deduce that both A and
Σ_ε are not sparse.
Clearly, we can obtain non-sparse A from sparse Θ under
more general setups than Example <ref>.
This is just chosen as a simple illustration for the sake of conciseness.
Assumption <ref>.
This assumption is only used to ensure that we can identify the zero
entries in Θ. It is necessary in order to ensure the validity
of post selection asymptotic, though the rate can be arbitrarily slow
when θ_min→0 (Leeb and Pötscher, 2005, p.29ff).
Assumption <ref>.
The eigenvalues condition means that the variables are linearly independent
in the population. This could be weakened, but at the cost of technical
complexity. This assumption also implies the following.
Under Assumption
<ref> the following statements hold uniformly
in K:
* The eigenvalues of Γ=Var(Z_t) are bounded away
from zero and infinity;
* There are constants σ_min,σ_max∈(0,∞)
such that the eigenvalues of Σ in (<ref>)
are in the interval [σ_min,σ_max];
* There is a ν>0 such that |Θ_i,i|≥ν^2;
* The partial correlations of ε_t,i and ε_t,j
conditioning on any other subset of remaining innovations is bounded
above by a constant σ̅<1.
§.§ Uniform Convergence of the Scaling Matrix Estimator
The uniform consistency of the covariance estimator from Algorithm
<ref> is well known (Liu et al., 2012). It is
still consistent for dependent data.
Under the Assumptions,
|Σ̂-Σ|_∞=O_P(√(ln K/n)).
Fan et al. (2022) show a similar result using Kendall's tau instead
of Spearman's rho with a different method of proof.
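A minimal sketch of a rank-based estimator of Σ in this spirit is given below (Python, with hypothetical names). It stacks each observation with its lag to form W_t, computes the matrix of sample Spearman's rho, and applies the map ρ↦2sin(πρ/6) used to recover the latent Gaussian correlations; the exact details of Algorithm <ref> are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

def scaling_matrix_estimator(X):
    """Rank-based estimator of the 2K x 2K scaling matrix Sigma of
    W_t = (X_t', X_{t-1}')' via Spearman's rho and rho -> 2 sin(pi rho / 6)."""
    W = np.hstack([X[1:], X[:-1]])          # pair each observation with its lag
    rho, _ = spearmanr(W)                   # (2K x 2K) matrix of sample Spearman's rho
    Sigma_hat = 2.0 * np.sin(np.pi * rho / 6.0)
    np.fill_diagonal(Sigma_hat, 1.0)        # diagonal entries are equal to one
    return Sigma_hat
```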
§.§ Estimation of the Undirected Graph
§.§.§ Consistency for Algorithm <ref>
The reader is referred to the Assumptions and Algorithm <ref>
for the notation. Let β^(i) be the population
regression coefficient including a zero in the i^th entry, i.e.
the solution to Σ x-Σ_·,i=0 in the entries j≠ i, subject to x_i=0.
Suppose that the
Assumptions hold. There is a finite constant c large enough such
that in Algorithm <ref>, choosing λ=λ_n=cω√(ln K/n),
with ω is as in Assumption <ref>
we have that max_i∈[K]|β̂^(i)-β^(i)|_1=O_P(ω s√(ln K/n)).
One could choose c→∞ slowly enough, in which case
the bound would be O_P(c×ω s√(ln K/n))
instead of O_P(ω s√(ln K/n)). The
proof of this result shows that we could have stated the result as a
finite sample one holding with high probability. However, such a statement would
still depend on an unknown constant. Hence, for simplicity, we have
chosen not to do so.
Appropriate thresholding, with a threshold constant greater than
the noise level but smaller than θ_min (the smallest absolute
value of the nonzero entries in Θ), leads to set identification.
In what follows sign(x) is the sign of the real
variable x with sign(0)=0.
Suppose that
the Assumptions hold. In Algorithm <ref>, set τ=τ_n=o(θ_min)
such that λ=λ_n=o(τ_n) with λ
as in Theorem <ref>. If ω s√(n^-1ln K)→0,
then,
ℙ( sign(β̂_j^(i))≠ sign(β_j^(i)) for at least one i∈[K],j∈[2K])→0.
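The thresholding step itself is elementary; a minimal sketch is the following (Python, hypothetical names). Entries below τ are set to zero, and the surviving support is then used to build the sparse estimator of Θ as described in the main text.

```python
import numpy as np

def hard_threshold(B, tau):
    """Set to zero all entries of B whose absolute value is below the threshold tau;
    with tau between the noise level and theta_min, the sign pattern is recovered."""
    B_thr = B.copy()
    B_thr[np.abs(B_thr) < tau] = 0.0
    return B_thr
```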
§.§.§ Consistency Results for Algorithm <ref>
The reader is referred to the Assumptions and Algorithm <ref>
for the notation.
Suppose that
the Assumptions hold. There is a finite constant c large enough
such that in Algorithm <ref>, λ=λ_n=cω√(ln K/n),
where ω is as in Assumption <ref>,
implies that |Ω̂-Θ|_∞=O_P(ω^2√(ln K/n)).
The same remark we made about c in Theorem <ref>
applies here. Also here, we could have stated the result as a finite
sample one with high probability.
Using the appropriate level of thresholding, Theorem <ref>
implies the following.
Suppose that
the Assumptions hold. In Algorithm <ref>, set τ=τ_n=o(θ_min)
and λ=λ_n=o(τ_n/ω) with λ
as in Theorem <ref>. If ω^2√(n^-1ln K)→0,
then,
ℙ( sign(Ω̂_i,j)≠ sign(Θ_i,j) for some i,j∈[2K])→0.
§.§ Estimation of the Process Parameters and Causal Graph
In what follows, we suppose that the conditions of either Theorem
<ref> or Theorem <ref>
hold, depending on which algorithm is used. For short we generically
refer to these as the Assumptions (λ,τ) as
they also involve restrictions on the choice of penalty λ
and threshold τ.
§.§.§ Consistency of Precision Matrix Estimation
The estimator for the precision matrix is elementwise uniformly consistent
under sparseness conditions.
Suppose
that the Assumptions (λ,τ) hold. Then, the
estimator Θ̂ from Algorithm <ref>
satisfies |Θ̂-Θ|_∞=O_P(√(ln K/n)).
While the quantity s=|Θ|_0,∞ does not enter
the bound, a constraint on its growth rate, as prescribed by Assumption
<ref>, is required for Theorem
<ref> to hold.
§.§.§ Consistency of the Estimators for the Autoregressive Matrix and Innovation
Covariance Matrix
Recall that by Lemma <ref>, using the
notation in (<ref>) and (<ref>),
A= -Θ_11^-1Θ_12 and Σ_ε=Θ_11^-1.
Hence, we need consistency of Θ_12 and the inverse of Θ_11,
which is the case under sparseness. Recall that s=|Θ|_0,∞
as in Assumption <ref>. We have
the following bounds in terms of the operator's norm.
Suppose
that the Assumptions (λ,τ) hold. Then, |Σ̂_ε-Σ_ε|_ op=O_P(s√(ln K/n))
and |Â-A|_ op=O_P(s√(ln K/n)).
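Given a sparse estimate Θ̂ of the 2K×2K precision matrix, the mapping of Lemma <ref> can be applied directly; a minimal sketch (Python, hypothetical names) is given below.

```python
import numpy as np

def var_parameters_from_precision(Theta_hat, K):
    """Recover the VAR parameters from the 2K x 2K precision matrix estimate:
    Sigma_eps = Theta_11^{-1} and A = -Theta_11^{-1} Theta_12."""
    Theta_11 = Theta_hat[:K, :K]
    Theta_12 = Theta_hat[:K, K:]
    Sigma_eps = np.linalg.inv(Theta_11)
    A = -Sigma_eps @ Theta_12
    return A, Sigma_eps
```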
§.§.§ PC Algorithm
Let Ĝ be the estimated PCDAG from Algorithm <ref>
and G the true PCDAG. The next result requires faithfulness of
the distribution of the data to the graph, as defined in Section <ref>.
In what follows, Φ(·) is the cumulative distribution
function of a standard normal random variable.
Suppose that the
Assumptions (λ,τ) hold and that the joint distribution
of the innovations ε_t in (<ref>) is
faithful to the DAG for all K. Run the PC algorithm as referenced
in Algorithm <ref> with α=α_n such that
α_n=2(1-Φ(n^1/2c_n/2)) for
c_n≍ n^-η_c where 2η_c+3η_s<1 with η_s
as in Assumption <ref>. Then, ℙ(Ĝ≠ G)≲ n^-p
for any constant p<∞.
Theorem <ref> says that the estimator for
the PCDAG converges to the true one at an arbitrarily fast polynomial
rate. This is worse than the exponential rate obtained by Kalisch
and Bühlmann (2007) for causal discovery using independent identically
distributed data.
§.§.§ Consistency of Structural Model Parameters
We show that D̂ from Algorithm <ref>
is consistent for D, with D as in Lemma <ref>.
When the PC algorithms in Algorithm <ref> produces edges
that are all directed, we interpret D to be the one corresponding
to the permutation matrix Π that is obtained by the least number
of row permutations of the identity. Then, D is unique.
In the following, we state the consistency of D̂ for D,
and the consistency of an estimator Ĥ for H, in (<ref>),
with convergence rates. We shall denote by κ the maximum number
of direct descendants among all parents. It is not difficult to show
that this is the same as the maximum number of nonzero elements among
the columns of D. Such number is bounded above by s, which corresponds
to the maximum number of adjacent variables across all the nodes.
Suppose that the Assumptions
(λ,τ) hold, that the joint distribution of
the innovations ε_t in (<ref>) is faithful
to the DAG for all K, and that all the estimated edges resulting
from Algorithm <ref> are directed. Then, using Algorithm
<ref>, |D̂-D|_ op=O_P(s√(κln K/n)),
where D is as in (<ref>) with Π obtained by the least
number of row permutations of the identity. Moreover, we also have
that Ĥ=(I-D̂)^-1 satisfies |Ĥ-H|_ op=O_P(s√(κln K/n)).
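A minimal sketch of this last step is given below (Python, hypothetical names). It orders the nodes topologically using the support of Δ̂, builds the corresponding permutation matrix, and returns D̂ and Ĥ=(I-D̂)^-1; the refinement of choosing Π with the least number of row permutations of the identity is not enforced here.

```python
import numpy as np

def structural_matrices(Delta_hat):
    """Given Delta-hat whose support encodes the (acyclic) parent sets, find a
    permutation Pi such that Pi Delta Pi' is strictly lower triangular and return
    D-hat = Pi Delta Pi' and H-hat = (I - D-hat)^{-1}."""
    K = Delta_hat.shape[0]
    remaining = list(range(K))
    order = []
    support = (Delta_hat != 0)
    while remaining:
        # a node with no parents among the remaining nodes can be placed next
        for i in remaining:
            if not any(support[i, j] for j in remaining if j != i):
                order.append(i)
                remaining.remove(i)
                break
        else:
            raise ValueError("the estimated graph is not acyclic")
    Pi = np.eye(K)[order]                        # permutation matrix
    D_hat = Pi @ Delta_hat @ Pi.T                # Pi^{-1} = Pi' for a permutation
    H_hat = np.linalg.inv(np.eye(K) - D_hat)
    return Pi, D_hat, H_hat
```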
§ EMPIRICAL ILLUSTRATIONS
To showcase the methodology presented in this paper we consider two
illustrations. The first considers a supply side oil price shock.
This problem has recently been considered by Känzig (2021). While
the baseline model used in Känzig (2021) only includes 6 variables,
this is still a high dimensional problem due to the fact that the
selected number of lags is 12. Our aim is to highlight the features
of our methodology and how it can be used to gain additional insights
on the role of an external instrument. This application should clarify
some of the language used in the paper and draw a clear parallel between
the more common language used in economics and causal DAG's. We hope
to convince the reader that the use of the DAG has much to offer,
once its role is understood.
The second application focuses on the causal relation between information
in the order book in high frequency trading. For this application,
the latent model is a VAR(1), however, the number K of variables
is large: K=60. Among other things, there we highlight how the
information from the impulse response functions produces a net effect
that is different from the causal information flow represented by
the structural equation model or equivalently the causal graph.
§.§ The Effect of Oil Price Shocks
Shocks in real oil price can be caused by either demand or supply
shocks. Känzig (2021) uses oil futures price changes around OPEC
announcements as an instrument to identify supply side shocks. We
use our methodology to show how we identify the latent structural
VAR model without an instrument for this specific dataset. We then
include the instrument as an additional variable to our model and
show that the resulting DAG suggests that this is a valid instrument,
though unnecessary for the purpose of identifying the structural parameters.
We stress that the goal of this application is to refer to a state
of the art approach for a well known problem and show what we can
achieve with our methodology, assuming that the system is recursive.
§.§.§ The Data and the Covariates
We consider the same dataset used in Känzig (2021). The data
consists of real oil price, U.S. CPI, U.S. industrial production,
world industrial production, world oil inventories and world oil production.
We also include a Crude Oil Shock variable constructed in Känzig
(2021), as additional variable. This variable can be used as either
internal or external instrument for identification. Here, it will
be used as an internal instrument to show that it is a valid instrument,
relying on the graphical method of the paper. The sample period is
from February 1975 to December 2017. The data are at monthly frequency.
Due to either the persistency or nonstationarity of the data, we first
difference all variables except for the oil supply news shock. The
covariates are listed in Table <ref>.
§.§.§ Estimation
We estimate the causal graph using our proposed methodology and the
six variables introduced in the previous section. We allow for lags
greater than one, by the minor modification discussed in Section <ref>.
We choose a lag length of 12 as in Känzig (2021). This is in
line with the choice by Akaike's information criterion as implemented
in Section <ref> of the Electronic Supplement.
We use both Lasso (Algorithm <ref>) and CLIME (Algorithm
<ref>) for the estimation of the sparse precision
matrix. For these algorithms, the penalization parameter λ
and the threshold parameter τ are selected using cross-validation
(see Section <ref> in the Electronic Supplement
for details). We then apply Algorithms <ref>,
<ref>, and <ref> to estimate the
Gaussian copula VAR parameters, recover the contemporaneous causal
structure and possibly identify the matrix of contemporaneous relations
D. The latter can then be used for estimation of the impulse response
functions.
§.§.§ Summary of Results
The results for Lasso and CLIME were very similar. In the interest
of space, we report and discuss only the results when Lasso (Algorithm
<ref>) is used as intermediate step, with no further
mention.
Using our methodology, we estimate a model with 12 lags, in line with
Känzig (2021). This means that the number of relevant parameters
to be estimated is 12K^2+K·(K+1)/2=453, where K=6
is the number of covariates in the model without instrument. Given
a sample size of n=503 we clearly are in a high dimensional setting.
We found that all the edges of the causal graph were directed. This
means that we are able to identify the permutation matrix Π and
the matrix D in Lemma <ref>. In consequence,
the SVAR parameters are identified without the need of an instrument,
under the assumption that the system is recursive. We report the DAG
in Figure <ref>.
To shed further light on the value of OilShock as instrument used
in Känzig (2021) and confirm that it satisfies the exclusion
restriction for an instrument, we estimate the model including the
latter as an additional variable. We now have K=7 variables with the
same number of lags. The result shows that OilShock is a source node,
it impacts real oil prices directly and is not connected to any of
the other variables. According to the discussion in Section <ref>
this means that it is a valid instrument.
The results also highlight a challenge. We find that a shock to UsCpi
leads to a contemporaneous effect on OilPrice. This appears inconsistent
with logic and economic theory. One explanation is that the underlying
assumption that the system is recursive, after proper permutation,
is not satisfied. However, the direction of this particular causal
relation is identified using the example at the end of Section <ref>,
which does not rely on recursivity. To see this, note that from the
data we have that OilShock and UsCpi are unconditionally independent,
but dependent when we condition on OilPrice. Then, this implies that
OilShock and UsCpi are unrelated common causes to OilPrice (Section
<ref>). Hence, the answer needs to be
found elsewhere. To check that the results are not statistical artifacts
specific to our methodology, we estimate the same model as in Känzig
(2021): a VAR for the levels of the observed variables with 12 lags.
We then check whether the residuals of OilShock and UsCpi are unconditionally
independent of each other and all other variables, but are dependent
when conditioning on OilPrice. We found that with 95% confidence,
this is the case. Hence, we rule out that this is a statistical artifact
specific to our methodology. The answer requires work beyond the scope
of this paper.
There are two takeaways from this empirical illustration. First, for
this specific data set, we are able to identify the latent SVAR parameters,
under the assumption of a recursive structure with no need for an
instrument. Note even assuming that the structure is recursive, the
set of possible solutions has cardinality that grows exponentially
with the number K of variables. Hence, this is a nontrivial exercise.
Second, by including the potential instrument as one of the variables,
we are able to confirm the validity of the instrument. Again the underlying
assumption is that the system is recursive. However, given the structure
of the graph, recursivity is not used to orient the edges in the subgraph
OilPrice, OilShock and UsCpi.
§.§ Causal Relations in the Limit Order Book
Large orders to buy or sell stocks in a financial market are usually
broken into smaller ones and executed over a fixed time frame, where
time can be measured in clock time or volume time (Donnelly, 2022
for a review). One important dilemma when executing orders is to decide
whether to execute crossing the spread or posting passive orders.
The latter does not guarantee immediate execution. However, it is
believed to reduce market impact. Empirical evidence shows that passive
orders that skew the limit order book also cause market impact. Hence,
it is of interest to understand the causal implication of an algorithm
that keeps posting limit orders resulting on an order book imbalance
(Table <ref> for definition) versus an algorithm
that continuously trade crossing the spread. Moreover, limit orders
may be posted at deeper levels in to book to gain queue priority.
Such actions tend to have a persistent impact on the book as many
orders need to be executed over a finite amount of time. We are interested
in understanding the implications of such persistent actions.
To this end, we apply our methodology to study the causal relations
between aggregated order book and trades variables in high frequency
electronic trading. Aggregation allows us to reduce noise and extract
information that is concealed at high frequency. We aggregate the
information in volume time (Section <ref>,
for more details). This is different from the analysis of order book
tick data which has been studied extensively in the literature (Cont
et al., 2014, Kercheval and Zhang, 2015, Sancetta, 2018, Mucciante
and Sancetta, 2022a, 2022b). It is well known that market participants
look at the order book to extract market information (MacKenzie, 2017).
We want to extract average causal relations. The underlying assumption
is that (<ref>) holds, i.e.
the causal structure can be represented in terms of a DAG.
We shall estimate a model with 5 stocks to investigate the direction
of information dissemination within each stock, via the order book
and trades, as well as across stocks. This requires the estimation of
a large dimensional model. Our results will also show how the methodology
of this paper allows us to disentangle contemporaneous causal effects
from time series effects.
§.§.§ The Data and the Covariates
We consider four stocks constituents of the S&P500 traded on the
NYSE: Amazon (AMZN), Cisco (CSCO), Disney (DIS) and Coca Cola (KO).
We also consider the ETF on the S&P500 (SPY). The stock tickers are
given inside the parenthesis. The sample period is from 01/March/2019
to 30/April/2019, from 9:30am until 4:30pm on every trading day. The
data were collected from the LOBSTER data provider (Huang and Polak,
2011)[https://lobsterdata.com/.]. This is a Level 3 dataset,
meaning that it contains all limit orders and cancellations for the
first 10 levels of the order book as well as trades, all in a sequential
order.
We construct a set of covariates related to the ones that are commonly
found in the studies of high frequency order book and trades. However,
we use aggregated data in volume time in intervals of 10% of daily
volume of SPY. Volume time means that instead of clock time, we use
cumulated trades as measure of time. We choose SPY as common time
for all the instruments as this is the asset that replicates the S&P500
index. Aggregated data allow us to estimate an average propensity
of each covariate to cause the other. For example, an order book where
limit orders to buy tend to be much higher than limit orders to sell
could drive the price up over time. The covariates are the book imbalance
up to ten levels, a geometric average return, and the trade imbalance,
often termed order flow imbalance. The covariates are listed in Table
<ref>, where their definition can be found.
In Table <ref>, Mid=( AskPrice_1+ BidPrice_1)/2
and LagMid is the Mid from the previous minute bucket,
where AskPrice_i is the ask price at level i and similarly
for BidPrice_i. The operator avg(·)
takes the data from the same one minute bucket and computes the average
value. In case of much market activity, the exchange will use the
same timestamp for a number of messages at different levels. In the
case of the orderbook, we use the last book snapshot of the many with
the same time stamp. We do not apply this logic to trades. These covariates
are directional ones. For this reason, we have omitted other interesting
ones, such as the spread. Moreover, the instruments we use are all
very liquid and the spread does not change much in this case.
For ease of reference, in what follows, we shall use the convention
of merging the ticker and covariate short name.
§.§.§ Estimation
The estimation is the same as in Section <ref>,
but constraining the analysis to one lag only. We shall also compute
the impulse response functions for a subset of the variables using
the methodology discussed in Section <ref>
in the Appendix.
§.§.§ Summary of Results
The results for Lasso and CLIME were very similar. We discuss only
the results when Lasso (Algorithm <ref>) is used
as intermediate step. Our results show that the causal structure of
the order book of each instrument exhibits a dense network structure.
Within each instrument, the first level of order book imbalance is
not contemporaneously caused by any other variable (this is called
a source node). In general we observe how the causal structure goes
from top levels of the book to deeper ones. Usually, the return is
affected directly by the deeper levels of the order book imbalance.
For all instruments, the return is a cause of the trade imbalance variable,
which in turn does not cause any other variable (this is called a
sink node). We also observe cross-causal effects across instruments.
We observe how in general the return of an instrument could be affected
by other instrument returns, e.g., AMZN return impacts CSCO and the
SPY return. In particular, the SPY return is affected by the other
returns. We also observe that the trade imbalance of an instrument
may directly affect the top levels of the book of other instruments
impacting so on all the order book structure, e.g., the AMZN trade
imbalance directly affects the first level of CSCO and SPY book imbalance
as well as the respective trade imbalance together with trade imbalance
of DIS and the eighth level of the SPY book imbalance.
The details can be found in Figure <ref> that
shows the DAG of contemporaneous causal relations obtained from our
estimation procedure.
[Figure: Contemporaneous causal graph for the aggregated order book information.
The graph is estimated using the methodology of the paper with Lasso.
The penalty parameters λ and τ are chosen by cross-validation.
The results are robust to parameters chosen locally around the cross-validation
ones.]
We also show how the contemporaneous impulse response function Π'HΠ
may fail to show the direct contemporaneous causal relations defined
via Π'DΠ in (<ref>).
Consider the subgraph composed by CSCOBookImb_1, CSCOBookImb_2,
CSCORet and SPYRet as shown in Figure <ref>.
The related impulse response functions are plotted in Figure <ref>.[Bootstrap confidence intervals were very tight due to the large sample
size, so they are not plotted.] By looking at the impulse response functions, we may conclude that
CSCOBookImb_1 and CSCOBookImb_2 are directly
affecting CSCORet and SPYRet. However, this effect
is mediated as shown in Figure <ref>. There,
we observe that a shock on CSCOBookImb_1 will first impact
the CSCOBookImb_2 and CSCORet and then it propagates
to the SPYRet. Only the causal graph or equivalently the
structural equations system allows us to understand the information
flow in the order book. The impulse response functions only represent
the contemporaneous net effect of a shock.
§ CONCLUSION
This paper has introduced a novel approach for the estimation of causal
relations in time series. It essentially uses a Gaussian copula VAR
model. Such causal relations differ from Granger causality. Our methodology
allows us to identify causal relations in high dimensional models.
Using a sparsity condition we are able to consistently estimate the
model parameters. Our sparsity condition does not impose sparsity
of the autoregressive matrix and of the covariance matrix of the innovations
implied by the Gaussian copula VAR model. Our sparsity conditions
can be viewed as weak assumptions on conditional independence. We
are then able to identify the related directed acyclic graph of causal
relations, using observational data, as if we knew the true distribution
of the data.
Asymptotic results and finite sample investigation confirm the viability
of our methodology and its practical usefulness for high dimensional
problems. A finite sample analysis, carried out using simulation (Section
<ref> in the Electronic Supplement), confirms
the asymptotic results of the paper. Moreover, the simulations show
that not accounting for time series dependence leads to wrong causal
inference. Failing to exploit sparsity leads to suboptimal results,
even in low dimensions.
We also relied on two empirical applications to highlight the methodology
of the paper. We considered the effect of oil price shocks to the
economy as studied in Känzig (2021). We showed how our methodology
can be used to verify whether an instrument is needed and whether
the instrument is a valid one. Then, we applied our methodology to
the analysis of the conditional contemporaneous causal relations of
order book data aggregated in volume time. To the best of our knowledge
this has not been done before and has important implications for understanding
the aetiology of electronic trading. The applications also showed
how causal inference provides the path followed by a shock via a system
of structural equations that has a graphical representation. On the
other hand the contemporaneous impulse response functions show the
net effect with no information on the actual causal path.
There are a number of areas that have been overlooked and require
further research in the future. For example, the methodology assumes
that the system of structural innovations of the latent VAR process
is recursive. When this assumption fails, strategies for partial identification
within our framework need to be devised. However, we showed that methods
based on instruments can still be used in our setup. Moreover, the
literature has put forward the possibility of models that exhibit
some form of time variation. This time variation is then exploited
for identification via heteroskedasticity. Our framework does not
cover this, yet. This extension requires careful study, as it has
nontrivial implications for the meaning of causality, as used in this
paper. For example, time variation may result from omitted variables/causes.
In this case, a nonlinear framework, such as ours, can be a suitable
starting point to address the problem. To conclude, our approach provides
an opportunity for much new research building on the existing contributions
in the literature.
§ REFERENCES
Acid, S. and L.M. de Campos (2003) Searching for Bayesian Network Structures in the Space of Restricted Acyclic Partially Directed Graphs. Journal of Artificial Intelligence Research 18, 445–490.
Bernanke, B. (1986) Alternative Explanations of the Money-Income Correlation. In Carnegie-Rochester Conference Series on Public Policy 25, 49-99. North-Holland.
Bernanke, B.S., J. Boivin and P. Eliasz (2005) Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach. The Quarterly Journal of Economics 120, 387-422.
Blanchard, O. and D. Quah (1989) The Dynamic Effects of Aggregate Demand and Supply Disturbances. American Economic Review 79, 655-673.
Bühlmann, P., J. Peters and J. Ernest (2014) CAM: Causal Additive Models, High-Dimensional Order Search and Penalized Regression. The Annals of Statistics 42, 2526-2556.
Cai, T., W. Liu and X. Luo (2011) A Constrained ℓ_1 Minimization Approach to Sparse Precision Matrix Estimation. Journal of the American Statistical Association 106, 594-607.
Chari, V., P.J. Kehoe and E.R. McGrattan (2008) Are Structural VARs with Long-Run Restrictions Useful in Developing Business Cycle Theory? Journal of Monetary Economics 55, 1337-1352.
Christiano, L.J., M. Eichenbaum and C.L. Evans (1999) Monetary Policy Shocks: What Have We Learned and to What End? Handbook of Macroeconomics 1, 65–148.
Clarke, P.K. (1973) A Subordinated Stochastic Process Model with Finite Variance for Speculative Prices. Econometrica 41, 135-155.
Comon, P. (1994) Independent Component Analysis, a New Concept? Signal Processing 36, 287–314.
Cont, R., A. Kukanov and S. Stoikov (2014) The Price Impact of Order Book Events. Journal of Financial Econometrics 12, 47-88.
Darsow, W.F., B. Nguyen and E.T. Olsen (1992) Copulas and Markov Processes. Illinois Journal of Mathematics 36, 600-642.
Demiralp, S. and K.D. Hoover (2003) Searching for the Causal Structure of a Vector Autoregression. Oxford Bulletin of Economics and Statistics 65, 745-767.
Donnelly, R. (2022) Optimal Execution: A Review. Applied Mathematical Finance 29, 181-212.
Doukhan, P. (1995) Mixing. New York: Springer.
Fan, J., H. Liu, Y. Ning and H. Zou (2017) High Dimensional Semiparametric Latent Graphical Model for Mixed Data. Journal of the Royal Statistical Society B 79, 405-421.
Fan, Y., F. Han and H. Park (2022) Estimation and Inference in a High-dimensional Semiparametric Gaussian Copula Vector Autoregressive Model. Preprint.
Faust, J. and E.M. Leeper (1997) When Do Long-Run Identifying Restrictions Give Reliable Results? Journal of Business & Economic Statistics 15, 345-353.
Forni, M., D. Giannone, M. Lippi and L. Reichlin (2009) Opening the Black Box: Structural Factor Models with Large Cross-Sections. Econometric Theory 25, 1319-1347.
Forni, M., M. Hallin, M. Lippi and L. Reichlin (2000) The Generalized Dynamic-Factor Model: Identification and Estimation. Review of Economics and Statistics 82, 540-554.
Gouriéroux, C., A. Monfort and J.-P. Renne (2017) Statistical Inference for Independent Component Analysis: Application to Structural VAR Models. Journal of Econometrics 196, 111-126.
Han, F. and W.B. Wu (2019) Probability Inequalities for High Dimensional Time Series Under a Triangular Array Framework. https://arxiv.org/abs/1907.06577v1.
Hanson, M.S. (2004) The “Price Puzzle” Reconsidered. Journal of Monetary Economics 51, 1385–1413.
Harris, N. and M. Drton (2013) PC Algorithm for Nonparanormal Graphical Models. Journal of Machine Learning Research 14, 3365-3383.
Hyvärinen, A., J. Karhunen and E. Oja (2001) Independent Component Analysis. Wiley, New York.
Hyvärinen, A. and E. Oja (2000) Independent Component Analysis: Algorithms and Applications. Neural Networks 13, 411–430.
Huang, R. and T. Polak (2011) LOBSTER: The Limit Order Book Reconstructor. School of Business and Economics, Humboldt Universität zu Berlin, Technical Report.
Joe, H. (1997) Multivariate Models and Dependence Models. London: Chapman & Hall.
Kalisch, M. and P. Bühlmann (2007) Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm. Journal of Machine Learning Research 8, 613-636.
Känzig, D. (2021) The Macroeconomic Effects of Oil Supply News: Evidence from OPEC Announcements. American Economic Review 111, 1092-1125.
Kercheval, A.N. and Y. Zhang (2015) Modelling High-Frequency Limit Order Book Dynamics with Support Vector Machines. Quantitative Finance 15, 1-15.
Kilian, L. and H. Lütkepohl (2017) Structural Vector Autoregressive Analysis. Cambridge University Press.
Koop, G., M.H. Pesaran and S.M. Potter (1996) Impulse Response Analysis in Non-Linear Multivariate Models. Journal of Econometrics 74, 119–147.
Lanne, M., M. Meitz and P. Saikkonen (2017) Identification and Estimation of Non-Gaussian Structural Vector Autoregressions. Journal of Econometrics 196, 288-304.
Lauritzen, S.L. (1996) Graphical Models. Oxford: Oxford University Press.
Leeb, H. and B.M. Pötscher (2005) Model Selection and Inference: Facts and Fiction. Econometric Theory 21, 21-59.
Liu, H., F. Han, M. Yuan, J. Lafferty and L. Wasserman (2012) High Dimensional Semiparametric Gaussian Copula Graphical Models. The Annals of Statistics 40, 2293-2326.
Liu, H., J. Lafferty and L. Wasserman (2009) The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs. Journal of Machine Learning Research 10, 2295-2328.
Lütkepohl, H. and A. Netšunajev (2017) Structural Vector Autoregressions with Heteroskedasticity: A Review of Different Volatility Models. Econometrics and Statistics 1, 2-18.
MacKenzie, D. (2017) A Material Political Economy: Automated Trading Desk and Price Prediction in High-Frequency Trading. Social Studies of Science 47, 172-194.
Mandelbrot, B. (1963) The Variation of Certain Speculative Prices. Journal of Business 36, 394-419.
Meinshausen, N. and P. Bühlmann (2006) High-Dimensional Graphs and Variable Selection with the Lasso. The Annals of Statistics 34, 1436-1462.
Mertens, K. and M.O. Ravn (2013) The Dynamic Effects of Personal and Corporate Income Tax Changes in the United States. American Economic Review 103, 1212-47.
Plagborg-Møller, M. and C.K. Wolf (2021) Local Projections and VARs Estimate the Same Impulse Responses. Econometrica 89, 955-980.
Moneta, A. (2008) Graphical Causal Models and VARs: An Empirical Assessment of the Real Business Cycles Hypothesis. Empirical Economics 35, 275-300.
Moneta, A., D. Entner, P.O. Hoyer and A. Coad (2013) Causal Inference by Independent Component Analysis: Theory and Applications. Oxford Bulletin of Economics and Statistics 75, 705-730.
Mucciante, L. and A. Sancetta (2022a) Estimation of a High Dimensional Counting Process Without Penalty for High Frequency Events. Econometric Theory: https://doi.org/10.1017/S0266466622000238.
Mucciante, L. and A. Sancetta (2022b) Estimation of an Order Book Dependent Hawkes Process for Large Datasets. Preprint.
Pearl, J. (2000) Causality: Models, Reasoning, and Inference. Cambridge, UK: Cambridge University Press.
Peters, J., J.M. Mooij, D. Janzing and B. Schölkopf (2014) Causal Discovery with Continuous Additive Noise Models. Journal of Machine Learning Research 15, 2009–2053.
Rigobon, R. (2003) Identification through Heteroskedasticity. The Review of Economics and Statistics 85, 777-792.
Sancetta, A. (2018) Estimation for the Prediction of Point Processes with Many Covariates. Econometric Theory 34, 598-627.
Sentana, E. and G. Fiorentini (2001) Identification, Estimation and Testing of Conditionally Heteroskedastic Factor Models. Journal of Econometrics 102, 143-164.
Shimizu, S., P.O. Hoyer, A. Hyvärinen and A. Kerminen (2006) A Linear Non-Gaussian Acyclic Model for Causal Discovery. Journal of Machine Learning Research 7, 2003–2030.
Sims, C.A. (1980) Macroeconomics and Reality. Econometrica 48, 1-48.
Sims, C.A. (1992) Interpreting the Macroeconomic Time Series Facts: The Effects of Monetary Policy. European Economic Review 36, 975–1000.
Spirtes, P., C. Glymour and R. Scheines (2000) Causation, Prediction, and Search. Boston: The MIT Press.
Stock, J.H. and M.W. Watson (2018) Identification and Estimation of Dynamic Causal Effects in Macroeconomics Using External Instruments. The Economic Journal 128, 917-948.
Swanson, N.R. and C.W. Granger (1997) Impulse Response Functions Based on a Causal Approach to Residual Orthogonalization in Vector Autoregressions. Journal of the American Statistical Association 92, 357-367.
Tsamardinos, I., L.E. Brown and C.F. Aliferis (2006) The Max-Min Hill-Climbing Bayesian Network Structure Learning Algorithm. Machine Learning 65, 31–78.
Uhlig, H. (2005) What are the Effects of Monetary Policy on Output? Result from an Agnostic Identification Procedure. Journal of Monetary Economics 52, 381-419.
Zhou, S., P. Rütimann, M. Xu and P. Bühlmann (2011) High-dimensional Covariance Estimation Based On Gaussian Graphical Models. Journal of Machine Learning Research 12, 2975-3026.
§ APPENDIX
§ REMARKS ON THE GAUSSIAN TRANSFORMATION
We provide some remarks on the model in order to clarify its applicability.
For simplicity of exposition, suppose that the number of covariates
K=2 and that there is no time series dependence (i.e. A is a
zero matrix). For any univariate random variable X_t,k, there
is always a monotonic increasing function f_k such that f_k(X_t,k)
is standard normal. For example, let F_k(x)=ℙ(X_t,k≤ x)
and F_k(x-)=ℙ(X_t,k<x), x∈ℝ.
By stationarity, the probability is independent of t≥1. Define
F̃_k(x,v)=(1-v)F_k(x-)+vF_k(x).
If the variables are continuous with density with no atoms, F̃_k(x,v)=F_k(x).
The purpose of this section is to consider the case where F_k
is not necessarily continuous and show that our methodology still
applies. Once we state the following result, it will become clear
how to discuss the case where the variables are time dependent.
Let V_t,1 and
V_t,2 be uniform random variables in [0,1] independent
of X_t,1 and X_t,2. The following hold.
* Ũ_t,k:=F̃_k(X_t,k,V_t,k) is a
uniform random variable in [0,1] and X_t,k=F_k^-1(Ũ_t,k)
almost surely, k=1,2.
* Let Φ^-1:[0,1]→ℝ be the quantile
function of the standard normal distribution. Then, Z̃_t,k:=Φ^-1(Ũ_t,k)
is a standard normal random variable, k=1,2.
* Define π_V:=𝔼V_t,1V_t,2. Then, ρ̃(π_V):=12Cov(F̃_1(X_t,1,V_t,1),F̃_2(X_t,2,V_t,2))
is a function of π_V. If V_t,1 and V_t,2 are independent,
π_V=1/4 and we have that
ρ̃(1/4)= 12𝔼[F_1(X_t,1-)+F_1(X_t,1)/2][F_2(X_t,2-)+F_2(X_t,2)/2]-3.
* Let U_t,1 and U_t,2 be uniform random variables in [0,1]
with Gaussian copula, and Z_t,k=Φ^-1(U_t,k),
k=1,2. Let ρ:=12Cov(U_t,1,U_t,2), then, r_Z:=Cov(Z_t,1,Z_t,2)=2sin(π/6ρ).
* Let Z_t,1 and Z_t,2 be standard Gaussian with correlation
coefficient r_Z. Let U_t,k=Φ(Z_t,k) and
X_t,k=F_k^-1(U_t,k), k=1,2, and
h(r)=𝔼^X_t,1𝔼^X_t,2Φ(Φ^-1(1-F_1(X_t,1)),Φ^-1(1-F_2(X_t,2));r),
where 𝔼^X_t,k is expectation w.r.t. the marginal law
of X_t,k, k=1,2. Then, the function h(r) is
strictly increasing w.r.t. r∈[-1,1]. Moreover 𝔼F_1(X_t,1)F_2(X_t,2)=h(r_Z),
where Φ(·,·;r_Z) is the bivariate distribution
of two standard normal random variables with correlation equal to
r_Z.
* Suppose that F̂_k is an estimator for F_k satisfying
sup_x∈ℝ|F̂_k(x)-F_k(x)|→0
in probability, k=1,2 and that the data is ergodic. Then
1/n∑_t=1^nF̂_1(X_t,1)F̂_2(X_t,2)→𝔼F_1(X_t,1)F_2(X_t,2)
in probability.
Lemma <ref> shows how the transformation
in (<ref>) can be used to construct uniform
random variables in [0,1] (Point 1). Once variables
are uniform, we can obtain standard Gaussian random variables (Point
2). Points 1 and 2 also mean that we can choose f_k in the definition
of (<ref>) such that f_k^-1(Z_t,k):=F_k^-1(Φ(Z_t,k)).
Spearman's rho is a commonly used measure of dependence which is invariant
under strictly monotone transformation. However, the population Spearman's
rho for the transformed variables depends on the dependence between
V_t,1 and V_t,2, i.e. π_V. Hence, it is not unique.
When π_V=1/4, the transformation produces a Spearman's rho
through independent linear interpolations between the discontinuity
points of the distribution functions of the two variables (Point 3).
On the other hand, if the variables are continuous, the transformation
produces uniform random variables with a dependence structure
that maps into the dependence structure of the latent process via
a closed form expression (Point 4). However, discontinuities do not
preclude us from identification of the correlation coefficient r_Z
of the latent Gaussian variables (Point 5). All we need to do is
to replace the map ρ↦2sin(π/6ρ)
with 𝔼F_1(X_t,1)F_2(X_t,2)↦ h^-1(𝔼F_1(X_t,1)F_2(X_t,2)).
Mutatis mutandis, this observation has been made in Fan et al. (2017)
and can be used for identification of the distributional parameters
of the latent process when F_k is not continuous.
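For illustration, the transformation to and from the latent Gaussian scale can be sketched as follows (Python, hypothetical names), using the empirical distribution function rescaled to stay inside (0,1); the truncation of Liu et al. (2009) is omitted for brevity.

```python
import numpy as np
from scipy.stats import norm

def to_latent_gaussian(x):
    """Map a sample of X_{.,k} to (approximately) standard Gaussian scores using
    the rescaled empirical distribution function and the normal quantile."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n (no ties assumed)
    F_hat = ranks / (n + 1.0)                      # rescaled to stay inside (0, 1)
    return norm.ppf(F_hat)

def from_latent_gaussian(z, x_sample):
    """Approximate f_k^{-1}(z) = F_k^{-1}(Phi(z)) using the empirical quantile
    function of an observed sample x_sample."""
    return np.quantile(x_sample, norm.cdf(z))
```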
Suppose that X_t,1 and X_t,2 are binary
random variables with values in { 0,1} and such that
(X_t,k=1)=p_k, k=1,2. Suppose that their joint
dependence is captured by a Gaussian copula. Then from Lemma <ref>,
X_t,1=1_{ U_t,1≥ p_1} and X_t,2=1_{ U_t,2≥ p_2}
where (U_t,1,U_t,2) are uniform random variables
in [0,1] with Gaussian copula with scaling matrix Σ
such that the (1,2) entry is Σ_1,2=r_Z=2sin(π/6ρ)
where ρ=12Cov(U_t,1,U_t,2). Moreover, from Lemma
<ref> (Point 5), we have that,
𝔼F_1(X_t,1)F_2(X_t,2)= ∑_x_1∈{ 0,1}∑_x_2∈{ 0,1}Φ(Φ^-1(ℙ(X_t,1≥ x_1)),Φ^-1(ℙ(X_t,2≥ x_2));r_Z)
×ℙ(X_t,1=x_1)ℙ(X_t,2=x_2).
By strict monotonicity w.r.t. r_Z, if we know 𝔼F_1(X_t,1)F_2(X_t,2)
and p_1, p_2, we can uniquely identify r_Z.
Knowledge of r_Z essentially hinges on knowledge of 𝔼F_1(X_t,1)F_2(X_t,2).
A uniformly consistent estimator of the distribution function of the
data assures that such quantity is consistently estimated (Point 6).
A natural estimator F̂_k for F_k is the empirical distribution
function based on a sample of size n. In this case, the uniform
convergence is exponentially fast (Lemma <ref>
in the Electronic Supplement). We also note that with an additive
error O(1/n),
12/n∑_t=1^n(F̂_1(X_t,1)F̂_2(X_t,2)-1/4)
is equal to the sample rank correlation coefficient (sample Spearman's
rho) because nF̂_k(X_t,k)=∑_s=1^n1_{ X_s,k≤ X_t,k}
is the rank of variable X_t,k, k=1,2. This also means that
if F_k is discontinuous, we can use h^-1((ρ̂+3)/12)
as an estimator of r_Z, where ρ̂ is the sample Spearman's
rho. See Fan et al. (2017) for consistency when h needs to be estimated.
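A minimal numerical sketch of this inversion for the binary example is given below (Python, hypothetical names); probabilities equal to zero or one are clipped so that the bivariate normal distribution function can be evaluated at finite arguments.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def h_binary(r, p1, p2, eps=1e-12):
    """h(r) for two binary variables with P(X_k = 1) = p_k and a Gaussian copula:
    the expectation E[F_1(X_1) F_2(X_2)] as a function of the latent correlation r."""
    total = 0.0
    for x1, q1 in [(0, 1 - p1), (1, p1)]:
        for x2, q2 in [(0, 1 - p2), (1, p2)]:
            # P(X_k >= x_k) equals 1 for x_k = 0 and p_k for x_k = 1; clip to (0, 1)
            s1 = np.clip(1.0 if x1 == 0 else p1, eps, 1 - eps)
            s2 = np.clip(1.0 if x2 == 0 else p2, eps, 1 - eps)
            a, b = norm.ppf(s1), norm.ppf(s2)
            total += multivariate_normal(cov=[[1, r], [r, 1]]).cdf([a, b]) * q1 * q2
    return total

def latent_correlation(E_F1F2, p1, p2):
    """Numerically invert the increasing map h to recover r_Z from an estimate
    of E[F_1(X_1) F_2(X_2)]; the bracket excludes the degenerate cases r = +-1."""
    return brentq(lambda r: h_binary(r, p1, p2) - E_F1F2, -0.999, 0.999)
```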
§ IMPULSE RESPONSE FUNCTIONS
Given that our model is Markovian, we define
𝔼[X_t+s,k|X_t-1=x,ξ_t,l=δ]-𝔼[X_t+s,k|X_t-1=x,ξ_t,l=0]
to be the impulse response of X_t+s to a shock in ξ_t,l
equal to δ and conditioning on a fixed value of X_t-1=x∈ℝ^K.
Recall that ξ_t,l is not necessarily the shock corresponding
to Z_t,l. The latter is given by the l^th entry in Πξ_t.
Integrating out x w.r.t. the marginal distribution of X_t
(<ref>) gives an unconditional impulse
response function. We introduce some notation to simplify the statement
of the details in what follows. For any matrix B, let [B]_k,l,
[B]_k,·, [B]_·,l be the k,l
entry, the k^th row and l^th column respectively. If B
is a column vector, write [B]_k for its k^th
entry.
Under the conditions
of Lemma <ref>, for any scalar δ,
in (<ref>) we have that
𝔼[X_t+s,k|X_t-1=x,ξ_t,l=δ]=𝔼f_k^-1([A^s+1z+∑_r=0^s-1Υ_rξ_t+s-r+Υ_sξ_t(δ,l)]_k)
where z∈ℝ^K has l^th entry z_l=f_l(x_l),
l=1,2,...,K and ξ_t(a,l) equals ξ_t except
for the l^th entry which is fixed to a value equal to a∈ℝ.
Let f_k'(x)=df_k(x)/dx. Then, for δ→0,
(<ref>) equals
𝔼[∂ X_t+s,k/∂ξ_t,l|X_t-1=x,ξ_t,l=δ]=𝔼[f_k'([A^s+1z+∑_r=0^s-1Υ_rξ_t+s-r+Υ_sξ_t(0,l)]_k)]^-1[Υ_s]_·,lδ
Despite the possibly involved notation, the conclusions of Lemma <ref>
are simple. To find the impulse response, we need to find the inverse
of f_k. This function is unknown. Our methodology to estimate
the parameters of the latent SVAR does not require explicit knowledge
of f_k. However, if we want to compute (<ref>)
such knowledge is needed. An estimator can be based on the truncated
inverse of the empirical distribution function (Liu et al., 2009).
Given that the latent model is Gaussian with i.i.d. innovations, the
expectation can be simply computed by Monte Carlo integration (Section
<ref> for more details). According
to Lemma <ref>, using the notation therein,
we have that f_k^-1(·):=F_k^-1(Φ(·)).
Finally, Lemma <ref> says that if we are interested
in the infinitesimal effect of a shock, we can linearize (<ref>).
In this case, up to a proportionality constant, (<ref>)
is equal to [Υ_s]_·,lδ. Hence, if we
are only interested in the shape of the impulse response, knowledge
of Υ_s is sufficient.
§.§ Monte Carlo Integration
From Lemma <ref>, ξ_t=H^-1Πε_t.
Define Σ_ξ:=𝔼ξ_tξ_t' so that Σ_ξ=H^-1ΠΣ_ε(H^-1Π)'.
For each of the variables ξ_t+s-r simulate m i.i.d. Gaussian
random vectors with covariance matrix Σ_ξ. Use a superscript
to denote these simulated data, i.e. {ξ_t+s-r^(v):v=1,2,...,m}.
The expectation in (<ref>)
is approximated by
1/m∑_v=1^mf_k^-1([A^s+1z+∑_r=0^s-1Υ_rξ_t+s-r^(v)+Υ_sξ_t^(v)(δ,l)]_k).
To compute an unconditional impulse response function, we need to
integrate out z. To do so, we replace the above with
1/m∑_v=1^mf_k^-1([A^s+1Z^(v)+∑_r=0^s-1Υ_rξ_t+s-r^(v)+Υ_sξ_t^(v)(δ,l)]_k)
where the random vectors Z^(v) are Gaussian mean zero
with covariance matrix Γ as in (<ref>).
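A minimal sketch of this Monte Carlo scheme is given below (Python, hypothetical names). It assumes the moving average coefficients Υ_r=A^rΠ'H implied by inverting the latent VAR, and that f_inv is a vectorized estimate of f_k^{-1}, for instance built from the empirical quantile function; the same draws are used for the shocked and unshocked paths to reduce the Monte Carlo error.

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.linalg import solve_discrete_lyapunov

def impulse_response_mc(s, k, l, delta, A, Pi, H, Sigma_eps, f_inv, m=10000, seed=0):
    """Monte Carlo approximation of the unconditional impulse response of X_{t+s,k}
    to a shock of size delta in xi_{t,l}."""
    rng = np.random.default_rng(seed)
    K = A.shape[0]
    G = np.linalg.solve(H, Pi)                     # H^{-1} Pi
    Sigma_xi = G @ Sigma_eps @ G.T                 # Var(xi_t)
    Gamma = solve_discrete_lyapunov(A, Sigma_eps)  # stationary Var(Z_t)
    Upsilon = [matrix_power(A, r) @ Pi.T @ H for r in range(s + 1)]
    Z = rng.multivariate_normal(np.zeros(K), Gamma, size=m)          # integrate out z
    xi = rng.multivariate_normal(np.zeros(K), Sigma_xi, size=(m, s + 1))
    out = np.zeros(2)
    for idx, shock in enumerate([delta, 0.0]):     # same draws, shocked vs unshocked
        xi_t = xi[:, s, :].copy()
        xi_t[:, l] = shock                         # fix the l-th entry of xi_t
        latent = Z @ matrix_power(A, s + 1).T + xi_t @ Upsilon[s].T
        for r in range(s):
            latent += xi[:, r, :] @ Upsilon[r].T
        out[idx] = np.mean(f_inv(latent[:, k]))    # back to the observed scale
    return out[0] - out[1]
```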
§ SUPPLEMENTARY MATERIAL TO “CONSISTENT CAUSAL INFERENCE FOR HIGH
DIMENSIONAL TIME SERIES” BY F. CORDONI AND A. SANCETTA
§ PROOFS
Throughout, we use c_0,c_1,c_2,... to denote constants.
We also recall a property of symmetric strictly positive definite
partitioned matrices. Let Σ=([ A_11 A_12; A_12' A_22 ]) where A_i,j i,j∈{ 1,2} is a partition of
Σ. Then, Σ^-1=Θ=([ B_11 B_12; B_12' B_22 ]) where
B_11=(A_11-A_12A_22^-1A_21)^-1, B_12=-B_11A_12A_22^-1, B_22=(A_22-A_21A_11^-1A_12)^-1
(e.g. Lauritzen, 1996, eq. B.2).
The conclusions from Lemma <ref>
will be used in a number of places. Hence, we prove this first.
§.§ Proof of Lemma <ref>
We prove one point at the time.
Proof of Point 1.
From the condition on A, we have that Var(X_t)=∑_i=0^∞A^iΣ_ε(A')^i.
We note that
∑_i=0^∞ eig_min(A^iΣ_ε(A')^i)≤ eig_j(∑_i=0^∞A^iΣ_ε(A')^i)≤∑_i=0^∞ eig_max(A^iΣ_ε(A')^i)
j=1,2,...,K, where eig_j(·), eig_min(·)
and eig_max(·) are the j^th eigenvalue,
the minimum and the maximum eigenvalue of the argument (Bhatia, 1996,
eq. III.13, using induction). Moreover, we have that
eig_min(Σ_ε) eig_min(A^i(A')^i)≤ eig_min(A^iΣ_ε(A')^i)
and
eig_max(A^iΣ_ε(A')^i)≤ eig_max(A^i(A')^i) eig_max(Σ_ε).
To see this note that
max_x:x'x=1x'AΣ_εA'x≤max_y:y'y=x'A'Axy'Σ_εy= eig_max(A'A) eig_max(Σ_ε)
and similarly for the lower bound and for i>1. Given that the eigenvalues
eig_j(A'A) are in (0,1) and the
eigenvalues eig_j(Σ_ε) are
in (0,∞) by assumption, we conclude that the eigenvalues
of Var(X_t) are bounded away from zero and infinity,
uniformly in K.
Proof of Point 2.
From the definition in (<ref>), we have the
following equality,
Σ=[([ I 0; 0 I ])+([ 0 A; A' 0 ])]([ Γ 0; 0 Γ ]),
where, here, 0 represents a K× K matrix of
zeros. From the assumption on A and the fact that Γ=Var(X_t),
we can use the definition of eigenvalues and, mutatis mutandis, the
previous inequalities, from the proof of Point 1, to deduce the result.
Proof of Point 3.
From (<ref>) and the definition of Σ
as variance of (Z_t',Z_t-1')', we deduce that the
(i,i) element in Θ_11 is the inverse of the
variance of Z_t,i conditioning on Z_t-1,i, all the other
variables and their first lag. Given that the eigenvalues of Σ
are bounded away from zero, uniformly in K, the random variables
are not perfectly correlated. Hence there must be a constant ν>0
as in the statement of the lemma.
Proof of Point 4.
The eigenvalues of Σ_ε are in some compact interval
inside (0,∞), uniformly in K, by assumption.
Hence, the innovation vector has entries that are not perfectly dependent.
This means that no conditional correlation between any two variables
can be equal to one, uniformly in K.
§.§ Proof of Proposition <ref>
It is clear that the process X is a stationary Markov chain. The
mixing coefficients are invariant of monotone transformations of the
random variables. Hence, we can consider the mixing coefficients of
Z in (<ref>). For the Gaussian VAR model in (<ref>),
Theorem 3.1 in Han and Wu (2019) says that the strong mixing coefficient
α(k) for variables k periods apart satisfies
α(k)≤ c|A|_ op^k where c
is the square root of the ratio between the largest and smallest eigenvalue
of Var(Z_t). This ratio is bounded by Lemma <ref>.
On the other hand, |A|_ op is the largest singular
value of A, which is smaller than one, uniformly in K, by assumption.
Hence, the strong mixing coefficients decay exponentially fast.
§.§ Proof of Lemmas <ref> and <ref>
The conditions in Proposition <ref> ensure
that the model is stationary. We use this with no explicit mention
in the following.
§.§.§ Proof of Lemma <ref>
This follows from (<ref>) and Lauritzen (1996,
eq. C3-C4) or from (<ref>).
§.§.§ Proof of Lemma <ref>
By the assumption of the lemma, all edges of the graph of ε_t
are directed. There are also no cycles. Hence, there must be a permutation
matrix Π of the elements in ε_t such that the
i^th element in Πε_t is not a parent of any of the first i-1
elements. This implies the structure Πε_t=Hξ_t
where H is a lower triangular matrix with diagonal entries equal
to one. Note that H can have diagonal elements equal to one because
we are not assuming that 𝔼ξ_tξ_t' is the identity.
The fact that the graph is acyclic means that H is full rank. Otherwise,
we would have a descendant that is an ancestor of itself. Now note
that the inverse of a lower triangular matrix is also lower triangular.
Moreover, if the matrix has diagonal elements equal to one, also the
inverse has diagonal elements equal to one. Hence, we can write H^-1=I-D
where D is as in the statement of the lemma and obtain (<ref>).
To find the infinite moving average representation, rewrite (<ref>)
as H^-1Π(I-AL)Z_t=ξ_t where, here, L is
the lag operator. By assumption, (I-AL) can be inverted
and has an infinite convergent series representation. Hence, we deduce
(<ref>) by standard algebra and the aforementioned
remarks on H.
§.§ Exponential Inequality for Spearman's Rho
For simplicity, we use notation that is local to this section only.
In this section we assume that ((X_t,i,X_t,j))_t≥1
are real valued stationary random variables with exponentially decaying
strong mixing coefficients. We also assume that the variables have
continuous distribution function F_i and F_j. As usual,
we denote by F̂_i and F̂_j the empirical distribution.
Let R_t,i:=∑_s=1^n1_{ X_s,i≤ X_t,i}
be the rank of variable X_t,i and similarly for R_t,j. In
Hoeffding (1948, p.318) we have that the sample version of Spearman's
rho is defined to be
ρ̂_i,j=12/n^3-n∑_t=1^n(R_t,i-n+1/2)(R_t,j-n+1/2).
This same statistic is also used in Liu et al. (2012, proof of Theorem
4.1). Note that other versions of sample Spearman's rho can be
defined. These would essentially be equal to the above up to an additive
O(n^-1) term. Their analysis can be treated in a way
similar to what follows. For simplicity, we only focus on the above.
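For reference, the statistic above can be computed directly from the ranks; a minimal sketch (Python, hypothetical names, assuming no ties) is the following.

```python
import numpy as np

def sample_spearman_rho(x, y):
    """Sample Spearman's rho as in the displayed formula: ranks centred at (n+1)/2
    and rescaled by 12/(n^3 - n)."""
    n = len(x)
    r_x = np.argsort(np.argsort(x)) + 1
    r_y = np.argsort(np.argsort(y)) + 1
    return 12.0 / (n**3 - n) * np.sum((r_x - (n + 1) / 2.0) * (r_y - (n + 1) / 2.0))
```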
We recall the following Bernstein inequality from Merlevède et
al. (2009) which we shall use twice.
Let (Y_t)_t≥1
be a sequence of mean zero, stationary random variables whose absolute
value is uniformly bounded by y̅<∞, and with exponentially
decaying strong mixing coefficients. Then, for n≥4 and z≥0,
there is a constant c_1>0, depending on the mixing coefficients
only and such that
ℙ(|1/n∑_t=1^nY_t|≥ z)≤exp{ -c_1nz^2/(y̅^2+zy̅ln n(lnln n))} .
A general main ingredient for our derivation of an exponential inequality
for (<ref>) is the following.
Under the assumptions
of this section, choose a c_2∈(0,∞), and let
z:=y-n^-c_2 for any y≥ n^-c_2. Then, there is a constant
c_1>0 such that
ℙ(sup_x∈ℝ|F̂_i(x)-F_i(x)|≥ y)≤2exp{ -c_1nz^2/(1+zln n(lnln n))+c_2ln n} .
We can always find a continuous monotone transformation
x↦ g(x)∈[0,1] for x in the range
of X_t,i. Hence, given that 1_{ X_t,i≤ x}=1_{ g(X_t,i)≤ g(x)},
we can assume that X_t,i∈[0,1] for the purpose of
the proof. Note that continuity of g does not mean that g(X_t,i)
is a continuous random variable. Using standard techniques, we replace
the supremum by the maximum over a finite number of elements. We then
apply Lemma <ref>.
To do so, for fixed but arbitrary ϵ>0, we construct intervals
[x_l^L,x_l^U], l=1,2,...,N(ϵ),
such that |F_i(x)-F_i(z)|≤ϵ
for x,z∈[x_l^L,x_l^U]. The construction is
as follows and similar to the one of the Lebesgue integral. Fix an
arbitrary ϵ>0 and divide the interval [0,1]
into N(ϵ) intervals [t_l-1,t_l]
where 0=t_0<t_1<⋯<t_N(ϵ)=1 such that t_l-t_l-1≤ϵ.
Then, N(ϵ) is the smallest integer greater
than or equal to ϵ^-1. Define variables 0≤ x_1^L≤ x_2^L≤⋯≤ x_N(ϵ)^L=1
as x_l^L:=inf{ x>0:F_i(x)≥ t_l-1}.
Similarly, define variables 0≤ x_1^U≤ x_2^U≤⋯≤ x_N(ϵ)^U=1
as x_l^U:=sup{ x≤1:F_i(x)≤ t_l}.
It is not difficult to see that this construction has the aforementioned
properties. Note that we can have [x_l^L,x_l^U]
equal to a singleton, i.e. x_l^L=x_l^U, if there are discontinuities
in F_i and such discontinuities are larger than ϵ.
The following is a standard argument in the proof of the Glivenko-Cantelli
Theorem (van der Vaart and Wellner, 2000, proof of Theorem 2.4.1).
From the fact that F_i(x) and F̂_i(x)
are monotonically increasing, we have that F_i(x_l^L)≤ F_i(x)≤ F_i(x_l^U)
and F̂_i(x_l^L)≤F̂_i(x)≤F̂_i(x_l^U)
for x∈[x_l^L,x_l^U]. Also recall that 𝔼F̂_i(x)=F_i(x).
In consequence,
max_x∈[x_l^L,x_l^U](F̂_i(x)-F_i(x))= max_x∈[x_l^L,x_l^U](1-𝔼)F̂_i(x)≤(1-𝔼)F̂_i(x_l^U)+max_x∈[x_l^L,x_l^U]𝔼(F̂_i(x_l^U)-F̂_i(x))≤(1-𝔼)F̂_i(x_l^U)+ϵ,
using monotonicity and the fact that |F_i(x_l^U)-F_i(x_l^L)|≤ϵ
by construction. In consequence,
max_x∈[0,1](F̂_i(x)-F_i(x))≤max_l∈{ 1,2,...,N(ϵ)}(1-𝔼)F̂_i(x_l^U)+ϵ.
Hence, using the union bound,
(max_x∈[0,1](F̂_i(x)-F_i(x))≥ y)≤ N(ϵ)max_x∈[0,1]((1-𝔼)F̂_i(x)≥ y-ϵ).
Set ϵ=n^-c_2. Apply Lemma <ref>
with Y_t=(1-𝔼)1_{ X_t,i≤ x}
for arbitrary, but fixed x, and z:=y-ϵ=y-n^-c_2.
A similar inequality holds for the minimum over [x_l^L,x_l^U], that is, for F_i(x)-F̂_i(x); this accounts for the factor 2 in the statement.
Hence, we deduce the final result.
Control of the quantity below will be shown to be essentially equivalent
to control of (<ref>).
Under the assumptions
of this section, choose a c_2∈(0,∞), and let
z:=y-n^-c_2 for any y≥ n^-c_2. Then, there is a constant
c_1>0 such that
(|1/n∑_t=1^n(F̂_i(X_t,i)F̂_j(X_t,j)-𝔼F_i(X_t,i)F_j(X_t,j))|≥ y)
≤ 5exp{ -c_1nz^2/(1+zln n(lnln n))+c_2ln n} .
By the triangle inequality and the uniform boundedness
of the empirical distribution function,
|1/n∑_t=1^n(F̂_i(X_t,i)F̂_j(X_t,j)-𝔼F_i(X_t,i)F_j(X_t,j))|
≤|1/n∑_t=1^n(F̂_i(X_t,i)-F_i(X_t,i))|+|1/n∑_t=1^n(F̂_j(X_t,j)-F_j(X_t,j))|
+|1/n∑_t=1^n(F_i(X_t,i)F_j(X_t,j)-𝔼F_i(X_t,i)F_j(X_t,j))|.
We apply Lemma <ref> to the first two terms
on the r.h.s. and Lemma <ref> to the last one
to deduce the result.
The definition of the population version of Spearman's rho (e.g.,
Joe, 1997, p.32) between two random variables with joint distribution
F_i,j and marginals F_i and F_j is ρ_i,j=12∫∫ F_i(x)F_j(y)dF_i,j(x,y)-3.
Hence, we have the following.
Under the assumptions
of this section, there is a constant c_1>0 such that for n
large enough and any x≥6/n,
(max_i,j≤ K|ρ̂_i,j-ρ_i,j|≥ x)≤5exp{ -c_1nx^2/(4(1+xln n(lnln n)))+ln n+2ln K} .
Dividing and multiplying by n^2, (<ref>)
is equal to
12n/(n^2-1)∑_t=1^n(F̂_i(X_t,i)-(n+1)/(2n))(F̂_j(X_t,j)-(n+1)/(2n)).
Again, by simple algebra, the triangle inequality and the fact that
F̂_i has range in [0,1], we have that for
n large enough, e.g. n≥24,
|ρ̂_i,j-12/n∑_t=1^n(F̂_i(X_t,i)F̂_j(X_t,j)-1/4)|≤24/n.
In consequence,
(|ρ̂_i,j-ρ_i,j|≥ x)≤(|1/n∑_t=1^n(F̂_i(X_t,i)F̂_j(X_t,j)-𝔼F_i(X_t,i)F_j(X_t,j))|≥ x-2/n).
We can then apply Lemma <ref> with
c_2=1 and y=x-2n^-1 to the r.h.s. of the above display.
In Lemma <ref> for x≥6/n,
we have z=(x-2/n)-1/n which implies
that z∈[x/2,x]. In Lemma <ref>,
replace z by its lower bound x/2 in the numerator and by its upper bound x in the
denominator of the exponential function to deduce the result.
§.§ Lemmas on Control of the Sample Covariance Estimator and Related
Quantities
To avoid notational trivialities, suppose that K≥ n. If not,
replace K with n in what follows. Recall that ρ_i,j
is the rank correlation between W_t,i and W_t,j. By stationarity,
this does not depend on t. We have the following.
Under the Assumptions, for
n large enough, there is a finite constant c_0 such that
(max_i,j≤ K|ρ̂_i,j-ρ_i,j|≥ c_0√(ln K/n))≤ K^-1.
This follows from the inequality in Lemma <ref>.
There, we set x^2=32ln(K)/(c_1n) to
deduce that for c_0=√(32/c_1),
(max_i,j≤ K|ρ̂_i,j-ρ_i,j|≥ c_0√(ln K/n))≤5exp{ -8ln K/(1+ϵ)+3ln K}
for ϵ=√(32ln(K)/(c_1n))(ln n)(lnln n).
Under the Assumptions, for n large enough, ϵ≤1. Substituting
in the above display we find that the r.h.s. is bounded above by K^-1
and this proves the lemma.
We now show that the correlation matrix obtained from Spearman's rho
converges.
Under the Assumptions,
for n large enough, there is a constant c_0 (the same as in
Lemma <ref>), such that,
(max_i,j≤ K|Σ̂_i,j-Σ_i,j|≥3c_0/π√(ln K/n))≤ K^-1.
From Lemma <ref> we have
that Σ̂_i,j-Σ_i,j=2sin(π/6ρ̂_i,j)-2sin(π/6ρ_i,j).
If the variables were not continuous, we would need to use another
transformation (see the remarks in Section <ref>).
Given that sin(x) is Lipschitz with constant one, the
result follows from Lemma <ref>.
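As an illustration of the transformation used in this proof, the sketch below maps an estimated Spearman correlation matrix into the scaling matrix estimate elementwise; the function name is ours, and the map is only valid under the Gaussian copula with continuous marginals.

import numpy as np

def sigma_from_spearman(rho_hat):
    # Elementwise Sigma_hat_{i,j} = 2*sin(pi/6 * rho_hat_{i,j}); the map is
    # Lipschitz, which is what drives the convergence rate in the lemma.
    sigma_hat = 2.0 * np.sin(np.pi / 6.0 * rho_hat)
    np.fill_diagonal(sigma_hat, 1.0)
    return sigma_hat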
Suppose that the Assumptions
hold. Then, there is a constant c_3>0, such that, for n large
enough,
max_i,j≤ K(|Σ̂_i,j-Σ_i,j|≥ x)≤exp{ -nc_3x^2}
for any x satisfying xn^1/2→∞ and x(ln n)(lnln n)→0.
This follows from the remarks in the proof of Lemma
<ref> and then an application of Lemma <ref> using
the constraints on x.
§.§ Lemmas for the Control of the Precision Matrix Estimator
The following result for the control of the operator norm will be
used in the proofs.
Suppose that
Q̂ and Q are symmetric matrices such that Q has eigenvalues
bounded away from zero and infinity. If |Q̂-Q|_ op=ϵ,
then |Q̂^-1-Q^-1|_ op=O(|Q^-1|_ op^2ϵ)
as long as |Q^-1|_ op<ϵ^-1.
With the present notation, Lemma 4 in Le and Zhong (2021)
says that
|Q̂^-1-Q^-1|_ op≤|Q^-1|_ op|Q^-1(Q̂-Q)|_ op/1-|Q^-1(Q̂-Q)|_ op.
Then, the result follows from the fact that |Q^-1(Q̂-Q)|_ op≤|Q^-1|_ op|Q̂-Q|_ op
together with the condition of the lemma to ensure that the denominator
is greater than zero.
The operator norm can be bounded by the uniform norm of the elements
using the following.
Suppose that
Q̂ and Q are symmetric matrices. Then, |Q̂-Q|_ op≤|Q̂-Q|_0,∞|Q̂-Q|_∞.
First, note that |Q̂-Q|_ op≤|Q̂-Q|_1,∞
because Q̂-Q is symmetric. This is well known because, for
any matrix A (not to be confused with the autoregressive matrix
in (<ref>)), A'Ax=σ^2x where σ^2
is the maximum eigenvalue of A'A and x is the corresponding
eigenvector. Hence, σ^2|x|_∞=|A'Ax|_∞.
By a special case of Holder inequality, |A'Ax|_∞≤|A'|_∞,1|A|_∞,1|x|_∞.
This implies that σ^2=|A|_ op^2≤|A|_1,∞|A|_∞,1.
Then, using the fact that, in our case, A=Q̂-Q is symmetric,
we deduce the inequality at the start of the proof. Moreover, |Q̂-Q|_1,∞≤|Q̂-Q|_0,∞|Q̂-Q|_∞
because |Q̂-Q|_0,∞ is the maximum number
of nonzero elements across the columns of Q̂-Q.
Define the event
E:={ 1_{|Θ̂_i,j|>0}=1_{|Θ_i,j|>0} for all i,j∈[2K]}
We shall derive a number of results conditional on such event. The
event E means that {B̂_i:i∈[2K]}
in Algorithm <ref> correctly
identifies all the nonzero entries in Θ. The next result can
be found in the proof of Theorem 3 in Le and Zhong (2021).
Suppose that the Assumptions
hold. On the event (<ref>), there is a
constant c_4 such that
(|Θ̂-Θ|_∞≥ z)≤2K(|Σ̂-Σ|_∞≥ zc_4).
We can now use the lemmas from Section <ref>.
Suppose that the Assumptions
hold. On the event (<ref>), there is a
constant c_5>0, such that, for n large enough,
(|Θ̂-Θ|_∞≥ z)≤2exp{ -nc_5z^2+3ln K}
for any z satisfying zn^1/2→∞ and z(ln n)(lnln n)→0.
Moreover, |Θ̂-Θ|_∞=O_P(√(ln K/n)).
We bound the r.h.s. in the display of Lemma <ref>
using Lemma <ref> and the union bound. We can
then deduce that the r.h.s. of (<ref>) is bounded
above by 2K^3exp{ -nc_3c_4^2z^2}. Defining
c_5:=c_3c_4^2 and rearranging we deduce the first statement.
The second statement follows by choosing z large enough and proportional
to a quantity O(√(ln K/n)) so that the
first statement immediately gives that |Θ̂-Θ|_∞=O_P(√(ln K/n)).
Such choice of z is consistent with the constraint given in the
lemma.
We also need an exponential inequality for Θ̂_11^-1-Θ_11^-1.
For simplicity, we state the result for Θ̂^-1 rather
than Θ̂_11^-1.
Suppose that the Assumptions
hold and that s√(ln K/n)=o(1). On the event (<ref>),
there is a constant c_6>0 such that, for n large enough,
(|Θ̂^-1-Θ^-1|_∞≥ z)≤2exp{ -ns^-2c_6z^2+3ln K}
for any z satisfying zn^1/2→∞ and z(ln n)(lnln n)→0.
First, we note that for any symmetric matrix Q, |Q|_∞≤|Q|_ op.
This is because |Q|_ op=max_x,yx'Qy where
the maximum is over vectors with unit Euclidean norm. By this remark
and (<ref>) we deduce that the set {|Θ̂^-1-Θ^-1|_∞≥ z}
is contained in the set
{|Θ^-1|_ op|Θ^-1(Θ̂-Θ)|_ op/1-|Θ^-1(Θ̂-Θ)|_ op≥ z} .
For arbitrary events A and B, we shall use the trivial decomposition
A={ A∩ B}∪{ A∩ B^c}⊆{ A∩ B}∪ B^c,
where B^c is the complement of B. Then, we deduce that the
event in the above display is contained in the event
{|Θ^-1(Θ̂-Θ)|_ op≥1/2}∪{|Θ^-1|_ op|Θ^-1(Θ̂-Θ)|_ op≥ z/2}
For z/|Θ^-1|_ op→0, the above
union of two events is contained in the second event. This is the
case because the eigenvalues of Θ are bounded away from zero
and infinity by Lemma <ref>.
Hence, it is sufficient to bound the latter. Using a standard inequality
for operator norms, and then Lemma <ref>,
we deduce that
|Θ^-1(Θ̂-Θ)|_ op≤|Θ^-1|_ op|(Θ̂-Θ)|_0,∞|(Θ̂-Θ)|_∞.
On the event E in (<ref>), |(Θ̂-Θ)|_0,∞≤|Θ|_0,∞≤ s.
We assume E holds without making it explicit in the notation. In
consequence, recalling that, by Lemma <ref>,
σ_max is the largest singular value of Θ^-1=Σ,
which is bounded uniformly in K, we have that
(|Θ^-1|_ op|Θ^-1(Θ̂-Θ)|_ op≥ z/2)≤(|(Θ̂-Θ)|_∞≥ z/(2σ_max^2s)).
By Lemma <ref> and the conditions of the present
lemma, the r.h.s. is bounded above by 2exp{ -nc_5z^2/(2σ_max^2s)^2+3ln K}.
Setting c_6=c_5/(4σ_max^4), which is strictly
positive, gives the result.
The following result will be used in due course.
Suppose that
U, V_1, V_2 and Û, V̂_1, V̂_2
are random variables. Then, the event {|Û/V̂_1V̂_2-U/V_1V_2|≥ x}
is contained in the union of the following three events: {|Û(V̂_1-V_1)/V̂_1V_1V_2|≥ x/4},
{|Û(V̂_2-V_2)/V̂_1V̂_2V_2|≥ x/4}
and {|Û-U/V_1V_2|≥ x/2}.
Add and subtract Û/V_1V_2 to find
that
Û/V̂_1V̂_2-U/V_1V_2=(Û/V̂_1V̂_2-Û/V_1V_2)+(Û/V_1V_2-U/V_1V_2).
The first term on the r.h.s. can be written as
(Û/V̂_1V̂_2-Û/V_1V_2)=-(Û/V̂_1V̂_2V_1V_2)[V̂_2(V̂_1-V_1)+V_1(V̂_2-V_2)].
We can then deduce the statement of the lemma by basic set inequalities.
Let Ξ̂_i,j=Σ̂_ε,i,j/√(Σ̂_ε,i,iΣ̂_ε,j,j)
and similarly for Ξ_i,j using Σ_ε in place
of Σ̂_ε. These are estimated and population
correlation coefficients between ε_t,i and ε_t,j.
Suppose that the Assumptions
hold. There is a constant c_7>0, such that, for n large enough,
max_i,j≤ K(|Ξ̂_i,j-Ξ_i,j|≥ z)≤16exp{ -ns^-2c_7z^2+3ln K}
for any z satisfying zn→∞ and z(ln n)(lnln n)→0.
We apply Lemma <ref>
to deduce that we need to bound the following probabilities
(E_1):=(|Σ̂_ε,i,j(Σ̂_ε,i,i-Σ_ε,i,i)/√(Σ̂_ε,i,iΣ_ε,i,iΣ_ε,j,j)|≥ z/4),
(E_2):=(|Σ̂_ε,i,j(Σ̂_ε,j,j-Σ_ε,j,j)/√(Σ̂_ε,i,iΣ̂_ε,j,jΣ_ε,j,j)|≥ z/4)
and
(E_3):=(|Σ̂_ε,i,j(Σ̂_ε,i,j-Σ_ε,i,j)/√(Σ_ε,i,iΣ_ε,j,j)|≥ z/2).
We further define the following events: E_4:={max_i,j≤ K|Σ̂_ε,i,j|≤3/2},
and E_5:={min_i≤ KΣ̂_ε,i,i≥σ_min/2}
where σ_min>0 is the minimum eigenvalue of Σ, by
Lemma <ref>. Then, (E_1)≤(E_1∩ E_4∩ E_5)+(E_4^c)+(E_5^c)
where, as usual, the superscript c is used to denote the complement
of a set. Before bounding each term separately, we note that by the
Cauchy interlacing theorem (Bhatia, 1996, Corollary III. 1.5), the
smallest eigenvalue of Σ_ε is no smaller than
σ_min. Moreover, Σ_ε,i,i≥σ_min.
To see this note that the l.h.s. is equal to e_i'Σ_εe_i,
where e_i is the vector with i^th entry equal to one and
all other entries equal to zero. On the other hand the r.h.s. is smaller
than min_x:x'x=1x'Σ_εx by the definition of
minimum eigenvalue and the Cauchy's interlacing theorem. Now,
(E_1∩ E_4∩ E_5)≤ (|3σ_min^-3/2(Σ̂_ε,i,i-Σ_ε,i,i)|≥ z/4)
≤ 2exp{ -ns^-212^-2σ_min^3c_6z^2+3ln K}
using the bounds implied by the events E_4 and E_5, the
aforementioned remarks on Σ_ε,i,i, and then Lemma
<ref>. Noting that |Σ̂_ε,i,j|≤|Σ_ε,i,j|+|Σ̂_ε,i,j-Σ_ε,i,j|
and that |Σ_ε,i,j|≤1 because ε_t
is the innovation of the variable Z_t with entries having variance
one, we deduce that (E_4^c)≤(max_i,j≤ K|Σ̂_ε,i,j-Σ_ε,i,j|≥1/2)
and this probability is eventually bounded by (<ref>)
as long as z→0. By the same argument used to bound (E_4^c),
we deduce that (E_5^c) is eventually less than
(<ref>). Hence, (E_1) is bounded
by three times the r.h.s. of (<ref>) for n
large enough. By similar arguments, we also note that (E_2)
and (E_3) are bounded by three and two times, respectively,
the r.h.s. of (<ref>). Putting everything together,
and setting c_7:=12^-2σ_min^3c_6, the result follows.
For any set 𝐤⊂[K] we let Ξ̂_i,j|𝐤
be the correlation of ε_t,i with ε_t,j
conditioning on {ε_t,l:l∈𝐤}.
Under the Assumptions,
there is a constant c_7>0 (same as in Lemma <ref>),
such that, for n large enough,
max_i,j≤ K,𝐤∈𝒦_i,j(|Ξ̂_i,j|𝐤-Ξ_i,j|𝐤|≥ z)≤16exp{ -(n-m)s^-2c_7z^2+3ln K}
for 𝒦_i,j⊆{[K]∖{ i,j}}
of cardinality m and z satisfying
z(n-m)→∞ and z(ln(n-m))(lnln(n-m))→0.
By Lemma 2 in Kalisch and Bühlmann (2007) if the
distribution of the sample correlation coefficient is f(x;n)
where n is the sample size, the distribution of the partial correlation
coefficient is the same with n replaced by n-m, i.e. f(x;n-m).
Hence, we can use Lemma <ref> with n replaced
by n-m everywhere and the lemma is proved.
The next is a trivial variation of lemma 3 in Kalisch and Bühlmann
(2007) adapted to our inequalities.
Suppose that
the Assumptions hold. Define L:=1/(1-2^-2[1+σ̅]^2)
where σ̅ is as in Lemma <ref>.
For g(x)=2^-1ln(1+x/1-x), x∈(-1,1),
there is a constant c_7>0 (same as the one in Lemma <ref>),
such that, for n large enough,
max_i,j≤ K,𝐤∈𝒦_i,j(|g(Ξ̂_i,j|𝐤)-g(Ξ_i,j|𝐤)|≥ z)≤32exp{ -(n-m)s^-2c_7(z/L)^2+3ln K}
for 𝒦_i,j⊆{[K]∖{ i,j}}
of cardinality m and for z satisfying z(n-m)→∞
and z(ln(n-m))(lnln(n-m))→0.
By the mean value theorem g(x)-g(y)=∂ g(ỹ)(x-y)
for ỹ is in the convex hull of { x,y},
x,y∈(-1,1); here, ∂ g(ỹ)=1/(1-ỹ^2)
is the derivative of g evaluated at ỹ. Suppose |x-y|≤(1-σ̅)/2
and y∈[-σ̅,σ̅] for some σ̅<1.
Note that ỹ^2≤(y+|x-y|)^2,
so that ∂ g(ỹ)≤ L after substituting
the aforementioned upper bounds for y and |x-y| in
terms of σ̅ and using the definition of L. Set V:=Ξ̂_i,j|𝐤-Ξ_i,j|𝐤
and U:=∂ g(Ξ̃_i,j|𝐤) where
Ξ̃_i,j|𝐤 is in the convex hull of {Ξ̂_i,j|𝐤,Ξ_i,j|𝐤}.
The event { UV≥ z} is contained in the union of
the events { V≥ z/L} and { U>L}.
From Lemma <ref> we have that (V≥ z/L)≤16exp{ -(n-m)s^-2c_7(z/L)^2+3ln K}
for z satisfying the conditions of that lemma. The lemma then follows
if we show that { U≥ L}⊆{ V≥ z/L}
for z→0, as in the statement of the lemma. To this end,
note that { U≥ L} is contained in the union of
the events { U>L,V≤(1-σ̅)/2}
and { V>(1-σ̅)/2}. The latter
event is eventually contained in { V≥ z/L} when
z→0. Finally, the event { U>L,V≤(1-σ̅)/2}
has probability zero because, by the remarks at the beginning of the
proof, we know that U≤ L when V≤(1-σ̅)/2
and |Ξ_i,j|𝐤|≤σ̅, which is
the case by Lemma <ref>, uniformly
in K, for any 𝐤∈𝒦_i,j. Hence, the lemma
is proved.
§.§ Technical Lemmas for Lasso
For S⊆[2K] and some constant L>0, recall
that the square of the compatibility constant is ϕ_ comp^2(L,S,Σ):=min{sb'Σ b/|b_S|_1^2:b∈ℛ(L,S)}
where ℛ(L,S):={ b:|b_S^c|_1≤ L|b_S|_1, |b_S|_1≠0}
(van de Geer and Bühlmann, 2009) . Here S^c is the complement
of S in [2K]. Throughout this section, the notation
is as in Algorithm <ref> and Section <ref>
and σ_min is as in Lemma <ref>.
We have the following.
Under the Assumptions,
for any S⊆[2K] of cardinality s, and L>0,
ϕ_ comp(L,S,Σ̂)≥σ_min^1/2-(L+1)√(s|Σ̂-Σ|_∞).
Note that the square root of the minimum eigenvalue
of a matrix is a lower bound for the compatibility constant. To see
this, note that sb'Σ b/|b_S|_1^2≥ sσ_min|b|_2^2/|b_S|_1^2≥σ_min
because s|b|_2^2≥ s|b_S|_2^2≥|b_S|_1^2.
Then, the lemma is special case of Corollary 10.1 in van de Geer and
Bühlmann (2009).
We now derive a basic bound for the Lasso procedure computed across
2K response variables, one at the time, using the sufficient statistic
Σ̂.
Define
λ_0=2(1+max_i∈[2K]∑_j∈[2K]:j≠ i|Θ_i,j/Θ_i,i|)|Σ̂-Σ|_∞.
Under the Assumptions, on the event E_ Lasso:={λ≥2λ_0},
we have that max_i∈[K]|β̂^(i)-β^(i)|_1=O_P(sλ/σ_min).
We prove first the result for a fixed i. We shall
then see that the bound is uniform in i∈[K]. To avoid
notational complexities, we use a notation that is only local to this
proof. Set Γ=Σ_-i,-i , γ=Σ_-i,i, b=β_-i^(i)
and b̂=β̂_-i^(i). Note that b=Γ^-1γ
by definition. As in the text we use the hat for estimators of various
quantities. Write δ=b̂-b. Given that the Lasso estimator
minimises the Lasso objective function we have that
-2γ̂'b̂+b̂'Γ̂b̂+λ|b̂|_1≤-2γ̂'b+b'Γ̂b+λ|b|_1.
This can be rearranged to give the following inequality
δ'Γ̂δ≤2(γ̂'-b'Γ̂)δ+λ(|b|_1-|b̂|_1)
(Loh and Wainwright, 2012, eq. 5.1). Adding and subtracting b'Γ,
we write (γ̂'-b'Γ̂)=(γ̂'-b'Γ)+b'(Γ-Γ̂).
Given that b'Γ=γ', by definition of γ and γ̂,
we have that |γ̂-Γ b|_∞≤|Σ̂-Σ|_∞.
By definition of Γ and Γ̂ and a basic inequality,
|(Γ-Γ̂)b|_∞≤|b|_1|Σ̂-Σ|_∞.
However, |b|_1=∑_j∈[2K]:j≠ i|Θ_i,j/Θ_i,i|
because the regression coefficients can be obtained from the precision
matrix: β_j^(i)=-Θ_i,j/Θ_i,i.
Hence, by definition of λ_0 as in the statement of the
lemma and the last display, we deduce that δ'Γ̂δ≤λ_0|δ|_1+λ(|b|_1-|b̂|_1).
This is in the form of the basic inequality in van de Geer and Bühlmann
(2009, last display on p.1387). On the set {λ≥2λ_0},
the r.h.s. of the previous inequality is bounded above by 2^-1λ|δ|_1+λ(|b|_1-|b̂|_1).
Then, by arguments in van de Geer and Bühlmann (2009, second
and third display on p.1388, replacing λ_0 with 2^-1λ
in their definition of L, so that here L=3), we deduce that
|δ|_1≤4√(sδ'Γ̂δ/ϕ̂_ comp^2)
where ϕ̂_ comp:=ϕ_ comp(L,S,Σ̂)
is the compatibility constant, which we shall show to be strictly
positive. Lemma 11.2 in van de Geer and Bühlmann (2009) says
that √(δ'Γ̂δ)=O(λ√(s)/ϕ̂_ comp)
once we replace λ_0 with λ/2 in their lemma. By
Lemmas <ref> and <ref>,
ϕ̂_ comp=σ_min^1/2-O_P(√(sln K/n))
choosing L=3 in Lemma <ref>. We also
have that √(sln K/n)=o(σ_min^1/2).
By these remarks and the above display, we deduce |δ|_1=O_P(sλ/σ_min).
The bound is uniform in i∈[K] by Lemma <ref>.
Hence, the result follows.
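For completeness, the Lasso objective used in this proof depends on the data only through the sufficient statistic Σ̂; a minimal coordinate-descent sketch (one possible implementation, with names of ours) is as follows.

import numpy as np

def nodewise_lasso(Sigma_hat, i, lam, n_iter=200):
    # Minimise -2*gamma'b + b'Gamma b + lam*|b|_1 with Gamma = Sigma_hat[-i,-i]
    # and gamma = Sigma_hat[-i,i], i.e. the Lasso regression of variable i on
    # the remaining variables computed from the scaling matrix only.
    idx = [j for j in range(Sigma_hat.shape[0]) if j != i]
    Gamma = Sigma_hat[np.ix_(idx, idx)]
    gamma = Sigma_hat[idx, i]
    b = np.zeros(len(idx))
    for _ in range(n_iter):
        for j in range(len(idx)):
            # partial residual excluding coordinate j, then soft-thresholding
            r = gamma[j] - Gamma[j, :] @ b + Gamma[j, j] * b[j]
            b[j] = np.sign(r) * max(abs(r) - lam / 2.0, 0.0) / Gamma[j, j]
    return b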
Suppose that the
Assumptions hold. Then, for λ_0 as in (<ref>),
λ_0=O_P((ω/ν^2)√(ln K/n))
where ν is as in Lemma <ref>.
Under the Assumptions, an upper bound for (<ref>)
is given by 2(1+ω/ν^2)|Σ̂-Σ|_∞.
This is O_P((ω/ν^2)√(ln K/n))
using Lemma <ref>. Hence, the result follows.
§.§ Proof of Theorem <ref>
This follows from Lemma <ref>.
§.§ Proof of Theorem <ref>
An upper bound for (<ref>) is given by 2(1+ω/ν^2)|Σ̂-Σ|_∞.
Then, in Lemma <ref>, the probability of the event E_ Lasso satisfies (E_ Lasso)→1
as K→∞, for λ=4(1+ω/ν^2)×3c_0/π√(ln K/n),
by Lemma <ref>. Therefore, by Lemma <ref>,
max_i∈[K]|β̂^(i)-β^(i)|_1=O_P(ω s√(ln K/n))
and we can choose c=12(1+ν^-2)c_0/π in the statement
of the theorem. Hence, the result follows.
§.§ Proof of Theorem <ref>
Note that θ_min is a lower bound on min_i,j{|β_j^(i)|:|β_j^(i)|>0}.
This is because |β_j^(i)|=|Θ_i,j/Θ_i,i|.
Note that -Θ_i,i is the variance of Z_t,i conditioning
on all other covariates. Hence, |Θ_i,i|≤1
because Var(Z_ti)=1 so that |β_j^(i)|
is either zero or greater than θ_min. Then, the event in
the probability of the theorem is contained in the event max_i∈[K]|β̂^(i)-β^(i)|_1>τ,
because τ=o(θ_min). The latter event has
probability going to zero according to Theorem <ref>.
§.§ Proof of Theorem <ref>
By Theorem 6 in Cai et al. (2011), |Ω̂-Θ|_∞≤4|Θ|_1,∞λ_n,
on the event E_ Clime:={λ_n≥|Θ|_1,∞|Σ̂-Σ|_∞}.
Choosing λ_n=ω(3c_0/π√(ln K/n))
, by Lemma <ref>, (E_ Clime)→1
as K→∞.
§.§ Proof of Theorem <ref>
Due to the fact that |Θ_i,j|∈{ 0}∪[θ_min,∞)
and |Ω̂_i,j|∈{ 0}∪[τ,∞)
uniformly in i,j∈[2K], the event in the probability
of the theorem is eventually contained in {|Ω̂-Θ|_∞≥τ}.
This goes to zero by Theorem <ref>
because τ is of larger order of magnitude than |Ω̂-Θ|_∞.
§.§ Proof of Theorem <ref>
Under the event E in (<ref>), we are
within the framework of the results in Le and Zhong (2021). When such
event is true, the result follows from Theorem 3 in Le and Zhong (2021).
The proof of their result requires a bound in probability for |Σ̂-Σ|_∞;
see the third display on their page 12. In their proof this is denoted
by the symbol |W_X,nj|_∞. We control this quantity
using Lemma <ref>. To finish the proof note
that (E)→1 using either Theorem <ref>
or Theorem <ref>.
§.§ Proof of Theorem <ref>
From Lemma <ref>, recall that Σ_ε=Θ_11^-1
and A= -Θ_11^-1Θ_12. By Lemmas <ref>
and <ref>, the Assumptions and
Theorem <ref>, we deduce that
|Θ̂_11^-1-Θ_11^-1|_ op=O_P(s√(ln K/n))
on the event E in (<ref>); note that
|Θ_11|_0,∞≤ s. The event E has probability
going to one by either Theorem <ref>
or Theorem <ref>. This proves the first
bound in the theorem. To prove the convergence of the autoregressive
matrix estimator, we note that A-Â=Θ̂_11^-1Θ̂_12-Θ_11^-1Θ_12.
The r.h.s. can be rewritten as Θ̂_11^-1(Θ̂_12-Θ_12)+(Θ̂_11^-1-Θ_11^-1)Θ_12.
The first term in the sum is equal to
Θ_11^-1(Θ̂_12-Θ_12)+(Θ̂_11^-1-Θ_11^-1)(Θ̂_12-Θ_12).
Then, by standard inequalities and the previous bounds, it is not
difficult to deduce that its operator norm is O_P(s√(ln K/n)).
The same follows for the operator norm of (Θ̂_11^-1-Θ_11^-1)Θ_12.
This concludes the proof of the theorem.
§.§ Proof of Theorem <ref>
The assumptions in Kalisch and Bühlmann (2007) are satisfied
by our Assumptions together with the faithfulness condition stated
in the theorem. In particular, from Kalisch and Bühlmann (2007,
proof of Lemma 4), it is sufficient to bound the probability of a
Type I and Type II error, as given by the following
(|g(Ξ̂_i,j|𝐤)-g(Ξ_i,j|𝐤)|≥ z)≤32exp{ -(n-m)s^-2c_7(z/L)^2+3ln K}
where m is the cardinality of 𝐤, g is as defined
in Lemma <ref>, and setting z=c_n
where c_n is as in Kalisch and Bühlmann (2007): c_n≍ n^-η_c.
Choosing m equal to the maximal number of adjacent nodes, there
are O(K^m) hypotheses to test. By Lemma 5 in Kalisch
and Bühlmann (2007), we can assume m≤ s with probability
going to one. By this remark and the union bound we need the following
to go to zero: 32K^sexp{ -(n-s)s^-2c_7(c_n/L)^2+3ln K}.
By the Assumptions, s=O(n^η_s)=o(n^1/2)
and K^s=O(n^sη_K) for some finite η_K.
Hence we must have n^η_sln n=o(n^1-2(η_s+η_c)).
This is the case if 2η_c+3η_s<1, as stated in the theorem.
The theorem is then proved following the steps in the proof of Lemma
4 in Kalisch and Bühlmann (2007).
§.§ Proof of Theorem <ref>
Define the set E_G:={Ĝ=G}, where Ĝ
is the PCDAG estimated using Algorithm <ref> and G
is the true PCDAG. Hence, on E_G we have that 𝒱̂(i)=𝒱(i).
By Theorem <ref>, the event E_G has
probability going to one. Hence, in what follows, we shall replace
𝒱̂(i) with 𝒱(i).
By the assumption of the present theorem, G has all edges that
are directed. Let
Ψ̂:=[[ Σ̂_ε,𝒱̂(1),𝒱̂(1) 0 ⋯ 0; 0 Σ̂_ε,𝒱̂(2),𝒱̂(2) ⋱ ⋮; ⋮ 0 ⋱ 0; 0 ⋯ 0 Σ̂_ε,𝒱̂(K),𝒱̂(K) ]]
and
Φ̂:=[[ Σ̂_ε,𝒱̂(1),1 0 ⋯ 0; 0 Σ̂_ε,𝒱̂(2),2 ⋱ ⋮; ⋮ 0 ⋱ 0; 0 ⋯ 0 Σ̂_ε,𝒱̂(K),K ]];
where the symbol 0 denotes a generic conformable matrix
of zeros. Then, the consecutive nonzero entries in the i^th column
of Ψ̂^-1Φ̂ are equal to d̂_i as defined
in Algorithm <ref>. Here, we shall define the
population version of the above by Ψ and Φ. We define
a matrix R such that Δ=(RΨ̂^-1Φ̂)'.
The matrix R reshapes Ψ̂^-1Φ̂ so that we can
find Δ. We write such matrix R as
R:=[[ R_1^(1) R_1^(2) ⋯ R_1^(K); R_2^(1) R_2^(2) ⋯ R_2^(K); ⋮ ⋮ ⋱ ⋮; R_K^(1) R_K^(2) ⋯ R_K^(K) ]],
where R_k^(i) is a 1×|𝒱(i)|
vector defined as follows. If k∉𝒱(i),
then, R_k^(i) is a row vector of zeros; for example
R_k^(k)=0, k∈[K]. If k∈𝒱(i),
R_k^(i) will have a one in the position such that
R_k^(i)d̂_i'ε_t,𝒱(i)=d̂_i,jε_t,k,
where j is the position of the element in 𝒱(i)
that is equal to k; d̂_i,j is the estimated regression
coefficient of ε_t,k in the regression of ε_t,i
on ε_t,𝒱(i). This also means that
the number of ones in the k^th row of R is equal to the number
of direct descendants of the variable ε_t,k. We denote
such number by κ_k. Now, note that |RΨ̂^-1Φ̂-RΨ^-1Φ|_ op≤|R|_ op|Ψ̂^-1Φ̂-Ψ^-1Φ|_ op.
Then, |R|_ op^2 is the maximum eigenvalue of
RR' and the latter matrix is diagonal with (k,k)
entry equal to κ_k. It is easy to see that RR' is diagonal
because the positions for two different parents cannot overlap, i.e.
R_k^(i)(R_l^(i))'=0 when
k≠ l. Then, |R|_ op=κ^1/2, where
κ:=max_kκ_k, as defined in the theorem. Hence, it
remains to bound |Ψ̂^-1Φ̂-Ψ^-1Φ|_ op;
note that the singular values of a matrix are invariant of transposition.
Adding and subtracting Ψ^-1Φ̂ , using the triangle
inequality, and a basic norm inequality,
|Ψ̂^-1Φ̂-Ψ^-1Φ|_ op≤|Ψ̂^-1-Ψ^-1|_ op|Φ̂|_ op+|Ψ^-1|_ op|Φ̂-Φ|_ op.
By Lemma <ref>, |Ψ̂^-1-Ψ^-1|_ op≤|Ψ^-1|_ op^2|Ψ̂-Ψ|_ op.
The maximum singular value of a block diagonal matrix is the maximum
of the singular values of each of the blocks. By Cauchy's interlacing
theorem, |Ψ̂-Ψ|_ op≤|Σ̂_ε-Σ_ε|_ op
and the latter is O_P(s√(ln K/n)) by
Theorem <ref>. Using again
Cauchy's interlacing theorem, we deduce that the largest singular
value of Ψ^-1 is bounded above by the largest singular value
of Θ, which is finite. Moreover, |Φ̂|_ op≤|Φ|_ op+|Φ̂-Φ|_ op.
The maximum singular value of Φ is just the maximum of Σ_ε,𝒱̂(i),i'Σ_ε,𝒱̂(i),i
w.r.t. i∈[K]. It is increasing in the cardinality
of 𝒱̂(i). Hence, Σ_ε,𝒱̂(i),i'Σ_ε,𝒱̂(i),i≤Σ_ε,·,i'Σ_ε,·,i,
recalling the notation at the start of Section <ref>.
The latter is bounded above by max_x'x≤1x'Σ_ε'Σ_εx=|Σ_ε|_ op^2,
which is bounded, by the Assumptions. By the same argument as before,
the maximum singular value of Φ̂-Φ is the square root
of the largest, w.r.t. i∈[K], of the maximum eigenvalue
of
(Σ̂_ε,𝒱̂(i),i-Σ_ε,𝒱(i),i)'(Σ̂_ε,𝒱̂(i),i-Σ_ε,𝒱(i),i)
where on E_G, 𝒱̂(i)=𝒱(i).
This quantity is increasing in the cardinality of 𝒱(i)
so that the square root of the above display is bounded above by |Σ̂_ε-Σ_ε|_ op,
which is O_P(s√(ln K/n)) by Theorem <ref>.
Using the derived upper bounds, it is easy to deduce that (<ref>)
is O_P(s√(κln K/n)).
From Lemma <ref>, deduce that Πε_t=DΠε_t+ξ_t.
This can be rewritten as ε_t=Π^-1DΠε_t+Π^-1ξ_t.
Hence, ε_t=Δε_t+Π^-1ξ_t, where
Δ=Π^-1DΠ. Now, note that on the event E_G, as defined
at the start of the proof, any permutation matrix Π̂ that
makes Π̂Δ̂Π̂^-1 lower triangular, with
diagonal entries equal to zero, also satisfies (<ref>) when
we replace Π with it. According to Algorithm <ref>
we choose the one that requires the least number of row permutations
of the identity, which is unique. Then, on E_G, Π̂=Π
because also Π is unique. Therefore, on E_G, D̂:=Π̂Δ̂Π̂^-1
converges to D:=ΠΔΠ^-1. This shows the first statement
of the theorem. The convergence rate of Ĥ-H to zero can be
deduced from the first statement of the theorem together with Lemma
<ref>, and Cauchy's interlacing theorem
and the definition Σ_ε=H(𝔼ξ_tξ_t')H'
in order to bound the singular values of H^-1:=(I-D).
§.§ Proof of Results in the Appendix
§.§.§ Proof of Lemma <ref>
We prove each point separately.
Points 1-2.
It follows from Rüschendorf and de Valk (1993, Proposition 1)
and the fact that Φ^-1 is the quantile function of a standard
normal random variable.
Point 3.
Recall that V_t,1,V_t,2 are independent of X_t,1,X_t,2
and uniformly distributed in [0,1]. It is clear that
the population Spearman's rho obtained using the transformation (<ref>)
depends on π_V=𝔼V_t,1V_t,2. When 𝔼V_t,1𝔼V_t,2=1/2,
we can deduce the result by computing the expectation w.r.t. V_t,1
and V_t,2 and then using simple algebra and the fact that F̃_1(X_t,1,V_t,1),F̃_2(X_t,2,V_t,2)
are uniformly distributed.
Point 4.
Note that ρ is the definition of the population Spearman's rho
(Joe, 1997, p.32) and Z_t,1,Z_t,2 are standard normal. Then,
their correlation is the stated function of Spearman's rho (Liu et
al., 2012).
Point 5.
Let X_t,1' and X_t,2' be two independent copies of X_t,1
and X_t,2, independent of each other. Note that F_i(x)=𝔼1_{ X_t,i'≤ x},
i=1,2. By these remarks and Fubini's Theorem,
𝔼F_1(X_t,1)F_2(X_t,2)=𝔼^X_t,1'𝔼^X_t,2'(X_t,1≥ X_t,1',X_t,2≥ X_t,2')
where 𝔼^X_t,k' is expectation w.r.t. the marginal
law of X_t,k', k=1,2. By the fact that X_t,k has same
distribution as X_t,k', k=1,2, the r.h.s. of the above display
is equal to 𝔼^X_t,1𝔼^X_t,2C̅(F_1(X_t,1),F_2(X_t,2)),
where C̅ is a survival copula. This will not be unique everywhere,
unless the marginals are continuous. However, by assumption we can
choose C̅ as the survival Gaussian copula, among possibly
other copulae. Recall the definition of the bivariate Gaussian copula
with scaling matrix Σ with (1,2) entry Σ_1,2=r_V:
C(u_1,u_2):=Φ(Φ^-1(u_1),Φ^-1(u_2);r_V).
By symmetry of C, we have that
C̅(F_1(X_t,1),F_2(X_t,2))=C(1-F_1(X_t,1-),1-F_2(X_t,2-)).
Taking marginal expectations 𝔼^X_t,1𝔼^X_t,2,
the r.h.s. of the above display is exactly h(r_V).
The strict monotonicity of h(r) w.r.t. r is a property
of the normal distribution and follows from Fan et al. (2017, Lemma
2).
Point 6.
This follows by repeated use of the triangle inequality and the fact
that 1/n∑_t=1^n(1-𝔼)F_1(X_t,1)F_2(X_t,2)
converges to zero in probability by ergodicity.
§.§.§ Proof of Lemma <ref>
By the assumption of the model, X_t,k:=f_k^-1(Z_t,k).
From (<ref>) we deduce that Z_t=AZ_t-1+Π^-1Hξ_t
and in consequence that Z_t+s=A^s+1Z_t-1+∑_r=0^sA^rΠ^-1Hξ_t+s-r.
Then, (<ref>) follows by taking
conditional expectation. The second result in the lemma follows by
the chain rule.
§ CHOICE OF TUNING PARAMETERS
Algorithms <ref> and <ref> require
to choose the penalty parameter λ and the threshold τ.
As shown in Theorems <ref> and <ref>
we need τ>λ. The exact values can be chosen by cross-validation
(CV). CV may not be suitable for time series problems. However, it
has been shown to work for prediction problems in the case of autoregressive
processes of finite order (Burman and Nolan, 1992). To this end, we
divide the sample data into n_ CV nonoverlapping blocks
of equal size each. Each block is a test sample. Given the i^th
test sample, we use the remaining data as i^th estimation sample.
Compute Θ̂ on the i^th estimation sample and denote
this by Θ̂_ est(λ,τ,i) to
make the dependence on the parameters and block explicit. Compute
the scaling matrix Σ̂ on the i^th test sample using
Algorithm <ref> and denote it by Σ_ test(i)
to make the dependence explicit. We minimize the negative loglikelihood:
1/n_ CV∑_i=1^n_ CV[ Trace(Σ̂_ test(i)Θ̂_ est(λ,τ,i))-ln det(Θ̂_ est(λ,τ,i))]
w.r.t. (λ,τ)∈𝒯
where 𝒯⊂(0,∞)^2. Here, for any
matrix A, diag(A) denotes the diagonal matrix with the same
diagonal entries as A.
In the simulations the parameter τ is fixed to 2λ,
and we select λ employing CV with n_CV=5. Starting with
a penalization equal to λ=0.10, we first search (by dividing
iteratively by two) a value for the minimum λ such that all
off-diagonal elements of Θ̂_11 are zero (precisely
smaller than 1e-6). We denote this value as λ_0. Then we
search for the optimal λ in {λ_0/2,λ_0/(2^2),…,λ_0/(2^5)}.
Computing both optimal parameters and a causal graph from the PC algorithm
can be time consuming over many simulations. Hence, in our simulations,
we employ an additional simplification. Rather than carrying out CV
for each simulation, we use two separate simulation samples to compute
two values of λ according to the aforementioned procedure.
We then use the average of these two values as tuning parameter λ
in all simulations with the same design.
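A minimal sketch of this cross-validation loop is given below; estimate_precision and estimate_scaling are placeholders of ours standing in for the algorithms referenced above, and the block splitting is simplified relative to the description in the text.

import numpy as np

def select_lambda_cv(data, lambdas, estimate_precision, estimate_scaling, n_cv=5):
    # Cross-validated negative Gaussian log-likelihood over a grid of lambdas,
    # with tau fixed to 2*lambda as in the simulations.
    n = data.shape[0]
    blocks = np.array_split(np.arange(n), n_cv)      # nonoverlapping test blocks
    scores = []
    for lam in lambdas:
        loss = 0.0
        for test_idx in blocks:
            train_idx = np.setdiff1d(np.arange(n), test_idx)
            theta_est = estimate_precision(data[train_idx], lam=lam, tau=2 * lam)
            sigma_test = estimate_scaling(data[test_idx])
            _, logdet = np.linalg.slogdet(theta_est)
            loss += np.trace(sigma_test @ theta_est) - logdet
        scores.append(loss / n_cv)
    return lambdas[int(np.argmin(scores))]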
§.§ Choice of VAR Order Using AIC
To choose a number of lags greater than one, as in Section <ref>,
we can use Akaike's information criterion (AIC). The likelihood of
the latent Gaussian VAR (<ref>) of order greater than
one is proportional to -ln det(Σ̅_ε)
where Σ̅_ε is the estimator computed from
Algorithm <ref> modifying Ω̂
so that Ω̂_i,j=1 for i,j∈[K]. This
means that no zero restriction is imposed on the submatrix Θ_11=Σ_ε^-1.
We can use the number of nonzero elements in Θ̂_12 as the number of parameters for the penalty in AIC.
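Under our reading of this criterion (the exact scaling of the penalty is an assumption of ours), a possible implementation is sketched below; estimate_unrestricted is a placeholder for the modified Algorithm <ref>.

import numpy as np

def select_var_order_aic(data, max_order, estimate_unrestricted):
    # AIC over lag orders p: n*logdet(Sigma_bar_eps(p)) + 2*nnz(Theta_hat_12(p)),
    # where Sigma_bar_eps is the innovation scaling matrix obtained without zero
    # restrictions on Theta_11 and the penalty counts nonzeros of Theta_hat_12.
    n = data.shape[0]
    aic = []
    for p in range(1, max_order + 1):
        sigma_eps, theta_12 = estimate_unrestricted(data, order=p)
        _, logdet = np.linalg.slogdet(sigma_eps)
        aic.append(n * logdet + 2 * np.count_nonzero(theta_12))
    return int(np.argmin(aic)) + 1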
§ FINITE SAMPLE ANALYSIS VIA SIMULATIONS
We assess the finite sample performance of the different estimators
and evaluate their asymptotic properties for various degrees of time
series persistence and cross-sectional dimension. We compare our results
to naive methods that either do not account for sparsity in Θ
or ignore the time series structure of the data.
§.§ The True Model
To generate the time series of equation (<ref>) the
K variables are divided into independent clusters.
Each cluster is composed by N variables and shares the same causal
structure as well as the autoregressive matrix. We denote with A
and H the related coefficients of equation (<ref>)
for each cluster. The matrix H is the matrix which
relates ε_t with the associated structural shocks ξ_t
of a selected cluster. For the sake of simplicity, for each cluster,
the variables' order coincides with the topological order so that
the matrix Π in Lemma <ref> can be
set equal to the identity.
We consider N=3 and N=4. When N=3 the three basic causal
structures are selected for each cluster, i.e., the causal chain,
common cause and v-structure. Given three variables X, Y and
Z, if X→ Y→ Z, the causal structure is called
causal chain while if X← Y→ Z it is termed common
cause. The causal relation is named v-structure or immorality if X→ Y← Z.
We also consider two additional structures when N=4: diamond 1
and diamond 2. These are defined as X→ Y← Z,X→ U← Z,
and X→ Y← Z,Y→ U, respectively.
The PC algorithm cannot distinguish between causal chain and common
cause, since these structures are in the same Markov equivalence class.
Then, the PC algorithm will provide the same graph with undirected
edges: X-Y-Z. Conversely, the v-structure, diamond 1 and diamond
2 can be identified by the PC algorithm. In this case, the PC algorithm
will return the causal graph with edges correctly oriented.
To monitor the persistence of the time series, for each cluster, the
autoregressive matrix A is equal to a lower triangular
matrix with all elements (including the diagonal) equal to a constant
a, which describes the persistence of the series. The matrix H
is a function of the selected causal structure. For the v-structure
H=[ 1 0 0; 0 1 0; 1 1 1 ]
which is related to the causal structure ε_t,1→ε_t,3←ε_t,2.
Each variable causes itself, but may also affect other variables.
Finally, for simplicity, we suppose that the data have Gaussian marginals.
In this case, simulation of (<ref>) reduces to simulation
of a VAR(1) together with some linear transformations to ensure that
all the covariates have variance equal to one. The details are given
in Algorithm <ref>.
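A schematic version of this design for a single cluster, assuming Gaussian marginals and standard normal structural shocks, is sketched below; it is a simplification of Algorithm <ref>, and the rescaling step uses the empirical standard deviations.

import numpy as np

def simulate_cluster(n, a, H, burn_in=500, rng=None):
    # Latent VAR(1) for one cluster: Z_t = A Z_{t-1} + H xi_t, with A lower
    # triangular and all entries (including the diagonal) equal to a.
    rng = np.random.default_rng() if rng is None else rng
    N = H.shape[0]
    A = np.tril(np.full((N, N), a))
    Z = np.zeros(N)
    out = np.empty((n, N))
    for t in range(n + burn_in):
        Z = A @ Z + H @ rng.standard_normal(N)
        if t >= burn_in:
            out[t - burn_in] = Z
    return out / out.std(axis=0)   # rescale so each covariate has unit variance

# v-structure: eps_1 -> eps_3 <- eps_2
H_v = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 1.0]])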
§.§ Simulation Results
To study the effect of time series persistent, three values of such
parameter a are considered: 0.25, 0.5 and 0.75. These
values of a produce a wide range of time series dependence. For
example, Figure <ref> shows the autocorrelation function
of a cluster for a v-structure. To analyze the relevance of sparsity
in our approaches, we select K=3,30,50 clusters. We
investigate the finite sample properties of our estimator by considering
a sample size n=1000,5000.
We use Algorithms <ref> and <ref>
find the moral graph. Recall that the moral graph is defined from
the nonzero entries in Θ̂ as in Algorithm <ref>.
We then follow Algorithms <ref> and <ref>
to estimate any remaining parameters. The tuning parameters for Algorithms
<ref> and <ref> are chosen by
CV as described in Section <ref>. This means
only choosing λ. We denote the estimated parameter by λ_CV.
We use 250 simulations to compute the performance of our methodology.
We also test the performance of the PC algorithm when we impose the
restrictions provided by Lasso and CLIME. The elements of Θ̂_11
which are equal to zero represent those edges which we exclude from
the skeleton. These restrictions can be embedded in the PC algorithm
using the appropriate “fixedGaps” command, which guarantees that
will be no edge between nodes j and i if the element
of Θ̂_11 in position (i,j) is equal to zero. We
obtain improved compute time performance of the PC algorithm in this
case. This is particularly relevant in the high dimensional case.
Imposing the restriction has however nontrivial implications for the
PC algorithm, as an edge is deleted without a test so that no variable
is included in the separation set. We refer to Algorithm 1 and 2 in
Kalisch and Bühlmann (2007) for the details. In general, imposing
the restrictions might ensure that we obtain a DAG rather than a CPDAG.
It may also be advisable to use a tuning parameter λ smaller
than the one suggested by CV. This is because the PC algorithm can
only delete edges, but not add them back. To verify if this is the
case, we also report results for λ_CV/2 and λ_CV/4.
We find no general evidence in favour of this claim.
We compare our results with two benchmarks. One does not account for
sparsity and is essentially equivalent to choosing λ=0 in
the estimation. The second does not account for time series dependence,
and carries out the PC algorithm directly on the observed data. We
shall refer to these benchmarks as λ=0 and A=0, respectively.
The case λ=0 should produce sensible results in the low-dimensional
case. On the other hand, given that the simulated data are Gaussian,
the case A=0 should be appropriate when the time series dependence
is low.
All approaches are compared on their performance to estimate the contemporaneous
causal structure. To achieve this, we report the average structural
Hamming distance (SHD) of the estimated causal graph to the true (Acid
and de Campos, 2003, Tsamardinos et al., 2006). The SHD between two
partially directed acyclic graphs counts how many edge types do not
coincide. For instance, estimating a non-edge instead of a directed
edge contributes an error of one to the overall distance. We remark
that the PC algorithm estimates the Markov equivalence class of a
given graph, i.e., the related CPDAG, and some causal structure, as
common cause and causal chain, shares the same class, i.e., the same
CPDAG, (e.g., for the v-structure the Markov class coincides with
the related DAG). Therefore, as the true causal structure in SHD analysis
we consider the (block) equivalence class attained by the PC algorithm,
with a very high significance level, 1-10^-13, to obtain a deterministic
estimate performed on the theoretical correlation matrix of each cluster.
Tables <ref> and <ref> display the
average SHD and standard errors computed over 250 simulations for
all approaches. For the sake of conciseness we only report results
for the v-structure for the persistency parameter a∈{ 0.25,0.75}
and the number of clusters K∈{ 3,50}[The complete results are available upon request.].
Our approach produces estimators with superior finite sample performance,
relatively to the benchmarks, regardless of the considered causal
structures. While not reported here, we note that for both the causal
chain and common cause, the performance of the PC algorithm deteriorates
when we impose the a priori restrictions from the zeros of Θ̂_1,1
even if we undersmooth.
The discrepancy among the contemporaneous causal structure is also
investigated by computing the number of nonzero elements of Θ_11.
Indeed, we recall that nonzero elements of Θ_11 correspond
to possible edges between variables of the corresponding row and column.
We also compute the number of false positive and negative between
the estimated and true Θ_11 of nonzero elements[We say that an element of Θ_11 is a false positive, if it
is estimated as nonzero element while it is zero. Vice versa, it is
a false negative, if it is estimated as zero element while it is different
from zero.]. Tables <ref> and <ref>
summarize the results for the high and low dimensional case, respectively.
We only report the results for the v-structure, as we can draw similar
conclusions for the other causal structures.
False Positives and Negatives for a Causal V-Structure. Expected number
of true plus false positives (TP+FP), false positives (FP) and false
negatives (FN) for the off-diagonal terms of Θ_11 approximated
using 250 Monte Carlo simulations (standard errors in parenthesis).
The contemporaneous causal structure is a v-structure with K=150
variables with K=50 clusters. The number of nonzero
off diagonal elements is 300, where the total number of the off-diagonal
elements is 22350. Results are reported for different values of λ
, where λ_CV is the value obtained using cross-validation
and denoted by λ_CV. The column λ=0 refers to
the benchmark that does not account for sparsity.
Lasso
                      λ_CV                       λ_CV/2                      λ_CV/4                     λ=0
 n     a      TP+FP     FP       FN       TP+FP      FP       FN       TP+FP     FP       FN      TP+FP   FP     FN
1000  0.25   313.44    13.44     0       2197.832  1897.8     0       9711.5   9411.5     0       22350  22050    0
             (0.33)   (0.33)    (0)      (4.24)    (4.24)    (0)      (7.83)   (7.83)    (0)      (0)    (0)     (0)
      0.75   210.344    4.72    94.376    549.104   249.1    0.024    2109.6   1809.6     0       22350  22050    0
             (0.3)    (0.19)   (0.23)     (1.31)    (1.31)  (0.01)    (3.35)   (3.35)    (0)      (0)    (0)     (0)
5000  0.25   302.52     2.52     0       1472.928  1172.9     0       8488.2   8188.2     0       22350  22050    0
             (0.15)   (0.15)    (0)      (3.14)    (3.14)    (0)      (7.81)   (7.81)    (0)      (0)    (0)     (0)
      0.75   200.096    0       99.904    300        0        0        343.08    43.08    0       22350  22050    0
             (0.03)    (0)     (0.03)     (0)       (0)      (0)      (0.59)   (0.59)    (0)      (0)    (0)     (0)

CLIME
                      λ_CV                       λ_CV/2                      λ_CV/4                     λ=0
 n     a      TP+FP     FP       FN       TP+FP      FP       FN       TP+FP     FP       FN      TP+FP   FP     FN
1000  0.25   300.928    0.928    0       1189.848   889.8     0       6638.7   6338.7     0         -      -      -
             (0.09)   (0.09)    (0)      (3.18)    (3.18)    (0)      (6.96)   (6.96)    (0)        -      -      -
      0.75   106.144    0      193.856    187.472   13.248  125.7     1024      807.4    83.424     -      -      -
             (0.21)    (0)     (0.21)     (0.63)   (0.36)   (0.5)     (2.75)   (2.71)   (0.33)      -      -      -
5000  0.25   300.56     0.56     0        760.752   460.7     0       4570.88  4270.8     0         -      -      -
             (0.06)   (0.06)    (0)      (2.45)    (2.45)    (0)      (6.59)   (6.59)    (0)        -      -      -
      0.75   235.48     0.032   64.552    318.344   19.544   1.2       764.216  464.2     0         -      -      -
             (0.5)    (0.02)   (0.5)      (0.4)    (0.39)   (0.1)     (2)      (2)       (0)        -      -      -
False Positives and Negatives for a Causal V-Structure. Expected number
of true plus false positives (TP+FP), false positives (FP) and false
negatives (FN) for the off-diagonal terms of Θ_11 approximated
using 250 Monte Carlo simulations (standard errors in parenthesis).
The contemporaneous causal structure is a v-structure with K=9
variables with K=3 clusters. The number of nonzero
off diagonal elements is 18, where the total number of the off-diagonal
elements is 72. Results are reported for different values of λ
, where λ_CV is the value obtained using cross-validation
and denoted by λ_CV. The column λ=0 refers to
the benchmark that does not account for sparsity.
Lasso
                     λ_CV                     λ_CV/2                    λ_CV/4                   λ=0
 n     a      TP+FP    FP      FN      TP+FP    FP      FN      TP+FP    FP      FN      TP+FP  FP    FN
1000  0.25   22.968   4.968    0      41.584  23.584    0      57.48   39.48     0       72     54     0
             (0.24)  (0.24)   (0)     (0.41)  (0.41)   (0)     (0.34)  (0.34)   (0)      (0)    (0)   (0)
      0.75   12.504   0.016   5.512   18.456   0.456    0      22.168   4.168    0       72     54     0
             (0.06)  (0.01)  (0.06)   (0.06)  (0.06)   (0)     (0.16)  (0.16)   (0)      (0)    (0)   (0)
5000  0.25   18       0        0      20.808   2.808    0      38.04   20.04     0       72     54     0
             (0)     (0)      (0)     (0.16)  (0.16)   (0)     (0.35)  (0.35)   (0)      (0)    (0)   (0)
      0.75   12       0        6      18       0        0      18.12    0.12     0       72     54     0
             (0)     (0)      (0)     (0)     (0)      (0)     (0.03)  (0.03)   (0)      (0)    (0)   (0)

CLIME
                     λ_CV                     λ_CV/2                    λ_CV/4                   λ=0
 n     a      TP+FP    FP      FN      TP+FP    FP      FN      TP+FP    FP      FN      TP+FP  FP    FN
1000  0.25   19.256   1.256    0      30.792  12.792    0      46.072  28.072    0        -      -     -
             (0.11)  (0.11)   (0)     (0.34)  (0.34)   (0)     (0.37)  (0.37)   (0)       -      -     -
      0.75    6.16    0      11.84     9.456   0.016   8.56    14.104   1.152   5.048     -      -     -
             (0.05)  (0)     (0.05)   (0.12)  (0.01)  (0.11)   (0.12)  (0.1)   (0.08)     -      -     -
5000  0.25   18       0        0      19.056   1.056    0      28.872  10.872    0        -      -     -
             (0)     (0)      (0)     (0.1)   (0.1)    (0)     (0.29)  (0.29)   (0)       -      -     -
      0.75   13.096   0.04    4.944   19.088   1.096   0.008   22.408   4.424   0.016     -      -     -
             (0.09)  (0.02)  (0.09)   (0.09)  (0.09)  (0.01)   (0.17)  (0.17)  (0.01)     -      -     -
Finally, in Tables <ref> and <ref>,
we assess the finite sample performance of the estimators of A
and Σ_ε and analyse their asymptotic properties
stated in Theorem <ref>.
We compute the average distance from the true matrices, where the
distance is measured in terms of the operator's norm: the largest
singular value. These statistics are compared only to the case λ=0.
§ REFERENCES
Bhatia, R. (1996) Matrix Analysis. New York: Springer.
Burman, P. and D. Nolan (1992) Data Dependent Estimation of Prediction Functions. Journal of Time Series Analysis 13, 189-207.
Cai, T., W. Liu and X. Luo (2011) A Constrained ℓ_1 Minimization Approach to Sparse Precision Matrix Estimation. Journal of the American Statistical Association 106, 594-607.
Han, F. and W.B. Wu (2019) Probability Inequalities for High Dimensional Time Series Under a Triangular Array Framework. https://arxiv.org/abs/1907.06577v1.
Joe, H. (1997) Multivariate Models and Dependence Models. London: Chapman & Hall.
Kalisch, M. and P. Bühlmann (2007) Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm. Journal of Machine Learning Research 8, 613-636.
Lauritzen, S. L. (1996) Graphical Models. Oxford: Oxford University Press.
Le, T.-M. and P.-S. Zhong (2021) High-Dimensional Precision Matrix Estimation with a Known Graphical Structure. Stat 11, e424.
Liu, H., F. Han, M. Yuan, J. Lafferty and L. Wasserman (2012) High Dimensional Semiparametric Gaussian Copula Graphical Models. The Annals of Statistics 40, 2293-2326.
Loh, P.-L. and M. J. Wainwright (2012) High-Dimensional Regression With Noisy and Missing Data: Provable Guarantees With Nonconvexity. The Annals of Statistics 40, 1637-1664.
Meinshausen, N. and P. Bühlmann (2006) High-Dimensional Graphs and Variable Selection with the Lasso. The Annals of Statistics 34, 1436-1462.
Rüschendorf, L. and V. de Valk (1993) On Regression Representation of Stochastic Processes. Stochastic Processes and their Applications 46, 183-198.
van de Geer, S. A. and P. Bühlmann (2009) On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics 3, 1360-1392.
van der Vaart, A. and J.A. Wellner (2000) Weak Convergence and Empirical Process Theory. New York: Springer.
entry_id: http://arxiv.org/abs/2307.01802v2
published: 20230704161203
title: Open Quantum System Dynamics from Infinite Tensor Network Contraction
authors: Valentin Link, Hong-Hao Tu, Walter T. Strunz
primary_category: quant-ph
categories: quant-ph
Institut für Theoretische Physik, Technische Universität Dresden, D-01062, Dresden, Germany
[email protected]
Recently developed methods to compute dynamics of strongly coupled non-Markovian open systems are based on a representation of the so-called process tensor in terms of a tensor network, which can be contracted to matrix product state (MPS) form. We show that for Gaussian environments the stationarity of the bath response can be exploited in order to construct this MPS using infinite MPS evolution methods. The result structurally resembles open system evolution with auxiliary degrees of freedom, as in hierarchical or pseudomode methods. Here, however, these degrees of freedom are generated automatically by the MPS evolution algorithm. Furthermore, our algorithm for contracting the process tensor network leads to significant computational speed-ups for strong coupling problems over existing proposals.
Open Quantum System Dynamics from Infinite Tensor Network Contraction
Walter T. Strunz
August 1, 2023
=====================================================================
Introduction. Computing the dynamics of open quantum systems that are strongly coupled to their environment in general poses a challenging problem that requires advanced numerical tools for simulation on a classical computer <cit.>. Over the past decades, a wide array of methods has been developed, each with its unique strengths and limitations <cit.>.
Many of the most sophisticated approaches realize the open system evolution by substituting the original environment with physical or non-physical auxiliary degrees of freedom.
These auxiliary degrees of freedom must be carefully tailored to accurately reproduce the dynamics of the original bath. Prominent methods in this category include the well established HEOM (hierarchical equations of motion) <cit.> and pseudomode approaches <cit.>, among others <cit.>.
However, identifying suitable auxiliary environments is generally a complex task that depends nontrivially on the specific characteristics of the bath structure <cit.>.
A different strategy to treat open system dynamics avoids this issue by working directly with the exact form of the so-called process tensor, an object that encapsulates all dynamical properties of the reduced dynamics, including unequal-time correlation functions <cit.>. This tensor has a representation as a two-dimensional tensor network <cit.>, which can be contracted to a matrix product state (MPS) to allow for efficient computations. The PT-TEMPO method (Process Tensor Time Evolving Matrix Product Operators) utilizes MPS compression during the contraction to reduce the bond dimensions, such that larger evolution times and strong coupling regimes can be reached <cit.>.
In this paper we derive an alternative representation of the process tensor in terms of an infinite tensor network.
With this result, infinite time evolving block decimation (iTEBD) can be used for network contraction, leading to a fast algorithm that scales linearly with the bath memory time. The resulting MPS representation of the process tensor has the same structure as for methods using auxiliary degrees of freedom, bridging a gap between the two different approaches. This structure delivers the crucial advantages that, using only a single time-local propagator, stationary states can be determined directly and arbitrary evolution times can be reached with low memory requirements. In contrast to established methods such as HEOM, the auxiliary degrees of freedom are generated in a systematic and automated way during the network contraction.
Open system evolution
As a model for open system dynamics we consider the standard Hamiltonian
H=H_sys(t)⊗𝕀_env+S⊗ B(t),
where H_sys and S are hermitian operators in the Hilbert space of the system and B(t) is an operator that describes the collective degrees of freedom of a Gaussian environment
consisting of a continuum of bosonic modes [b(ω),b^†(ω')]=δ(ω-ω') with coupling strengths g(ω)
B(t)=∫ dω g(ω) e^-iω tb(ω)+h.c. .
For a Gaussian environment the influence of the bath on the system is completely determined by the so-called bath correlation function
α(t,s)= ρ_env(0)B(t)B(s),
where ρ_env(0) is a Gaussian environment initial state [We assume without loss of generality that ρ_env(0)B(t)=0.].
The bath is said to be stationary if the bath correlation function depends only on the time difference α(t,s)≡α(t-s). While notable exceptions exist <cit.>, this is the standard scenario in open system dynamics, in particular all thermal environments are stationary. In order to arrive at a description of the dynamics in terms of the process tensor, one considers a Trotter splitting of the full unitary time evolution operator <cit.>
U(t,t+Δ)= U_sys(t,t+Δ)U_int(t,t+Δ) +𝒪(Δ^2),
where U_sys(t,t+Δ) is the unitary evolution operator generated by H_sys for a time step Δ, and U_int(t,t+Δ) is the evolution operator generated by the interaction term in (<ref>) [Second-order Trotter splitting can also be used here.]. With this splitting the evolution of the reduced system density operator can be decomposed into a part that describes the system evolution, and a part that describes the influence of the bath, the so-called influence functional <cit.>. Together, these two terms form the process tensor from which all dynamical properties of the system can be extracted <cit.> (some authors call the influence functional itself the process tensor <cit.>). To give a simple example, we consider the computation of the system density matrix after N time steps Δ. How more general observables can be computed in the same framework is described in Refs. <cit.>. We use a Liouville-space (density matrix space) notation where a single index μ≡ (μ_l,μ_r) labels a (left and right) pair of eigenstates |μ_l⟩, |μ_r⟩ of the coupling operator S. Thus, if the dimension of the system Hilbert space is d, μ runs from 1 to d^2. The time evolution of the system state is given as <cit.>
ρ^μ_N_sys(NΔ)=ℱ_N^μ_1...μ_N(∏_k=1^N𝒰_sys^μ_k,μ_k-1(k))ρ_sys^μ_0(0)
with summation implied if indices appear twice. In detail, the objects in this equation are
ρ_sys^μ(t) = ⟨μ_l|ρ_sys(t)|μ_r⟩,
𝒰_sys^μ,ν(k)=⟨μ_l|U_sys((k-1)Δ,kΔ)|ν_l⟩
×⟨ν_r|U_sys^†((k-1)Δ,kΔ)|μ_r⟩.
ℱ_N is the time-discrete influence functional <cit.>, a tensor with N indices. This object fully encapsulates the influence of the system-bath interaction onto the system dynamics and can be used to express the full process tensor. For a stationary Gaussian environment, it can be computed analytically and has the well known structure <cit.>
ℱ_N^μ_1...μ_N=∏_i=1^N∏_j=1^i I_(i-j)(μ_i,μ_j)
I_k(μ,ν)= exp((S_μ_l-S_μ_r)(η_k S_ν_l-η_k^*S_ν_r)),
where the discretized bath correlation function is given by
η_k= ∫_kΔ^(k+1)Δ dt∫_0^Δ ds α(t-s), k>0
∫_0^Δ dt∫_0^t ds α(t-s), k=0
and S_i denotes the eigenvalue of the coupling operator S to the eigenstate with label i. Note that with this exact expression one can in general not directly compute the time evolution according to Eq. (<ref>) because this involves a sum over exponentially many terms (exponential in N).
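For illustration, the coefficients η_k can be evaluated with a simple midpoint rule as in the sketch below (the routine and its accuracy parameter are ours; any standard quadrature works equally well, and the bath correlation function alpha is assumed to accept numpy arrays).

import numpy as np

def eta_coefficients(alpha, dt, n_max, sub=200):
    # Discretised bath correlation: eta_k for k = 0,...,n_max, where
    # eta_k = int_{k dt}^{(k+1) dt} dt' int_0^{dt} ds alpha(t'-s)   (k > 0)
    # eta_0 = int_0^{dt} dt' int_0^{t'} ds alpha(t'-s).
    h = dt / sub
    grid = (np.arange(sub) + 0.5) * h
    eta = np.empty(n_max + 1, dtype=complex)
    tt, ss = np.meshgrid(grid, grid, indexing="ij")
    mask = ss <= tt                        # triangular domain for k = 0
    eta[0] = np.sum(alpha(tt[mask] - ss[mask])) * h * h
    for k in range(1, n_max + 1):          # rectangular domains for k > 0
        tk = k * dt + grid
        eta[k] = np.sum(alpha(tk[:, None] - grid[None, :])) * h * h
    return eta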
When using auxiliary degrees of freedom to effectively describe the open system evolution, the time-discrete influence functional takes the MPS form
ℱ_N^μ_1...μ_N=v⃗_l^T f^μ_1f^μ_2⋯ f^μ_Nv⃗_r
with identical tensors f and two boundary vectors v_l/r. For instance, in the hierarchical equations of motion (HEOM) approach, f is the propagator of the hierarchy for a time step Δ and the bond dimension is the number of auxiliary density operators <cit.>. With this MPS form the open system evolution Eq. (<ref>) can be performed with iterative tensor contractions. Our goal is to construct a representation of the type (<ref>) in a way that is systematic and automated, starting from an established network representation of ℱ_N.
Tensor Network Representation of the Influence Functional
It has been shown in Refs. <cit.> that the time-discrete influence functional (<ref>) can be represented as a two-dimensional tensor network. For this one can define a set of tensors
b^μν_ij(k)=δ_ijδ_μν I_k(μ,j), k>0
δ_ijδ_μνδ_jμ I_0(μ,j), k=0
Diagrammatically, b(k) is a four-leg tensor labelled by k, with vertical legs carrying the Liouville indices μ (bottom) and ν (top) and horizontal legs carrying the bond indices j (left) and i (right).
Then the influence functional is given by the network depicted in Fig. <ref>. In the PT-TEMPO scheme <cit.>, this network is iteratively contracted to a matrix product state by multiplying adjacent rows followed by SVD compression which is required to keep the bond dimension manageable. This compression step is the demanding operation in the algorithm.
To compute a process tensor for N time steps, 𝒪(N^2) singular value decompositions are required to contract the network.
Usually one assumes a finite memory time of the bath such that all b(k) tensors for k> N_c can be neglected [This is no restriction because the system evolution for N time steps only depends on α(t) for t<NΔ. Thus, if the correlation function does not decay, we can add an artificial smooth cutoff to the bath correlation for times after t=NΔ.].
Then, using advanced algorithms, the scaling of the network contraction can be improved to 𝒪(N_clog N_c)
<cit.>.
Due to finite-size boundary effects, these contraction methods based on finite MPS evolution do not deliver a periodic representation with a single time step propagator as in Eq. (<ref>). In the following we derive a new method that conserves this form and requires exactly N_c singular value decompositions.
For the new scheme that we propose the network has to be modified slightly. All index dimensions of the tensors b(k) are d^2 where d is the size of the system Hilbert space. We extend the index dimension by one, introducing a zero dimension via I_k(0,i)=I_k(i,0)≡ 1, and keeping the definition (<ref>) as it is. This does not have a significant effect on the complexity of the network. If one index of an extended b(k) tensor is zero, then the tensor reduces to a trivial product of delta functions. As demonstrated in the supplementary material <cit.>, this property allows us to obtain the influence functional for M<N time steps from the influence functional for N time steps by inserting zeros at the boundary
ℱ_M^μ_1...μ_M=ℱ_N^0...0,μ_1...μ_M=ℱ_N^μ_1...μ_M,0...0.
We can even factor the influence functional into two by piercing the train of indices with at least N_c zeros
ℱ^μ_1...μ_M,0...0,ν_1...ν_K_N=ℱ_M^μ_1...μ_Mℱ_K^ν_1...ν_K.
This suggests the following strategy to compute ℱ^i_1...i_N_N. We compute an influence functional with periodic boundary condition in the infinite time-step limit in the form
ℱ^...μνδ..._∞=Tr[ ⋯ f^μ f^ν f^δ⋯]
where f^μ are χ×χ matrices (bond dimension χ, μ=0,1,...,d^2). We can then obtain the desired influence functional for N steps via
ℱ_N^μ_1...μ_N=Tr[(f^0)^γ f^μ_1f^μ_2⋯ f^μ_N]
where γ≥ N_c. In practice one can set γ=∞ such that (f^0)^γ=v⃗_r∘v⃗_l with v⃗_l/r the leading left and right eigenvectors of f^0 (eigenvalue one). We have indeed recovered a representation of the type (<ref>).
Therefore, one only needs to compute and store the single tensor f instead of 𝒪(N_c) such tensors as in the finite contraction schemes <cit.>. The stationary state can also be determined efficiently by computing the leading eigenvector of the full propagator Φ_(μ,i)^(ν,j)= f^μ_ij𝒰_sys^μ,ν and contracting the non-physical dimensions with v⃗_r.
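For concreteness, the following minimal sketch (hypothetical shapes, not the actual implementation) evaluates the compressed influence functional from the single tensor f: the f^μ are stored as a list of χ×χ matrices, the boundary vectors are the leading left/right eigenvectors of f^0, and a path μ_1,…,μ_N is contracted from left to right.

import numpy as np

def boundary_vectors(f0):
    # leading right/left eigenvectors of f^0 (eigenvalue close to one)
    w, vr = np.linalg.eig(f0)
    wl, vl = np.linalg.eig(f0.T)
    v_r = vr[:, np.argmax(np.abs(w))]
    v_l = vl[:, np.argmax(np.abs(wl))]
    return v_r / (v_l @ v_r), v_l          # normalise so that v_l . v_r = 1

def influence_functional(f, path):
    # F_N^{mu_1...mu_N} = v_l . f[mu_1] ... f[mu_N] . v_r   (the gamma -> infinity limit)
    v_r, v_l = boundary_vectors(f[0])
    vec = v_l
    for mu in path:
        vec = vec @ f[mu]
    return vec @ v_r

The stationary state would be obtained analogously by iterating the full propagator Φ and contracting the non-physical dimension with v_r.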
iTEBD
To find the tensor f of the infinite network we propose a contraction in an anti-diagonal direction starting from k=N_c, as shown in Fig. <ref>. Then the network already has a structure suitable for time evolving block decimation (TEBD). The gates b(k) can formally be seen as nearest neighbor coupling alternating between left and right sites. Thus, it is straightforward to apply infinite TEBD algorithms <cit.> to the network in Fig. <ref> (right panel) with evolution from top to bottom. Because the gates b(k) are only weakly entangling for large k, the bond dimension increases significantly only for the last few evolution steps, making this an excellent contraction scheme.
We find that the simple algorithm from Ref. <cit.> already performs very well, resulting in similar bond dimension for a given accuracy as the contraction of the finite network in PT-TEMPO.
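The elementary operation of such a sweep is the application of a two-site gate followed by an SVD truncation. The simplified sketch below shows only this step and deliberately ignores the proper canonical (Vidal) form with separate singular-value matrices; chi_max and svd_tol are assumed truncation parameters.

import numpy as np

def apply_gate_and_truncate(A, B, gate, chi_max=128, svd_tol=1e-10):
    # A: (chiL, pA, chi), B: (chi, pB, chiR), gate: (pA_out, pB_out, pA_in, pB_in)
    chiL, pA, _ = A.shape
    _, pB, chiR = B.shape
    theta = np.tensordot(A, B, axes=(2, 0))                   # (chiL, pA, pB, chiR)
    theta = np.tensordot(gate, theta, axes=([2, 3], [1, 2]))  # (pA_out, pB_out, chiL, chiR)
    theta = theta.transpose(2, 0, 1, 3).reshape(chiL * pA, pB * chiR)
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = max(1, min(chi_max, int(np.sum(S > svd_tol * S[0]))))
    A_new = U[:, :keep].reshape(chiL, pA, keep)
    B_new = (np.diag(S[:keep]) @ Vh[:keep, :]).reshape(keep, pB, chiR)
    return A_new, B_new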
As a benchmark we consider the two-spin boson model. The model consists of noninteracting spins A and B that are coupled to the same bath via
S=1/2(σ_z^A+ σ_z^B)
We choose a sub-ohmic bath with exponential cutoff at zero temperature such that the bath correlation function reads <cit.>
α(t) = α ω_c^2 Γ(s + 1) / [ 2 (1 + i ω_c t)^(s+1) ] .
In this expression, α is a dimensionless coupling strength, ω_c is the cutoff frequency, and s<1 is the exponent of the low frequency behavior ∝ω^s of the spectral density.
This function decays only algebraically for large times, possibly making it challenging for simulations due to a resulting long memory time. In Fig. <ref> we show the computation time, the bond dimension χ, and the accuracy of the method for a challenging parameter regime. As a comparison we computed the same problem with the finite contraction scheme from Ref. <cit.> (PT-TEMPO) for N=N_c=300. In order to assess the accuracy we consider the average absolute distance of the compressed influence functional to the exact value (from Eq. (<ref>)) for a set of 1000 random paths (collection of indices μ_1,...,μ_N). Our new approach leads to a comparable bond dimension and accuracy of the final compressed influence functional. However, iTEBD requires only 𝒪(N_c) instead of 𝒪(N_c^2) matrix operations, which gives a large speedup in the computation time.
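The accuracy measure just described amounts to a few lines; in the sketch below F_exact and F_mps are hypothetical callables evaluating Eq. (<ref>) directly and through the compressed representation, and index 0 is reserved for the padding index of the extended network.

import numpy as np

def mean_abs_error(F_exact, F_mps, N, d2, n_paths=1000, seed=0):
    rng = np.random.default_rng(seed)
    errs = [abs(F_exact(p) - F_mps(p))
            for p in (rng.integers(1, d2 + 1, size=N) for _ in range(n_paths))]
    return np.mean(errs)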
As a practical demonstration of the capabilities and limitations of the method we consider dynamics in the standard spin boson model for strong coupling. For this model excellent results from specialized methods are available <cit.>. We take the Hamiltonian to be
H(t)=Ωσ_x⊗𝕀_env+σ_z⊗ B(t)
and consider a sub-ohmic spectral density with s=1/2 and ω_c=20Ω. The results are depicted in Fig. <ref>. Since the iTEBD approach is economical and fast, we can reach strong coupling regimes where the bond dimension χ becomes large. As the coupling strength α is increased, the system changes from a symmetric phase where asymptotically ⟨σ_z⟩ = 0 to a symmetry broken phase ⟨σ_z⟩ ≠ 0. We are able to faithfully capture this transition with our method. It can be anticipated from Fig. <ref> that these calculations would require very long computation times within the finite-size contraction approaches. However, due to a quite substantial bond dimension increase, the ultra-strong coupling regime beyond α=0.2 is still practically inaccessible, even though it can be captured with specialized TD-DMRG, ML-MCTDH or HEOM approaches <cit.>. One issue there is that the Trotter time step Δ must be decreased in order to achieve convergence, which further increases the memory time (on the time scale of Δ). Still, the calculation for α=0.15, well beyond the phase transition, takes less than five minutes on consumer hardware and does not require any special fitting procedure; only the bath correlation function and the coupling operator are provided as input.
Conclusions
We have introduced a new method to automatically generate auxiliary environments for simulations of open system dynamics. These degrees of freedom realize the exact bath response to a controlled level of accuracy and with a time-local propagator. Our approach is based on an established tensor network representation of the process tensor <cit.>. This network can be modified so that the contraction to MPS form can be performed using infinite MPS evolution methods. We used iTEBD which is suggested from the network structure and achieve a linear scaling of the computation effort with the bath memory time (i.e. linear in N_c). The resulting influence functional is represented in terms of a single tensor f, with insignificant memory requirements in comparison with the finite network contraction algorithms, which require to store at least N_c tensors [If N<N_c only N tensors are required. In general, open system evolution can always be extended periodically if the dynamical map is known on the full bath memory time <cit.>.]. This tensor encodes both propagator and initial state for a set of auxiliary degrees of freedom.
Since such a representation is time-local, the open system evolution can be trivially extended to arbitrary long times and stationary states can be determined efficiently using power methods. The proposed algorithm constitutes a powerful method for open quantum systems that is simple to implement and that can be used readily for any stationary Gaussian environment. Still, we believe there is substantial potential for further optimization. For instance, using advanced infinite MPS evolution schemes <cit.> could lead to a better accuracy at a given bond dimension, which becomes relevant for large system sizes and ultra strong coupling.
Acknowledgements V.L. and W.T.S. gratefully acknowledge discussions with Richard Hartmann concerning the numerical examples. We also thank Jonathan Keeling for valuable comments on a previous version of the manuscript. H.-H.T. is supported by the Deutsche Forschungsgemeinschaft (DFG) through project A06 of SFB 1143 (project No. 247310070).
§ SUPPLEMENTARY MATERIAL
§.§ A. MPS representation of the influence functional from HEOM
We show how Eq. (<ref>) can be obtained from open system dynamics with auxiliary environments, considering HEOM as an example <cit.>. In the standard HEOM scheme, the bath correlation function is represented approximately as a sum over few exponentials <cit.>
α(t) ≈ ∑_{j=1}^M G_j e^{-Re(W_j) |t| - i Im(W_j) t}
with complex parameters G_j,W_j∈ℂ. One then defines an infinite set of auxiliary density operators labeled by a pair of multiindices n,m∈ℕ_0^M. These auxiliary states satisfy the hierarchical equation of motion
∂_tρ^(n,m)= -i[H_sys, ρ^(n,m)] - (W·n+W^*·m)ρ^(n,m)+∑_j(G_jn_jSρ^(n-e_j,m) + G_j^*m_jρ^(n,m-e_j)S)
+∑_j[ρ^(n+e_j,m), S] + [S, ρ^(n,m+e_j)].
We denote e_i the unit vector in direction i.
Choosing as an initial state ρ^(0,0)(0)=ρ(0) and all other states zero, one can obtain the reduced system evolution from
ρ(t)=ρ^(0,0)(t). In practice, the hierarchy is cut at sufficiently high index values such that a finite system can be evolved numerically. We can reformulate the hierarchy by embedding it as an operator in an extended Hilbert space, ρ^(n,m) ≡ ⟨n|R|m⟩. Defining raising and lowering operators A_i^±|n⟩=|n±e_i⟩, and counting operators N_i|n⟩=n_i|n⟩, the hierarchy is mapped onto
∂_tR = -i[H_sys, R] - (W·NR+RW^*·N)+∑_j(G_jA^+_jN_jSR+ G_j^*RSN_j A^-_j)+∑_j([A^-_jR, S] + [S, RA^+_j])
≡ℒ_sys R + ℒ_int R .
Because the evolution is linear we can make a Trotter splitting with time step Δ between ℒ_sys = -i[H_sys,·] and ℒ_int. ℒ_int commutes with the coupling operator S, so we consider the action of this generator on a product of S-eigenstates |μ_l⟩⟨μ_r| labeled with the index μ=(μ_l,μ_r):
ℒ_int(|μ_l⟩⟨μ_r| ⊗ x) = |μ_l⟩⟨μ_r| ⊗ ℒ_int^μ x.
The generator of the hierarchy dynamics conditioned on the system basis state is given as
ℒ_int^μ x= - (W·Nx+xW^*·N)+∑_j(G_jA^+_jN_jS_μ_l x+ G_j^*xS_μ_r N_j A^-_j)+∑_j(A^-_jx (S_μ_r-S_μ_l) + (S_μ_l-S_μ_r)xA^+_j) .
The tensor f^μ is just the propagator generated by ℒ_int^μ for a time step Δ:
f^μ = exp(Δ ℒ_int^μ).
The bond dimension of f is exactly the number of auxiliary density operators that are taken into account in the hierarchy. We obtain an influence functional in the form (<ref>) with the boundary vectors v_l = v_r = |0⟩⟨0| (the system state is given by ρ = ⟨0|R|0⟩).
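As an illustration of how f^μ can be generated, the following sketch (our own, restricted to a single exponential M = 1, a hierarchy cut at depth K, and a coupling operator S that is diagonal with eigenvalues S_l, S_r on the bra/ket system states) builds ℒ_int^μ as a superoperator on the auxiliary space and exponentiates it; vectorisation is column-major, vec(x) = x.flatten(order='F').

import numpy as np
from scipy.linalg import expm

def f_mu(G, W, S_l, S_r, K, dt):
    dim = K + 1
    I = np.eye(dim)
    N = np.diag(np.arange(dim, dtype=float))      # counting operator
    Ap = np.diag(np.ones(dim - 1), -1)            # A^+ |n> = |n+1>, truncated at depth K
    Am = np.diag(np.ones(dim - 1), +1)            # A^- |n> = |n-1>
    left = lambda M: np.kron(I, M)                # vec(M x) = (I kron M) vec(x)
    right = lambda M: np.kron(M.T, I)             # vec(x M) = (M^T kron I) vec(x)
    L = (-W * left(N) - np.conj(W) * right(N)
         + G * S_l * left(Ap @ N) + np.conj(G) * S_r * right(N @ Am)
         + (S_r - S_l) * left(Am) + (S_l - S_r) * right(Ap))
    return expm(dt * L)                           # f^mu acting on vec(x), x of shape (K+1, K+1)

The boundary vector |0⟩⟨0| then corresponds to the vectorised auxiliary operator with a single nonzero entry at position (0, 0).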
§.§ B. Reduction of the influence functional
We provide a proof for the statements (<ref>) and (<ref>). These allow us to recover the influence functional for finite times from the infinite version. From our definition the extended tensors b(k) have the property that, if one index is zero, they reduce to a product of delta functions. In particular
b^0ν_ij(k) = δ_ij δ_0ν for k > 0, and b^0ν_ij(0) = δ_ij δ_0ν δ_j0; similarly, b^μν_0j(k) = δ_0j δ_μν for k > 0, and b^μν_0j(0) = δ_0j δ_μν δ_jμ.
We write this as
b^μν_0j(k>0) = [inner sep=1mm, x=.7cm,y=.7cm]
(b) at (0, 1) ν;
[-] (0, 0) – (b);
(a) at (0, -1) μ;
[-] (0, 0) – (a);
(j) at (0-1, 0) j;
[-] (0-.1, 0) – (j);
(i) at (0+1, 0) 0;
[-] (.1, 0) – (i);
b^0ν_ij(k>0) = [inner sep=1mm, x=.7cm,y=.7cm]
(b) at (0, 1) ν;
[-] (0, 0) – (b);
(a) at (0, -1) 0;
[-] (0, 0) – (a);
(j) at (0-1, 0) j;
[-] (0-.1, 0) – (j);
(i) at (0+1, 0) i;
[-] (.1, 0) – (i);
b^0ν_ij(0) = [inner sep=1mm, x=.7cm,y=.7cm]
(b) at (0, 1) ν;
[-] (0, 0) – (b);
(a) at (0, -1) 0;
[-] (0, 0) – (a);
(j) at (0-1, 0) j;
[-] (0, 0) – (j);
(i) at (0+1, 0) i;
[-] (0, 0) – (i);
The proof of statements (<ref>) and (<ref>) is given pictorially in Figs. <ref> and <ref>, respectively.
|
http://arxiv.org/abs/2307.02703v1
|
20230706003730
|
A Logical Way to Negotiate Services
|
[
"Glenn Bruns",
"Mauricio Cortes"
] |
cs.LO
|
[
"cs.LO",
"cs.NI",
"C.2.1; D.2.11"
] |
Glenn Bruns^1 and Mauricio Cortes^2
^1 School of Computing and Design, California State University, Monterey Bay, 100 Campus Center, Seaside, 93955, California, USA
^2 Joyent, 645 Clyde Avenue, Suite 502, Mountain View, 94043, California, USA
Service providers commonly provide only a fixed catalog of
services to their clients. Both clients and service providers
can benefit from service negotiation, in which a client makes
a query for a specific service, and the provider counters with
an offer. The query could include parameters that control the
performance, reliability, and function of the service.
However, a problem with service negotiation is that it can
be expensive for a service provider to support.
In this paper we define a formal negotiation policy language that
enables automated service negotiation.
In the model supported by the language, service providers
can recursively obtain the services they need from sub-providers.
The queries made by clients, and the offers
returned from service providers, are expressed in quantifier-free
first-order logic. Quantifier elimination is used to transform
constraints between providers and sub-providers.
The pattern of interaction between clients
and service providers is defined in process algebra.
We show a correctness
property of our language: if sub-providers respond
positively to queries, then so does the provider itself.
A Logical Way to Negotiate Services
August 1, 2023
===================================
§ INTRODUCTION
Service providers – such as internet service providers, wireless
service providers, storage service providers, and providers of
specialized online services – typically provide a static
catalog of services. As a simple example, an internet service
provider might offer two options: 100 Mbps down and 10 Mbps up,
or 50 Mbps down and 2 Mbps up. While the simplicity of this
“service catalog" approach is helpful, it lacks flexibility.
For example, one customer might need a download speed of at least 300 Mbps
but an upload speed of only 10 Mbps, while another customer might be happy
with a download speed of 50 Mbps and an upload speed of 20 Mbps. The lack of flexibility leads to lost opportunities for service providers and low
perceived value for customers.
In a more flexible approach, a client could negotiate with a
service provider to obtain the service that fits her needs.
However, this kind of flexibility is commonly only available
for important clients and involves expensive manual work by
the service provider.
A solution to the problem would be a framework for service negotiation
that provides flexibility but reduces costs through automation. Here we define
such a framework. The main features of our framework are as
follows:
* Services have a hierarchical structure. A service provider
may depend on “sub-providers". For example, the provider of
a service to edit and compile LaTeX documents might pay sub-providers
for storage, computation, and payment services. Thus, a service
provider can also play the part of client with respect to other
service providers.
* A service provider receives a query from a client that
defines constraints on the service needed by the client.
For example, a query to an internet service provider might specify
a minimum download speed of 100 Mbps, a minimum upload speed of
20 Mbps, and a maximum price of 75 USD per month.
The service provider responds to the client with an offer that
defines the service that can be provided. Both queries and offers
are defined as quantifier-free formulas of first-order logic.
* If a service provider depends on sub-providers, then responding
to a query will require sending sub-queries to the sub-providers, and
then combining the received sub-offers to create a top-level offer.
Also, a sub-offer from one sub-provider can affect sub-queries sent to other
sub-providers.
For example, a document processing provider may need a certain
amount of storage, which can be provided by two sub-providers.
If the first sub-provider can provide most of the needed storage,
the query to the second provider can request less storage.
* The negotiation policy of a service provider is captured in
a formal policy that defines the service parameters, constraints
on the services that can be offered, and relationship between
services provided by sub-providers and the top-level service that
is provided.
* If service providers define their negotiation policies, then
negotiation can be automated. The pattern of interaction between
clients and providers is defined in process algebra.
In a simple running example used throughout this paper, a storage
provider is a broker that obtains storage from two other
storage providers. The negotiation policy of the top-level
provider might specify that the parameters of the service are
the amount of storage (in GBytes) and the yearly cost of service
(in USD), that the storage offered is the sum of the storage offered
by the sub-providers, and that the price offered includes a 10%
markup over the cost of the storage obtained from the sub-providers.
Suppose the storage provider receives a query for 10 GBytes of
storage at a price of 5 USD/year. The storage provider then sends
a query to sub-provider 1 for 10 GBytes of storage at a price of
4.55 USD (a 10% markup on 4.55 USD gives 5.00 USD). Suppose an
offer is received for 5 GBytes of storage
at a price of 4.20 USD. The storage provider would then send
a query to sub-provider 2 for 5 GBytes of storage at a price of
0.35 USD. At this price the top-level provider can still obtain
a 10% markup on the combined storage from the two sub-providers.
The top-level provider then makes an offer of 10 GBytes of storage
at a price of 5 USD/year.
In this example, negotiation of the storage provider with the
two sub-providers takes place sequentially. Later in the paper
we describe both sequential and parallel patterns of interaction
with sub-providers.
The main contribution of our work is the formal definition of a
language for service negotiation. We define the syntax and semantics
of the language, and show an important formal property of negotiation
behavior: if sub-negotiators respond positively to queries, then so
does the top-level negotiator.
We also define several extensions to the policy language, including
support for making parallel queries to sub-providers.
In the following section of the paper we briefly review our
hierarchical service negotiation model, which was presented
in <cit.>. In Sections <ref> and <ref>,
we define the syntax and semantics of the policy language.
In Section <ref>, we define and
prove a correctness condition of the language. In
Section <ref> several extensions to the
language are discussed, and an implementation of the
language is described in Section <ref>.
The last two sections of the paper describe related work,
and offer concluding remarks.
§ HIERARCHICAL SERVICE NEGOTIATION
In hierarchical service negotiation, the negotiation process has this form:
* The client makes a request to a negotiation server to
initiate negotiation.
* The negotiation server acknowledges, returning the
terms of negotiation, which identifies service parameters
and the constraints on them.
* The client sends the negotiation server a query, which is
a condition over the service parameters.
* The negotiation server makes queries, and obtains
offers, from one or more “sub”-negotiation servers.
* The negotiation server sends an offer to the client.
* This query/offer process is repeated until either the
client accepts an offer, in which case the negotiation
server returns an invoice, or one of the two parties
terminates negotiation.
The details of this process are explained in <cit.>,
which includes a description of a negotiation protocol. Not
every negotiation server will contact sub-servers – some
servers must be “base cases” that make offers without
contacting sub-servers. However, our focus here is in the
“inductive case”.
In this section we define how the terms of negotiation are
defined, and the kinds of formulas used for queries and
offers. Please note that in what follows negotiation
servers are sometimes referred to as “negotiators”,
and sometimes simply as “servers”.
§.§ Configuration Types
A configuration type (“config. type” for short)
defines the negotiable parameters of a service,
their types, and any constraints over the parameters. The
syntax of config. types is defined in
Fig. <ref>. For example, the config. type
of a simple storage service might be
{capacity decimal, price decimal; capacity ≥ 0 ∧ price ≥ 0}.
Here every parameter has basic type “decimal”; later in
the paper we discuss support for additional types.
We write ct for the set of parameter names
appearing in configuration type ct, and for
the set of all configuration types.
§.§ Linear Constraints
To express queries and offers on a service, as well
as constraints in config. types, we use a
first-order logic in which the atomic predicates are
conditions on parameters of config. types. The
abstract syntax of the logic we used is defined in
Fig. <ref>. For example, a formula describing
a condition on a storage service is capacity ≥ 10 ∧ price ≤ 5, where capacity is given in
GBytes and price is a yearly price in dollars.
We write L for the set of formulas generated by this syntax. As usual, other logical operators (such as disjunction, implication, and the universal quantifier ∀) are derived. The notion of free variables of a formula is assumed to be understood. If all free variables appearing in a formula are elements of ct for some config. type ct, then we say ϕ
is a formula over ct. The set of all formulas over ct is written L(ct). For example, letting ct be the example config. type defined above for a storage service, a formula over ct is capacity = 2 ∧ price ≤ 5.
A query over a service with config. type ct is then
defined to be a quantifier-free formula of ct.
An offer is also a quantifier-free formula of
ct. When a client accepts an offer, it
provides a quantifier-free formula of ct that
logically implies the offer, and that specifies values
for all parameters of ct. For example, in the storage
example, accepting an offer capacity = 2 ∧ price ≥ 5 might be the formula capacity = 2 ∧ price = 5.
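Checks of this kind are easy to express with an off-the-shelf solver; the snippet below uses the z3-solver Python package (not part of this work) to verify that the acceptance implies the offer and that a query and an offer are jointly satisfiable.

from z3 import Reals, And, Not, Implies, Solver, sat, unsat

capacity, price = Reals("capacity price")
offer      = And(capacity == 2, price >= 5)
acceptance = And(capacity == 2, price == 5)
query      = And(capacity >= 2, price <= 6)      # an illustrative client query

def is_valid(formula):
    s = Solver(); s.add(Not(formula))
    return s.check() == unsat

def intersects(f, g):
    s = Solver(); s.add(And(f, g))
    return s.check() == sat

print(is_valid(Implies(acceptance, offer)))   # True: the acceptance implies the offer
print(intersects(query, offer))               # True: query and offer are jointly satisfiable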
The logic we use defines linear constraints – in other
words, systems of linear inequalities. When this logic is
used over the domain of the naturals, it is referred to as
Presburger Arithmetic<cit.>. The key property we
require of the logic is that it is possible to compute, from
an arbitrary formula, an equivalent, quantifier-free
formula. Algorithms for quantifier elimination exist for
this logic when it is interpreted over the naturals, the
integers, the reals, or the rationals. For example,
Fourier-Motzkin elimination <cit.> can
be used when the logic is interpreted over the reals or the
rationals. In what follows we write QF(ϕ) for a
formula that is logically equivalent to PA formula ϕ, but
quantifier-free.
Intuitively, the use of existential quantification can be
viewed as a way to project a formula with free variables
onto some of the variables it contains. For example, let
ϕ be x < 5 ∧ x > y ∧ y > 0. To see what ϕ says about x, we “quantify away” the y in ϕ, to get ∃ y.(x < 5 ∧ x > y ∧ y > 0). Applying quantifier elimination, we get QE(∃ y. ϕ) = x < 5 ∧ x > 0. Intuitively, this formula expresses how ϕ
constrains x.
We interpret logical formulas in the usual way. Briefly, an
interpretationI consists of a non-empty domain of
objects, plus interpretations of constant, function, and
predicate symbols. The logic we use has
{=,≠,≤,≥,<,>} as its predicate symbols, and +
and ∗ as its function symbols. A valuationv
for an interpretation maps variable symbols to elements of
the interpretation's domain. In this paper we interpret
formulas over the rationals, and adopt the usual arithmetic
interpretation of the function and predicate symbols.
We write I,v ⊨ ϕ if a formula ϕ of L, possibly containing free variables, is satisfied by interpretation I under valuation v. We write v ⊨ ϕ if I,v ⊨ ϕ, where I is the expected interpretation that was just described. We write ⊨ ϕ if ϕ is logically valid, i.e. is satisfied by all I and v. Also, we write sat(ϕ) to mean that ϕ is satisfied by some I and v – in other words, that ¬ϕ is not valid. We write ϕ ⊨ ψ if whenever I,v ⊨ ϕ then I,v ⊨ ψ. We write ϕ ≡ ψ if ϕ ⊨ ψ and ψ ⊨ ϕ.
§ A NEGOTIATION POLICY LANGUAGE
In this section we define the syntax of our negotiation
policy language. Informally, a policy consists of a name, a
list of the negotiators it refers to, the config. type it
supports, and a collection of rules. Each rule defines a
relationship between the negotiation service being supported
and the negotiation services being used.
Fig. <ref> shows a simple
negotiation policy for a storage service provider. The
provider is acting as a broker, obtaining storage from
two other providers. Informally, the policy says that
sub-providers s1 and s2 are used, and that the offer
made by the provider is related to the offers from the
sub-providers in the following way: the capacity offered
is the sum of the capacities offered by s1 and s2,
and the price offered is 10% more than the sum of the
prices offered by s1 and s2.
The “serves” part specifies the config. type supported
by the policy. In this example, 'storage' is defined to
be the configuration type shown in Section <ref>:
{capacity decimal, price decimal; capacity≥ 0 price≥ 0}.
The single rule of the policy specifies how the service is
provided. The “trigger” part of the rule is a condition
on queries from clients. The rule is “applicable" if the
conjunction of the trigger condition and the query are
satisfiable. In this example no condition is imposed on
queries.
The “uses” part of the rule shows that
storage negotiation servers with names s1 and s2 are to
be used as sub-negotiators.
The “offer” part of the rule
specifies how parameters of the offered service depend on
parameters of the sub-services. In this example, the
offered capacity is the sum of the capacities of the
sub-services, and the offered price is the sum of the prices
of the sub-services, plus a 10% markup.
The “constraint” part of the rule is a condition that must
hold between the service provided by the policy and the
negotiated sub-services. In this example no constraint is
used, but the constraint could be used to specify, for
example, that each sub-service provide half of the total
storage.
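The concrete policy syntax is given in Fig. <ref> and is not reproduced here; purely to illustrate the information a policy carries (the field names below are paraphrases, not the actual syntax), the storage-broker policy can be pictured as the following Python dictionary.

storage_policy = {
    "name": "storage_broker",
    "serves": {"params": {"capacity": "decimal", "price": "decimal"},
               "constraint": "capacity >= 0 and price >= 0"},
    "rules": [{
        "trigger": "true",                                   # no condition on queries
        "uses": {"s1": "storage", "s2": "storage"},
        "offer": {"capacity": "s1.capacity + s2.capacity",
                  "price": "1.1 * (s1.price + s2.price)"},   # 10% markup
        "constraint": "true",                                # no extra constraint
    }],
}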
Fig. <ref> defines the syntax of
policies. We use an extended version of BNF in which List(β)
indicates zero or more repetitions of the phrases defined by
grammar symbol β enclosed in square brackets and
separated by commas. Also, Struct(β_1,…,β_n)
indicates the phrases defined by symbols
β_1,…,β_n enclosed within curly brackets.
If a policy p serves config. type ct, then the trigger
of every rule must be a formula over ct. The constraint of
every rule must be a formula over ct,ct_1,…,ct_n, where
ct_i is the config. type of negotiation server s_i
appearing in the “uses" part of the rule. In an assignment
(nonterminal assn), every id must be a parameter
of ct, and the variables appearing in every term t must be
parameters of config. types ct_1,…,ct_n.
In the policy example of Fig. <ref>,
server prefixes are used to distinguish parameters
associated with different negotiation servers. This feature
is supported in our language, and is used in examples, but
for simplicity we will not support this feature in formal
definitions.
§ INTERPRETING NEGOTIATION POLICY
In this section we define the meaning of policies. The
basic idea is that a policy is interpreted as a process that
has a port for accepting queries, a port for returning
offers, and ports for interacting with sub-negotiators.
To get intuition for what follows, look at
Fig. <ref>, a message sequence diagram
showing how the process derived from the policy of
Fig. <ref> might behave. The
negotiator process accepts query c = 100 ∧ p ≤ 5 on
its input port (in the figure c is used for
capacity and p for price.) From this
query, the query to s_1 is formed in two steps. First the
“offers” part of the policy is used to relate the query to
a condition on the storage needed from s_1 and s_2.
Then this condition is “projected onto” the parameters of
s_1 itself by quantifying existentially, and then
eliminating the quantifier. The query to s_1 contains c
≤ 100 because sub-negotiator s_2 can provide any
memory not provided by s_1.
The offer returned from s_1 is c = 50 ∧ p = 3. This
offer must be used in computing the query for s_2. The
offer returned from s_2 is c = 50 ∧ p ≤ 17/11.
The offer sent to the client is formed by combining the two
sub-offers and the “offers” part of the policy, and then
projecting the result onto the parameters of the storage
service, again using existential quantification. In this
example the offer sent to the client exactly determines
values for all parameters of the service being negotiated,
but in general an offer is any formula over the
configuration type of the service being negotiated.
We define a negotiator as a process with an interface
consisting of port in, which accepts a query, and a
port out, which produces an offer. In what follows
we use the process algebra CCS <cit.>
to describe processes.
§.§ Core policy language
Rather than defining the meaning of policies directly, we
use the “core language approach”. In this approach a
minimal language is used for defining the language
semantics, and the full language is defined by translation
to the core language.
The syntax of the core language is as follows. A policy is
either a (core language) rule, or the composition p_1
⊕ p_2 of two policies, which are themselves expressed
in the core language. A rule has the form
ctϕs_1:ct_1,…,s_n:ct_nψ, where
ct is the config. type of the rule, ϕ is the
trigger condition of the rule, s_1:ct_n,…,s_n:ct_n
are the names and types of the negotiation servers used in
the rule, and ψ is the condition of the rule.
The translation from the full language to the core language
is straightforward, and so we only sketch it here. Let p
be a policy in the full language, containing rules r_1,…,r_n.
From p we derive r'_1 ⊕⋯⊕ r'_n, where
r'_i is the core language form of r_i. The config. type of
every r'_i is the config. type of p. The trigger
condition of r'_i is the trigger of r_i. The servers of
r'_i are the servers of r_i. Finally, the condition ψ
of r'_i is the conjunction of the constraint of r_i and
the formula derived from the assignment of r_i. A formula
is obtained from an assignment simply by replacing each
assignment symbol := with the logical symbol =.
§.§ Semantics of a rule
We now define the semantics of policies, which are
interpreted as processes. We begin with the case of a
policy that is a single rule. Informally, the process
denoted by a rule will accept query q, query each negotiation
server mentioned in the rule, awaiting a response before querying
the next server, and finally output an offer. The query for
each server incorporates responses from previous servers.
Let r be a rule
(ct, ϕ, s_1:ct_1, …, s_n:ct_n, ψ) in the core language syntax. Then the process ⟦r⟧ denoted by r is defined as follows:
⟦r⟧ ≝ P
where process P, and supporting processes P_1,…,P_n
are defined as follows:
P ≝ in(q).( if sat(q ∧ ϕ) then P_1 else out(false).P )
P_i ≝ in_s_i(q_i).out_s_i(r_i).P_{i+1}
P_{n+1} ≝ out(r).P
This definition says that when a query q arrives, the offer false is returned if the query is not consistent with the trigger condition of the rule.
Otherwise, sequentially, for each supporting process (representing a sub-provider),
a query is made and an offer is received. Finally, a top-level offer r is made.
We have not yet defined the queries q_i that are made to the
sub-servers, and the final response r.
As a first step, we define variable sets X, X_0, and X_i (for 1 ≤ i ≤ n):
X ≝ ct ∪ ⋃_{1 ≤ i ≤ n} ct_i
X_0 ≝ X - ct
X_i ≝ X - ct_i
The variable set X consists of the parameters of the top-level service and its immediate sub-services. The set X_0 consists of the parameters of the sub-services only.
Now we can define the final response r, and the queries
q_i to the sub-servers in terms of the responses r_i
from the sub-servers. If X = {x_1,…,x_n} is a set
of variables, then we write ∃ X. ϕ as shorthand
for ∃ x_1,…,x_n. ϕ.
q_i ≝ QE(∃ X_i. (q ∧ ψ ∧ ⋀_{1 ≤ j < i} r_j))
r ≝ QE(∃ X_0. (ψ ∧ ⋀_{1 ≤ i ≤ n} r_i))
Intuitively, the first of these definitions says that the query to the i^th sub-provider is a quantifier-free formula expressing what the client query q, the rule condition ψ, and the sub-offers already received from servers s_1,…,s_{i-1} jointly require of the parameters of s_i.
Recall that QE(ϕ) stands for a formula logically
equivalent to ϕ but with quantifiers eliminated.
Also, note that formula q_i is a formula over ct_i.
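The behaviour of a single rule can be summarised by the following schematic sketch (not the actual implementation); qe and conj are hypothetical helpers for quantifier elimination and conjunction, and each sub-negotiator object is assumed to expose an ask method performing the query/offer exchange.

def run_rule(q, psi, sub_negotiators, top_vars, sub_vars, qe, conj):
    # q: client query; psi: rule condition; top_vars: parameters of ct; sub_vars[i]: parameters of ct_i
    X = set(top_vars).union(*sub_vars)
    offers = []
    for i, s_i in enumerate(sub_negotiators):
        X_i = X - set(sub_vars[i])                 # quantify away everything except ct_i's parameters
        q_i = qe(conj([q, psi] + offers), X_i)     # q_i = QE(exists X_i. (q AND psi AND r_1 ... r_{i-1}))
        offers.append(s_i.ask(q_i))                # sub-offer r_i
    X_0 = X - set(top_vars)                        # quantify away the sub-service parameters
    return qe(conj([psi] + offers), X_0)           # r = QE(exists X_0. (psi AND r_1 ... r_n))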
§.§ Semantics of policy composition
Informally, the meaning of a composite policy is a process
that inputs a query, sends it to each of the sub-policies,
and then takes the disjunction of the responses. Formally,
the denotation p_1 ⊕ p_2 is defined as
follows
⟦p_1 ⊕ p_2⟧ ≝ ⟦p_1⟧ ⊕ ⟦p_2⟧
where process operator ⊕ is defined as follows:
P_1 ⊕ P_2 ≝ (P_3 | P'_1 | P'_2) ∖{in_1,out_1,in_2,out_2}
P_3 ≝ in(q).in_1(q).in_2(q).out_1(r_1).out_2(r_2).out(r_1 ∨ r_2).P_3
P'_1 ≝ P_1[in/in_1,out/out_1]
P'_2 ≝ P_2[in/in_2,out/out_2]
§.§ Closing a policy process
The process denoted by a policy communicates with
sub-services through ports of the form in_s and
out_s, where s is a server name. We need to also
consider how one constructs a server – i.e. a process with
only ports in and out– from such an “open
server”, along with a collection of servers that will be
used as sub-negotiators. Suppose p is a policy that
contains server names s_1,…,s_n, and P_1,…,P_n
is a collection of server processes. Then we define
p(P_1,…,P_n)
(p| P_1[f_1] |…| P_n[f_n])L
where relabelling function f_i maps in to
in_s_i and out to out_s_i, and
label set L is
{in_s_1,out_s_1,…,in_s_n,out_s_n}.
The (visible) ports of this process are only in and
out.
§ CORRECTNESS
We now look at whether policies behave as expected. We
focus on one correctness property: First, if sub-negotiators
positively respond to queries, will the negotiator defined
by the policy also do so? This is a kind of preservation
property, and can also be regarded as an assume/guarantee
condition. By “positively respond”, we mean that a query
q will be responded to with an offer that logically
“intersects” with q – in symbols: sat(q ∧ r).
A negotiator s over a configuration type ct is
responsive if, for all queries q in ct, it
responds to query q with an offer r, such that sat(q ∧ r).
Responsiveness is a strong property not expected of real
negotiators. A responsive negotiator is nearly miraculous,
in that it will at least partially satisfy every query. The
point of defining responsiveness is simply to show that the
server defined by a policy will be responsive if the servers
it uses are, too.
Let p be a policy and s_1,…,s_n be servers of the
appropriate type. Then if s_1,…,s_n are all responsive,
so is p(s_1,…,s_n).
We shall only sketch the proof here. The core idea is
illuminating and simple to understand, but the full proof
becomes awkward because of notation. The proof sketch is by
induction on the structure of policies. We first consider
the case in which a policy is a single rule. We need the
following simple fact about first-order logic.
Let ϕ and ψ be formulas of first-order logic, such
that variable x does not appear free in ϕ. Then
∃ x. (ϕ ∧ ψ) ≡ ϕ ∧ ∃ x. ψ.
Using this fact we can establish a fact about a first-order
logic formula that models a simple policy rule. Figure
<ref> shows the negotiators involved. For
simplicity, suppose the config. type of negotiator s
concerns only variable x, and that the config. types for
negotiators s_1 and s_2 concern only x_1 and x_2,
respectively. Suppose we have a query q (concerning only
x), and a formula ϕ that has at most variables x,
x_1, and x_2 free. This formula represents the
conjunction of the policy rule's constraint and the formula
derived from the rule's assignment. We want to show that if
sat(q_1 ∧ r_1) and sat(q_2 ∧ r_2), then sat(q ∧ r).
Suppose q is a formula containing at most x free, ϕ
is a formula containing at most x, x_1, and x_2
free, and formulas q_2 and r are defined as follows:
q_2 ≝ ∃ x,x_1. (q ∧ ϕ ∧ r_1)
r ≝ ∃ x_1,x_2. (r_1 ∧ r_2)
and sat(∃ x_2. (q_2 ∧ r_2)). Then
sat(∃ x. (q ∧ r)).
This proposition may seem to be missing the assumption
that sat(∃ x_1. (q_1 ∧ r_1)), capturing that
the first negotiation server is responsive. It is a little
surprising that this assumption is not needed. The
proof of the proposition is simple. We have
sat(∃ x_2. (q_2 ∧ r_2))
⟹ sat(∃ x_2. ((∃ x,x_1. (q ∧ ϕ ∧ r_1)) ∧ r_2))
⟹ sat(∃ x,x_1,x_2. (q ∧ ϕ ∧ r_1 ∧ r_2))
⟹ sat(∃ x. (q ∧ ∃ x_1,x_2. (r_1 ∧ r_2 ∧ ϕ)))
⟹ sat(∃ x. (q ∧ r))
which proves the proposition.
We now consider the second case of the proof sketch, in
which a policy is the composition of two policies.
Suppose the query is q, and the formula from the response
of the first policy is r_1, and the formula from the second
is r_2. By the def. of policy composition, we need to
show that sat(q ∧ (r_1 ∨ r_2)). By induction we assume sat(q ∧ r_1) and sat(q ∧ r_2), and from this it is trivial to show that sat(q ∧ (r_1 ∨ r_2)).
(End of proof sketch.)
§ EXTENSIONS TO THE LANGUAGE
§.§ Preferences
Offers can sometimes simplified by taking into account known
preferences of a clients. For example, suppose this offer
is computed by a negotiation server:
speed = 6 ∧ price ≤ 10 ∧ price ≥ 8
Clients seek lowest-priced offers, so the upper-bound is not
useful, and the offer should be simplified to speed = 6 ∧ price = 8.
To support such simplifications, we need a way to express
client preferences in policy, and logical manipulation to
support these preferences. For the language, we can
introduce two new clauses at the policy level, just after
the serves clause: maximize and
minimize. Each takes a list of parameters of the
configuration type served by the policy. The parameters
listed after maximize are ones the client
seeks to maximize, and similarly for minimize.
For example in the example policy of Fig. <ref>,
we could write “maximize capacity” and “minimize price”.
For the logical handling, if x is the parameter that the
client seeks to minimize, and ϕ is a condition with x
free, we can write the following.
ϕ ∧ ¬(∃ x'. (ϕ[x'/x] ∧ x' < x)).
This formula strengthens a condition on x by saying
there exists no value x', which must satisfy the
same condition as x does, but that is less than x.
The second conjunct is in L, so the existential
quantifier can be eliminated.
For example, consider again the formula
speed = 6 ∧ price ≤ 10 ∧ price ≥ 8.
By the construction we obtain formula
s = 6 ∧ p ≤ 10 ∧ p ≥ 8 ∧ ¬(∃ p'. (s = 6 ∧ p' ≤ 10 ∧ p' ≥ 8 ∧ p' < p))
The existentially-quantified formula is equivalent to s = 6 ∧ p > 8, so the formula as a whole is equivalent to s = 6 ∧ p = 8.
§.§ Extended Offers
In previous sections, an offer from a negotiator has been
defined as a formula over the configuration type supported
by the negotiator. An alternative is to define an offer
as a pair (r,t), where r is a formula as before, and
t is a token. The token is what is sometimes called
“opaque”– its structure is not visible to a client.
The token serves several purposes. First, the negotiator
can require that whenever a client accepts an offer, the
token for the offer is supplied. This is a means to avoid
counterfeit offers. Second, the token can be used by
the negotiator to record the sub-offers that “support”
a given offer. Then, if a client accepts an offer, the
negotiator can use the token to see which previously-received
sub-offers should be accepted.
Finally, tokens that capture sub-offers
are helpful in expressing policy correctness properties.
In particular, whether every offer is indeed supported by
sub-offers previously received by the negotiator.
We now briefly outline a form of token that records
sub-offers. For example, an offer from a home builder might
contain information, not visible to the potential client,
about the offers made by electrical and plumbing contractors
used by the home builder in defining its own offer. We call
such offers “extended offers”. The syntax of extended
offers is defined in Fig. <ref>. The
general form of an extended offer is (ϕ, t). The
specific form that records sub-offers is a policy-offer.
To support extended offers, the policy semantics given in
Section <ref> must be appropriately modified.
For example, in the semantics of policy composition, the
def. of process operator ⊕ must be modified so that
the value sent on port out is not r_1 r_2,
but (r_1 r_2, t_1 + t_2), where the values received
on ports out_1 and out_2 are t_1 and t_2,
respectively.
Finally, we briefly mention how extended offers can be
used to help capture a correctness condition of policies.
Suppose a negotiator defined by policy provides an
extended policy offer po. Then we expect the following
to hold:
* If po has the form (ϕ, (ϕ_1,t_1) + ⋯ + (ϕ_n,t_n)),
then ϕ ≡ ϕ_1 ∨ ⋯ ∨ ϕ_n.
* If po contains a rule-offer of the form
(ϕ, (assn, (ct_1, (ϕ_1,t_1)) ×⋯× (ct_n, (ϕ_n,t_n)))),
and X_0 ≝ ⋃_{1 ≤ i ≤ n} ct_i,
then ϕ ≡ ∃ X_0. (ϕ_1 ∧ ⋯ ∧ ϕ_n ∧ ϕ_assn).
where ϕ_assn means the logical condition derived from
assignment assn.
§.§ Parallel Queries to Sub-negotiators
The process derived from a policy rule queries the negotiators
that appear in the rule sequentially. In general this is
required, because constraints may exist over the sub-offers
from these negotiators. For example, in our storage example,
the combined capacity of the sub-offers is constrained.
However, the querying of two negotiators of a rule can be
performed in parallel if the rule places no constraints
between the two negotiators. For example, a storage policy
could be defined that would simply seek 50 GBytes of
storage from each of two servers, with a maximum price.
These queries could be run in parallel.
Let us be more precise about what it means for a rule to
place no constraints between negotiators. Suppose we have a
formula of L containing variables x and y. If
ϕ ≡ (∃ y.ϕ) ∧ (∃ x.ϕ),
then ϕ can be said to express no constraints between
x and y. Intuitively, this equivalence says that
condition ϕ on x and y can be fully captured as a
condition on x alone and a condition on y alone.
Using this idea it is straightforward to work out, using the
semantics of policy rules of
Section <ref>, whether the servers of a
rule can be split into two groups such that servers of the
two groups can be processed in parallel. This analysis
depends on quantifications of the formula q ψ that
appears in the semantics. In some cases the parallelization
can be done at compile time, independently of knowledge of
q (except to know that it is a formula over ct).
§ IMPLEMENTATION
We have developed software building blocks to support
service negotiation. As part of these we have developed
support for parsing and interpreting policies in our
policy language, including Java code to perform
quantifier elimination for linear inequalities over
the rationals.
Sometimes quantifier elimination is part of a
theorem-proving system, with the aim to show the validity of
a formula. Our goal is different: to compute a formula that
is logically equivalent to another formula, but without
quantifiers. The practical impact of this requirement is
that certain methods used with PA over the naturals, which
do not preserve logical equivalence, are not applicable in
our work. The situation is similar to the use of
Skolemization in theorem proving, which again does not
generally preserve logical equivalence.
Our current implementation uses the Fourier-Motzkin
algorithm <cit.> for quantifier
elimination. To eliminate a quantifier in a formula ϕ,
we first put ϕ into disjunctive normal form, convert
ϕ to a system of linear inequalities, eliminate the
variable of interest, and then convert it back to a logical
formula.
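A bare-bones version of the elimination step is sketched below (illustrative only; the representation of a conjunction of inequalities as coefficient maps over the rationals is an assumption of the sketch, not our actual data structures). Each row (coeffs, bound, strict) stands for Σ_v coeffs[v]·v ≤ bound, or < when strict is true.

from fractions import Fraction

def eliminate(rows, x):
    # Fourier-Motzkin elimination of variable x from a conjunction of linear inequalities
    lowers, uppers, rest = [], [], []
    for coeffs, bound, strict in rows:
        a = coeffs.get(x, Fraction(0))
        if a > 0:
            uppers.append((coeffs, bound, strict, a))      # row gives an upper bound on x
        elif a < 0:
            lowers.append((coeffs, bound, strict, a))      # row gives a lower bound on x
        else:
            rest.append((coeffs, bound, strict))
    out = list(rest)
    for cl, bl, sl, al in lowers:
        for cu, bu, su, au in uppers:                      # combine every lower/upper pair
            coeffs = {}
            for v in set(cl) | set(cu):
                if v == x:
                    continue
                c = cl.get(v, Fraction(0)) / (-al) + cu.get(v, Fraction(0)) / au
                if c != 0:
                    coeffs[v] = c
            out.append((coeffs, bl / (-al) + bu / au, sl or su))
    return out

# the running example: x < 5 and x > y and y > 0, eliminating y
phi = [({"x": Fraction(1)}, Fraction(5), True),                     # x < 5
       ({"y": Fraction(1), "x": Fraction(-1)}, Fraction(0), True),  # x > y
       ({"y": Fraction(-1)}, Fraction(0), True)]                    # y > 0
print(eliminate(phi, "y"))    # rows equivalent to x < 5 and x > 0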
Simplification steps are essential in keeping the
generated queries and replies simple. We perform
simplification both on linear inequalities and on
formulas. For the simplification of formulas we
developed a simple rewriting system that attempts
to apply rewrite objects throughout the abstract
syntax tree representation of a formula.
An obvious question in the application of logic to service
negotiation is whether it is practical, especially because
quantifier elimination in PA is doubly
exponential in the size of the formula <cit.>. We have
not yet run experiments, but there are reasons to be
optimistic. First, we expect that many services will not
have more than one or two dozen parameters in their
configuration types, meaning that the formulas should not be
large. (We say this in relation to problems like SAT
solving, which is applied to propositional formulas
containing hundreds of thousands, or even millions of
symbols.) Second, negotiation of a service happens
before its use, and we imagine that for most services
negotiation will happen much less frequently than service
use – although we do also anticipate “one-shot” services
(e.g., a high-def, secure video conference call) which are
used only once after being negotiated on.
§ RELATED WORK
There is much work in service negotiation, and the concept
of negotiation performed through a hierarchy of agents is
not new. For example, see the work on SNAP in <cit.>.
There is also existing work on negotiation policy languages;
for example, see <cit.> and <cit.>. However,
these languages are not based on a hierarchical negotiation
model. We know of no other work on negotiation policy
designed to support hierarchical negotiation.
There is also much work on the automation of service
composition and service selection. For example, see
<cit.>. It is important
to understand the difference between that work and
the work presented here. In this work we do not
compose services– we compose negotiations.
One can understand a rule in our policy language
as reflecting that a specific implementation of a
service is the composition of other (sub-) services.
The point of the policy negotiation is to make
sure that compatible variants of these sub-services
are obtained, and that the negotiated offer to the
client reflects the offers from the needed sub-services.
On the other hand, a rule of our policy language does
not express how the sub-services that are being
negotiated for can be put together to form a service.
§ CONCLUSIONS
We have seen how, by expressing client query and server
offers in logic, and by using quantifier elimination, it is
possible to support hierarchical negotiation using a simple
negotiation policy language. A service provider, to define
its negotiation strategy, must specify in policy only the
sub-negotiators to be used, and how negotiable parameters of
the service being negotiated on can be defined in terms of
parameters the negotiated service will need to use. Work
remains to be done to understand the range of services for
which this approach to negotiation is practical.
§.§.§ Acknowledgments
We thank Michael Benedikt for pointing us to quantifier
elimination as a logical means for projecting relations,
and Alan Jeffrey for suggesting that sets of offers be
expressed as logical conditions, and also for other
helpful discussion on the topic of service negotiation.
plain
10BDSN02
B. Benatallah, M. Dumas, Q.Z. Sheng, and A.H.H. Ngu.
Declarative composition and peer-to-peer provisioning of dynamic web
services.
In Data Engineering, 2002. Proceedings. 18th International
Conference on, pages 297–308. IEEE, 2002.
BC2011a
Glenn Bruns and Mauricio Cortes.
A hierarchical approach to service negotiation.
In Proceedings of IEEE International Conference on Web
Services. IEEE, 2011.
CF02
Karl Czajkowski and Ian Foster.
SNAP: A protocol for negotiating service level agreements and
coordinating resource management in distributed systems.
In 8th Workshop on Job Scheduling Strategies for Parallel
Processing, pages 153–183, 2002.
FR74
M.J. Fischer and M.O. Rabin.
Super-exponential complexity of Presburger arithmetic.
Symp. Appl. Math, volume VII of SIAM-AMS Proc., pp. 27-41,
1974.
GLD03
H. Gimpel, H. Ludwig, A. Dan, and B. Kearney.
PANDA: Specifying policies for automated negotiations of service
contracts.
ICSOC 2003, pages 287–302, 2003.
IRG06
O.H. Ibarra, B. Ravikumar, and C.E. Gerede.
Quality-aware service delegation in automated web service
composition: An automata-theoretic approach.
Journal of Automata, Languages, and Combinatorics, 11(2):169,
2006.
LBK06
A. Ludwig, P. Braun, R. Kowalczyk, and B. Franczyk.
A framework for automated negotiation of service level agreements in
services grids.
In Business Process Management Workshops, pages 89–101.
Springer, 2006.
Milner89
Robin Milner.
Communication and concurrency.
Prentice Hall, 1989.
Presburger1930
M. Presburger.
Über die Vollständigkeit eines gewissen Systems der
Arithmetik ganzer Zahlen, in welchen die Addition als einzige Operation
hervortritt.
1930.
Presburger1991
M. Presburger and D. Jabcquette.
On the completeness of a certain system of arithmetic of whole
numbers in which addition occurs as the only operation.
History and Philosophy of Logic, 12(2):225–233, 1991.
English translation of <cit.>.
schrijver1998theory
A. Schrijver.
Theory of linear and integer programming.
John Wiley & Sons Inc, 1998.
§ BRIEF OVERVIEW OF CCS
CCS is a process algebra created by Robin Milner <cit.>.
In CCS, a process is an algebraic term built up from a
collection of operators. Roughly, one can think of a CCS
term as a textual description of a state machine.
The operator 0 is the nil or deadlocked
process. This is like a state machine with a single state
and no outgoing transitions.
The operator . is the prefix operator, which takes
an action on the left and a process term on the right.
An action is just a name, like a, or a complimented
name, or co-name, like b. An example of the
prefix operator is a.0. This process can perform an
a action and then deadlocks. We say that a.0 can
perform an a action and then evolve to process 0.
The process a.b.0 can perform an a action
and then evolve to process b.0. In general, a
process α.P, where α is some action, and P
is a process, can perform α and evolve to process
P.
The operator + is the choice operator, which takes
two processes. An example is a.0 + b.0. This
process can perform either an a action or a b,
and then in either case deadlocks. Generally, if P can
perform an action α and evolve to process P', then
P + Q can perform α and evolve to process P', and
symmetrically for Q.
To provide for “looping”, recursive process definitions
are allowed. For example, one can define A ≝ a.A. Process A can repeatedly perform a actions. Generally, if A ≝ P, and if P can perform action
α and evolve to process P', then A can also
perform action α and evolve to P'. Another example
is P ≝ (a.P + b.0). Process definitions need
not be recursive.
The names in a process can be changed by using a
relabelling function. The relabelling operator of CCS
is written [f], where f is a relabelling function. For
example, (a.0)[a/b] is exactly like the process
b.0. Generally, if P can perform an action
α and evolve to P', then P[f] can perform action
f(α) and evolve to P'[f].
The formal meaning of a CCS process term is given as a
transition system, which is a directed graph in which the
nodes are CCS terms and the edges are labelled with actions.
For example, we understand the process a.0 as a
transition system with nodes a.0 and 0, and a
transition from the first to the second, labelled with
a. A CCS transition system differs from a finite
state machine: it can have infinitely many nodes, and no
states are marked as end states.
Two CCS process can be put “in parallel” using the
parallel composition operator |. For example,
(a.0 + b.0) | (a.0 + b.0).
When two processes are put in parallel, the resulting
process can behave in two ways. First, one of the
two components can act independently. In the example,
the first component can perform a, and the
composite evolves to (0 | (a.0 + b.0).
Second, the two components can synchronize, provided
they can perform complimentary actions. In the
example, the two components can synchronize on b
and b, resulting in distinguished action τ, and
the composite evolves to (0 | 0), which incidentally
behaves identically to 0.
So, in general, if P can perform α and evolve to
P', then P | Q can perform α and evolve to P'
| Q (and symmetrically for P). Also, if P can
perform α and evolve to P', and Q can perform
α and evolve to Q', then P | Q can
perform τ and evolve to P' | Q'. Action τ is
special: it is neither a name nor a co-name, and cannot be
complimented. It is therefore impossible for τ actions
to synchronize with other actions. Intuitively, a τ
action represents activity internal to a system that cannot
be observed outside the system.
The remaining CCS operator is the restriction operator.
The restriction operator is written ∖L, where L is a set of non-τ actions. Restriction prevents a process from performing an action in L. An example is (a.0 | ā.0)∖{a}. The two components of this process can synchronize, but cannot act independently. Generally, if P can perform action α and evolve to process P', and α and ᾱ are not in set L, then P∖L can perform α and evolve to P'∖L. ]
|
http://arxiv.org/abs/2307.02215v3
|
20230705114457
|
Stronger Quantum Speed Limit For Mixed Quantum States
|
[
"Shrobona Bagchi",
"Dimpi Thakuria",
"Arun Kumar Pati"
] |
quant-ph
|
[
"quant-ph"
] |
[email protected]
Center for Quantum Information, Korea Institute of Science and Technology, Seoul, 02792, Korea
[email protected]
Atominstitut, Technische Universität Wien, Stadionallee 2, 1020 Vienna, Austria
Quantum Information and Computation Group, Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019, India and Homi Bhabha National Institute, Anushaktinagar, Training School Complex, Mumbai 400085, India
[email protected]
Quantum Information and Computation Group, Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019, India and Homi Bhabha National Institute, Anushaktinagar, Training School Complex, Mumbai 400085, India
We derive a quantum speed limit for mixed quantum states using the stronger uncertainty relation for mixed quantum states and unitary evolution. We also show that this bound can be optimized over different choices of operators for obtaining a better bound. We illustrate this bound with some examples and show its better performance with respect to some earlier bounds.
Stronger Quantum Speed Limit For Mixed Quantum States
Arun Kumar Pati
August 1, 2023
=====================================================
§ INTRODUCTION
The uncertainty relations are of fundamental importance in quantum mechanics since the birth of quantum mechanics in the early nineties. The uncertainty principle was first proposed by Werner Heisenberg heuristically <cit.>. He provided a lower bound to the product of standard deviations of the position and the momentum <cit.> of a quantum particle. Not only this, the uncertainty relations are also capable of capturing the intrinsic restrictions in preparation of quantum systems, which are termed as the preparation uncertainty relations <cit.>. In this direction, Robertson formulated the so called preparation uncertainty relation for two arbitrary quantum-mechanical observables which are generally non-commuting <cit.>. However, the Robertson uncertainty relation do not completely express the incompatibility nature of two non-commuting observables in terms of uncertainty quantification and is not the most optimal nor the most tight one. It also suffers from the triviality problem of uncertainty relations. To improve on these deficiencies, the stronger variations of the uncertainty relations have been proved which capture the notion of incompatibility more efficiently and also provide an improved lower bound on the sum and product of variances of the generally incompatible observables <cit.>. On another note, and along the same lines of formulatio of uncertainty relations, the energy-time uncertainty relation <cit.> proved to be quite different from the preparation uncertainty relations of other observables such as the position and momentum or that of the angular momentum because time is not treated as an operator in quantum mechanics <cit.>. Thus, time not being a quantum observable, time-energy uncertainty relation lacked a good interpretation like for those of the other quantum mechanical observables such as position and momentum. Mandelstam and Tamm derived an uncertainty relation <cit.> which is now called an energy-time uncertainty relation. It follows from the Robertson uncertainty relation when we consider the initial quantum state and the Hamiltonian as the corresponding quantum mechanical operators <cit.> and Δ t as the time interval between the initial and final state after the evolution. An interpretation of this time energy uncertainty relation was given in terms of the so called quantum speed limit <cit.>. In the current literature, there are several other approaches to obtain quantum speed limits for closed quantum system dynamics <cit.> as well as for open quantum system dynamics <cit.>. Quantum speed limits have also been generalised to the cases of arbitrary evolution of quantum systems <cit.>, unitary operator flows <cit.>, change of bases <cit.>, and for the cases of arbitrary phase spaces <cit.>. Most recently, in another direction exact quantum speed limits have also been proposed <cit.>.
The notion of quantum speed limit is not only of fundamental importance, but also has many practical applications in quantum information, computation and communication technology. The quantum speed limit bounds have proven to be very useful in quantifying the maximal rate of quantum entropy production <cit.>, the maximal rate of quantum information processing <cit.>, quantum computation <cit.> in optimal control theory <cit.>, quantum thermometry <cit.> and quantum thermodynamics <cit.>. These explorations motivate us to find better quantum speed limit bounds that can go beyond the existing bounds in the literature. In this paper, we use the stronger uncertainty relation developed in <cit.>, then generalised to the case of mixed quantum states to derive a stronger form of quantum speed limit for mixed quantum states undergoing unitary evolution. We show that the new bound provides a stronger expression of quantum speed limit compared to the MT like bound for mixed quantum states. This bound can also be optimized over many operators. We then find various examples for mixed states and some example Hamiltonians that shows the better performance of our bound over the MT like bound for mixed quantum states and the bounds for mixed states in Ref. <cit.>.
The present article is organised as follows. In sections <ref> and <ref>, we give the background that includes the various forms of quantum speed limit for mixed quantum states <ref>, followed by the stronger uncertainty relations for mixed quantum states in <ref>. In section <ref>, we derive the stronger quantum speed limit for mixed quantum states respectively and show methods to calculate the set of operators obeying a necessary condition for the bound to hold true. In section <ref>, we show its better performance with examples of random Hamiltonians, specific examples of Hamiltonians that are useful in quantum computation, random quantum states respectively over three different previous bounds of quantum speed limit for mixed quantum states . Finally, in Section <ref> we conclude and point out to future directions.
§ BACKGROUND
§.§ Quantum Speed Limits
Quantum speed limit is one of the interpretations of the time-energy uncertainty relation in quantum mechanics. In particular, Mandelstam and Tamm derived the first expression of the quantum speed limit time as τ_QSL = πħ/2Δ H, where Δ H is the standard deviation of the Hamiltonian H driving the quantum system <cit.>. As an interpretation of their bound, they also argued that τ_QSL quantifies the lifetime of quantum states. Their interpretation was further solidified by Margolus and Levitin <cit.>, who derived an alternative expression for τ_QSL in terms of the expectation value of the Hamiltonian as τ_QSL = πħ/2⟨ H⟩. Eventually, it was also shown that the combined bound,
τ_QSL=max{πħ/2Δ H,πħ/2⟨ H⟩}
is tight. Many more versions of quantum speed limits have been proposed since then, with an intent to improve the previous bounds in terms of tightness and performance. In this direction, recently a stronger quantum speed limit for the pure quantum states has been proposed as follows.
τ≥ħ s_0/2 Δ H+∫_0^τR(t)dt,
where we have
R(t)=1/2|⟨Ψ^⊥(t)|A/Δ A± iH/Δ H |Ψ(t)⟩|^2.
The stronger quantum speed limit bound generally performs better than the MT bound for pure quantum states since it can be shown that for pure quantum states R(t)≥ 0 in general. On the other hand, quantum speed limits for the mixed quantum states have also been proposed in various forms <cit.>. Quantum speed limit can be extended to the case of mixed quantum states by defining the distance between the initial state ρ_0 and the final state ρ_t as their Bures angle ℒ (ρ _0,ρ _t)=arccos (ℱ (ρ _0,ρ _t)), with ℱ (ρ _0,ρ _t)=tr[√(√(ρ _0)ρ _t√(ρ _0))] being the Uhlmann root fidelity,
τ_ℒ=ℒ(ρ_0,ρ_t)/min{⟨ H⟩,Δ H},
where, ħ=1 has been set for convenience. It bounds the evolution time required to evolve the mixed state ρ_0 to the final state ρ_t by means of a unitary operator U_t, i.e., ρ _t=U_tρ_0 U_t^†, where the quantum system is governed by a time-dependent Hamiltonian H_t. There are many other forms of speed limits for mixed quantum states, which we leave for later investigation in future research.
In <cit.> another bound tighter than the MT bound was derived for the speed of unitary evolution. According to this bound, the minimum time required to evolve from state ρ to state σ by means of a unitary operation generated by the Hamiltonian H_t is bounded from below by
T_Θ(ρ,σ)=τ_2=Θ(ρ,σ)/Q_Θ where
Q_Θ=1/T∫_0^T dt√(2Tr(ρ_t^2H_t^2-(ρ_tH_t)^2)/Tr(ρ_t^2-1/N^2)) and
Θ(ρ,σ)=arccos√((Tr(ρσ)-1/N)/(Tr(ρ^2)-1/N))
where N is the dimension of the quantum system undergoing unitary evolution due to the time-independent Hamiltonian H. We mention this bound because it does not reduce to the MT bound in general. However, another bound proposed in the same paper does reduce to the MT bound for the case of pure states; it is given as follows
T_Φ(ρ,σ)=τ_2=Φ(ρ,σ)/Q_Φ where
Q_Φ=1/T∫_0^T dt√(Tr(ρ_t^2H_t^2-(ρ_tH_t)^2)/Tr(ρ_t^2)) and
Φ(ρ,σ)=arccos√(Tr(ρσ)/Tr(ρ^2))
We work with these different quantum speed limits for mixed quantum states and point out some examples where the newly derived quantum speed limit bound for mixed quantum states here performs better than the above bounds.
§.§ Stronger Uncertainty Relations for general mixed quantum states
Robertson gave a rigorous and quantitative formulation of Heisenberg's heuristic uncertainty principle, known as the preparation uncertainty relation <cit.>. It is stated as follows. For any two noncommuting operators A and B, the Robertson-Schrödinger uncertainty relation for the state ρ of the system is given by the following inequality:
Δ A^2Δ B^2≥ |1/2⟨[A,B]⟩|^2+|1/2⟨{A,B}⟩-⟨ A⟩⟨ B⟩|^2,
where the averages and the variances are defined with respect to the state ρ of the quantum system. However, this uncertainty bound is not optimal, and there have been several attempts to improve it. Here, we state a stronger, state-dependent bound obtained from an alternative uncertainty relation, also called the Maccone-Pati uncertainty relation <cit.>:
Δ AΔ B≥i/2Tr(ρ[A,B])/(1-1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2),
where Tr(ρ^1/2σ)=0 and ||σ||_2=1. This uncertainty relation has been proved to be stronger than the Robertson-Schrödinger uncertainty relation. It becomes an equality when maximized over all admissible σ, so that the optimized bound reads
Δ AΔ B≥max_σi/2Tr(ρ[A,B])/(1-1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2).
We can take the absolute values on both sides and then perform optimization, so that we get the following uncertainty relation
Δ AΔ B≥max_σ1/2|Tr(ρ[A,B])|/|(1-1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2)|.
We will use the above stronger uncertainty relations for mixed quantum states to derive a stronger version of quantum speed limits for mixed quantum states. See <cit.> for the proof of the stronger uncertainty relations for mixed quantum states.
§ RESULT: STRONGER QUANTUM SPEED LIMIT FOR UNITARILY DRIVEN MIXED QUANTUM STATES
Consider a general mixed quantum state evolving under a unitary operation generated by a Hamiltonian. The evolution time is then bounded from below as
τ≥τ_SQSLM= √(Tr(ρ_0^2))/2Δ H ×
∫_s_0(0)^s_0(τ)sin s_0(t)/(1-R(t))coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))ds_0,
where τ_SQSLM stands as a short form for the stronger quantum speed limit for mixed quantum states and we have the following definitions of the quantities expressed in the above equation
s_0(t)=2cos^-1|√(Tr(ρ(0)ρ(t))/Tr(ρ_0^2))|,
Δ H=√(Tr(H^2ρ)-(Tr(Hρ))^2)
R(t)=1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2,
where Tr(ρ^1/2σ)=0 and ||σ||_2=1,
where we denote ρ_0=ρ(0) and ρ=ρ(t) and use these interchangeably throughout; ||σ||_2=(∑_n∈ I⟨ e_n|σσ^†|e_n⟩)^1/2, with {|e_n⟩} a complete orthonormal basis of the Hilbert space ℋ, and σ∈ L^2(ℋ), i.e., σ belongs to the set of all Hilbert-Schmidt linear operators.
The proof of the above theorem goes as follows. We start by writing out the stronger uncertainty relation for mixed quantum states as is given by the following
Δ AΔ B≥1/2|Tr(ρ[A,B])|/|(1-1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2)|,
See <cit.> for the derivation of the above inequality. From the stronger uncertainty relation for mixed quantum states, we get the following
Δ A Δ H (1-R(t))≥1/2|Tr(ρ[A,H])|,
where we have defined R(t) as the following
R(t)=1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2
and have taken A=ρ_0 and B=H for the purpose of deriving the stronger quantum speed limit for mixed quantum states. This particular choice of operators allows us to formulate our inequality as a quantum speed limit for mixed quantum states. Also, for mixed quantum states, Ehrenfest's theorem gives
iħdTr(ρ A)/dt=Tr(ρ[A,H])
Therefore from the above equations, we get the following
Δ A Δ H (1-R(t))≥ħ/2|d⟨ A⟩/dt|
The variance of the operator A is then given by
Δ A^2 =Tr(ρ(0)^2ρ(t))-(Tr(ρ(0)ρ(t)))^2
=Tr(ρ_0^2ρ_t)-(Tr(ρ_0ρ_t))^2,
where we have used the notation ρ(0)=ρ_0 and ρ(t)=ρ_t. We can now take the following parametrization
⟨ A⟩=Tr(ρ(0)ρ(t))=Tr(ρ_0^2)cos^2s_0(t)/2.
Now, using the equation of motion for the average of A
|ħd/dt⟨ A⟩|=|⟨[A,H]⟩|,
where the averages are all with respect to the mixed quantum state ρ and the quantum mechanical hermitian operator A has no explicit time dependence. Thus, using Eq.(<ref>), we get
|d⟨ A⟩/dt| =Tr(ρ_0^2)sin s_0(t)/2ds_0/dt
Now let us analyze the structure of Δ A^2 as follows
Δ A^2=Tr(ρ_0^2ρ_t)-(Tr(ρ_0ρ_t))^2.
Let {|k⟩} be the eigenbasis in the spectral decomposition of the density matrix ρ_0. Then we have the following expression
ρ_0=∑_kλ_k|k⟩⟨ k| and ρ_0^2=∑_kλ_k^2|k⟩⟨ k|.
Using the above equation we obtain the following quantities
Tr(ρ_0ρ_t) =∑_kλ_k⟨ k|ρ_t|k⟩ andTr(ρ_0^2ρ_t) =∑_kλ_k^2⟨ k|ρ_t|k⟩.
We know that 0≤λ_k^2≤λ_k≤ 1 ∀ k, and also ⟨ k|ρ_t|k⟩≥ 0 ∀ k because ρ_t is a positive operator. Therefore, we get the following inequality
Tr(ρ_0ρ_t)≥Tr(ρ_0^2ρ_t).
Subtracting (Tr(ρ_0ρ_t))^2 from both sides of the above inequality we get
Tr(ρ_0ρ_t)-(Tr(ρ_0ρ_t))^2 ≥Tr(ρ_0^2ρ_t)-(Tr(ρ_0ρ_t))^2
=Δ A^2.
Now, using Eq.(<ref>) we get
Tr(ρ_0^2)cos^2 s_0(t)/2(1-Tr(ρ_0^2)cos^2 s_0(t)/2)≥Δ A^2
Taking square root on both sides and multiplying by Δ H we get
√(Tr(ρ_0^2))coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))Δ H≥Δ AΔ H.
From here, we get the following
√(Tr(ρ_0^2))coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))Δ H (1-R(t))
≥Δ AΔ H (1-R(t)),
since (1-R(t)) is a positive quantity here.
From the previous equations we get the following
√(Tr(ρ_0^2))coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))Δ H (1-R(t))
≥Δ AΔ H (1-R(t))≥ħ/2|d⟨ A⟩/dt|=Tr(ρ_0^2)sin s_0(t)/2ds_0/dt,
Therefore, from the above equations we get the following
coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))Δ H ≥
√(Tr(ρ_0^2))/(1-R(t))sin s_0(t)/2ds_0/dt,
Integrating both sides with respect to t and s_0 over the corresponding ranges, we obtain, for a time-independent Hamiltonian, the following expression for the quantum speed limit
τ≥√(Tr(ρ_0^2))/2Δ H ×
∫_s_0(0)^s_0(τ)sin s_0(t)/(1-R(t))coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))ds_0,
where the definitions of the parametrizations are as stated in the theorem. The quantum speed limit bound for mixed quantum states can also be derived in a different way: by rearranging terms between the left- and right-hand sides of the previous equations, the bound can be written in an alternative form, obtained step by step as follows. We start from the following inequality after rearranging the terms
coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))Δ H (1-R(t)) ≥
√(Tr(ρ_0^2))sin s_0(t)/2ds_0/dt,
Integrating the above equation we get the following quantum speed limit bound for mixed quantum states
τ≥
∫_s(0)^s(τ)√(Tr(ρ_0^2))sin s_0(t)/2Δ H coss_0(t)/2√((1-Tr(ρ_0^2)cos^2 s_0(t)/2))ds_0+
∫_0^τR(t)dt,
From the above equations, we get the following
τ≥[2 cos^-1(√(Tr(ρ_0^2))coss_0/2)/Δ H]_s(0)^s(τ)+∫_0^τR(t)dt
Evaluating at the integration limits, we get the following equation for time-independent Hamiltonians
τ≥[2 (cos^-1(√(Tr(ρ_0ρ_t)) )-cos^-1(√(Tr(ρ_0^2)) ))/Δ H]
+∫_0^τR(t)dt
It is easy to see that the above bound reduces to that of the stronger quantum speed limit bound for pure states when we take Tr(ρ_0^2)=1, which performs better than the MT bound for pure quantum states.
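For concreteness, the closed-form part of this bound is easy to evaluate numerically. The sketch below is our own illustration (not code accompanying this paper): it computes the geometric term 2(cos^-1√(Tr(ρ_0ρ_τ))-cos^-1√(Tr(ρ_0^2)))/Δ H for a time-independent Hamiltonian with ħ=1, while the non-negative correction ∫R(t)dt, which requires a choice of σ (see the next subsection), is omitted. The qutrit state matches the one used in the examples below; the Hamiltonian is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.linalg import expm

def geometric_term(rho0, H, tau):
    """2*(arccos sqrt(Tr(rho0 rho_tau)) - arccos sqrt(Tr(rho0^2))) / Delta_H, hbar = 1."""
    U = expm(-1j * H * tau)
    rho_tau = U @ rho0 @ U.conj().T
    var_H = (np.trace(H @ H @ rho0) - np.trace(H @ rho0) ** 2).real
    overlap = np.sqrt(np.trace(rho0 @ rho_tau).real)
    purity = np.sqrt(np.trace(rho0 @ rho0).real)
    return 2.0 * (np.arccos(overlap) - np.arccos(purity)) / np.sqrt(var_H)

# qutrit state used in the examples below; H is an arbitrary illustrative choice
rho0 = np.diag([0.2, 0.5, 0.3]).astype(complex)
H = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
print(geometric_term(rho0, H, tau=0.5))
```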
§.§ Method to find σ, such that Tr(ρ^1/2σ)=0
For the purpose of calculating our bound, we need to identify operators σ, or a set of such operators, satisfying the condition Tr(ρ^1/2σ)=0. In the following subsections, we describe two different ways of doing so and apply them to examples thereafter.
§.§.§ Method I: ρ and σ ∈ orthogonal subspaces
In this section we describe a method for finding σ such that the condition Tr(ρ^1/2σ)=0 holds. Let us first state the properties that σ must satisfy: ||σ||_2=1, where ||σ||_2=(∑_n∈ I⟨ e_n|σ^†σ|e_n⟩)^1/2, and σ∈ L^2(ℋ). Let us take the following definitions
ρ=∑_kλ_k|k⟩⟨ k|,    ρ^1/2=∑_kλ_k^1/2|k⟩⟨ k|,
where ∑_kλ_k=1 is fixed by the normalization of ρ and we take the positive square root of each λ_k. Note that we have written ρ in its eigenbasis; it can be transformed back to any other basis by a unitary transformation, and the same holds correspondingly for ρ^1/2. In this way ρ^1/2 is, like ρ, a positive semidefinite Hermitian operator. Let us denote λ_k^1/2=η_k for convenience. Therefore, following this notation, we have
ρ^1/2=∑_kη_k|k⟩⟨ k|.
Therefore from the condition Tr(ρ^1/2σ)=0, we get
Tr(∑_kη_k|k⟩⟨ k|σ)=0.
This translates to the following condition
∑_kη_k⟨ k|σ|k⟩=0.
We know that η_k≥ 0 ∀ k, since by construction we take only the positive square root of each λ_k. If we additionally require σ to be a positive operator, then ⟨ k|σ|k⟩≥ 0 ∀ k, and the condition above forces ⟨ k|σ|k⟩=0 on the support of ρ. One way to achieve this is to choose σ from the subspace orthogonal to the support of ρ. Note that ρ is fixed here; we only have the freedom to choose σ from the orthogonal subspace, and we can therefore optimize our stronger quantum speed limit bound over all such choices of σ. For mixed quantum states, this choice of σ is only available in Hilbert spaces of dimension higher than that of a qubit, and only when ρ is not of full rank.
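As a concrete illustration of this construction (our own sketch, with an arbitrary rank-deficient qutrit state as a placeholder), one can take ρ supported on a two-dimensional subspace and σ supported on the remaining orthogonal direction:

```python
import numpy as np

rho = np.diag([0.6, 0.4, 0.0])                 # rank-deficient qutrit state (placeholder)
sqrt_rho = np.diag(np.sqrt(np.diag(rho)))      # rho^{1/2} in the same eigenbasis
sigma = np.zeros((3, 3))
sigma[2, 2] = 1.0                              # supported on the orthogonal subspace

print(np.trace(sqrt_rho @ sigma))              # 0.0, so Tr(rho^{1/2} sigma) = 0
print(np.trace(sigma @ sigma.conj().T))        # 1.0, so ||sigma||_2 = 1
```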
§.§.§ Method II: A form of σ written directly in terms of ρ and Hermitian operators.
There is another method that allows one to construct an operator satisfying the condition Tr(ρ^1/2σ)=0 in an easier way. Such a σ can be written in the following form
σ=O-⟨ O⟩/Δ Oρ^1/2,
where O is any Hermitian operator. With this choice, the conditions Tr(ρ^1/2σ)=0 and Tr(σσ^†)=1 are satisfied automatically, as proved in the following paragraphs.
The proof of the first condition Tr(ρ^1/2σ)=0 goes as follows.
Tr(ρ^1/2σ)=Tr(ρ^1/2O-⟨ O⟩/Δ Oρ^1/2)
=1/Δ OTr(ρ(O-⟨ O⟩))=0
Now we show that the σ defined in this way also satisfies the condition Tr(σσ^†)=1. This is as follows.
Tr(σσ^†)=Tr(O-⟨ O⟩/Δ Oρ^1/2(O-⟨ O⟩/Δ Oρ^1/2)^†)
=Tr((O-⟨ O⟩/Δ O)ρ^1/2ρ^1/2(O-⟨ O⟩/Δ O))
=Tr((O-⟨ O⟩/Δ O)ρ(O-⟨ O⟩/Δ O))
=Tr(ρ(O-⟨ O⟩/Δ O)^2)=1
As a result, we have obtained another set of operators σ satisfying the conditions required for deriving the stronger quantum speed limit bound for mixed quantum states. Moreover, since O can be any Hermitian operator, we obtain a large family of admissible σ, one for each choice of O. With this construction of σ, the stronger quantum speed limit bound simplifies further, as follows. We start with the expression of R(t), which is
R(t)=1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)σ) |^2.
We put the expression of σ as described in this section and find the following expression for R(t)
R(t)=1/2|Tr(ρ^1/2(A/Δ A± i B/Δ B)(O-⟨ O⟩/Δ Oρ^1/2)) |^2.
Using the cyclic property of the trace function, therefore we arrive at the following simplified version of R(t)
R(t)=1/2|Tr(ρ(A/Δ A± i B/Δ B)(O-⟨ O⟩/Δ O)) |^2.
The above expression is computationally more efficient: to evaluate the stronger speed limit bound for mixed quantum states one no longer needs to compute the square root of ρ, which makes the calculation of the bound faster and simpler. We apply this technique to the examples in the next section.
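Before turning to the examples, this construction can be summarized in a few lines of code. The sketch below is our own illustration under the stated assumptions (an arbitrary qutrit state, Hamiltonian, and Hermitian operator O serve as placeholders, and all averages are taken in the state ρ(t)): it builds σ from O, verifies the two conditions on σ, and evaluates R(t) through the simplified trace expression above with A=ρ_0 and B=H.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def variance(op, state):
    return (np.trace(op @ op @ state) - np.trace(op @ state) ** 2).real

def R_of_t(rho0, rho_t, H, O, sign=+1.0):
    # R(t) = (1/2) |Tr( rho_t (A/dA + i B/dB) (O - <O>)/dO )|^2 with A = rho_0, B = H
    dA, dB, dO = (np.sqrt(variance(X, rho_t)) for X in (rho0, H, O))
    shifted_O = (O - np.trace(O @ rho_t) * np.eye(len(rho_t))) / dO
    M = (rho0 / dA + sign * 1j * H / dB) @ shifted_O
    return 0.5 * np.abs(np.trace(rho_t @ M)) ** 2

rho0 = np.diag([0.2, 0.5, 0.3]).astype(complex)                  # placeholder state
H = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)   # placeholder Hamiltonian
O = np.diag([1.0, 0.0, -1.0]).astype(complex)                    # placeholder Hermitian O

# check Tr(rho^{1/2} sigma) = 0 and Tr(sigma sigma^dagger) = 1 at t = 0
sigma = ((O - np.trace(O @ rho0) * np.eye(3)) / np.sqrt(variance(O, rho0))) @ sqrtm(rho0)
print(np.trace(sqrtm(rho0) @ sigma), np.trace(sigma @ sigma.conj().T))

rho_t = expm(-1j * H * 0.3) @ rho0 @ expm(1j * H * 0.3)
print(R_of_t(rho0, rho_t, H, O))
```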
§ EXAMPLES
§.§ Random Hamiltonians
In this section, we calculate the tighter quantum speed limit bound and compare it with the MT-like bound for mixed states, using random Hamiltonians drawn from the Gaussian Unitary Ensemble (GUE). Random Hamiltonians from the GUE have found use in many different areas; our reason for choosing them is that they provide valid Hamiltonians that are sufficiently diverse to test the performance of our stronger quantum speed limit bound for mixed quantum states and unitary evolutions in a variety of cases.
Mathematically, a random Hamiltonian is a D×D Hermitian operator H in D×D dimensional Hilbert space, drawn from a Gaussian unitary ensemble (GUE). The GUE is described by the following probability distribution function
P(H)= Ce^-D/2Tr(H^2)
where C is the normalization constant and the elements of H are drawn from a Gaussian probability distribution, with H Hermitian. A random Hamiltonian dynamics is a unitary time evolution generated by a fixed, time-independent GUE Hamiltonian.
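A minimal way of drawing such Hamiltonians (our own sketch; the entry-wise scaling is chosen so that the matrix weight matches the exp(-(D/2)Tr(H^2)) density quoted above) is the following:

```python
import numpy as np

def gue_hamiltonian(D, rng):
    A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    H = (A + A.conj().T) / 2.0            # Hermitian; diag variance 1, off-diag component variance 1/2
    return H / np.sqrt(D)                 # rescale to the exp(-(D/2) Tr H^2) weight

H = gue_hamiltonian(3, np.random.default_rng(0))
print(np.allclose(H, H.conj().T))         # True: H is Hermitian
```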
We take the Hilbert space of dimension 3 for our numerical example as shown in Fig.<ref>. The initial state is taken as the following
ρ_0=0.2|0⟩⟨ 0|+0.5|1⟩⟨ 1|+0.3|2⟩⟨ 2|
Following the second method of generating an appropriate σ from a set of Hermitian operators O, we obtain the quantum speed limit bound for the mixed quantum states. We compare the performance of our optimized bound with the previous bounds and with the non-optimized version of our bound, as given in the figures. From both subfigures <ref> and <ref> of Fig.<ref>, we clearly see that the theory is borne out: Δ=τ_SQSL-τ_MT is always positive, showing that the stronger quantum speed limit bound always outperforms the MT-like bound for mixed quantum states under unitary evolution. In Fig.<ref>, all values of Δ vanish at t=0 because the evolution operator is the identity at t=0. All the Hamiltonians taken here are time independent by construction. In subfigure <ref> we perform an optimization over different sets of σ to obtain a better bound, whereas in subfigure <ref> we still get good results even without any optimization. In the figures here, and in the examples of the following sections, dp denotes the difference between our bound and the MT-like bound when the + sign is used in front of R(t) in Eq.(<ref>), and dm denotes the same difference when the - sign is used, unless stated otherwise. We also perform optimization of our bound over small sets of σ and note that our bound performs better with or without optimization in these cases, as exemplified by the figures. The optimization is simple and typically completes within about a minute for small sets of 5 or 10 operators σ, as stated in the figure captions, which makes our method computationally practical and feasible. This simple optimization also gives a noticeable improvement of the bounds, as demonstrated by the figures in this example and in the following sections. However, since there is no closed form of the optimized bound for an arbitrary Hamiltonian, we cannot tell a priori which optimized version will give the best bound and in which region; we leave this as an open question for future investigation.
§.§ Anisotropic multiqubit Heisenberg spin chain
Much attention has been devoted to the study of graph states, which are an important and central resource in quantum error correction, quantum cryptography, and practical quantum metrology in the presence of noise. Owing to this importance for quantum information processing tasks, we consider the entangling Hamiltonian used for graph-state generation in the multiqubit case, which reads
H=∑_i=1^Nλ^z_iσ^z_i+∑_i=1^Nλ^zzσ^z_iσ^z_i+1-
∑_i=1^Nλ^xxσ^x_iσ^x_i+1-∑_i=1^Nλ^yyσ^y_iσ^y_i+1
In experiments, the above Hamiltonian arises in physical implementations based on optical lattices of ultracold bosonic atoms. It is the anisotropic Heisenberg spin model of the optical lattice, which can also be written in an appropriate way using creation and annihilation operators. The Hamiltonian contains local terms as well as nearest-neighbour interaction terms for N spins, which can be mapped to N qubits. In general, the coefficients {λ} are time dependent; for simplicity we take them to be time independent here, and calculate the quantum speed limit bound for initially mixed quantum states evolving under this Hamiltonian.
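For the two-qubit case used below, the Hamiltonian can be assembled directly from Pauli matrices. The sketch below is our own illustration: the coupling values are placeholders rather than the ones used for the figures, and only the single nearest-neighbour bond of an open two-qubit chain is kept (an assumption on our part).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

lz, lzz, lxx, lyy = 0.5, 1.0, 0.3, 0.3        # placeholder couplings (not the paper's values)
H = (lz * (np.kron(sz, I2) + np.kron(I2, sz))
     + lzz * np.kron(sz, sz)
     - lxx * np.kron(sx, sx)
     - lyy * np.kron(sy, sy))
print(np.allclose(H, H.conj().T))             # True: a Hermitian two-qubit Hamiltonian
```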
We take the Hilbert space of dimension 4 for numerical example 1 as shown in the subfigures <ref> and <ref> of Fig.<ref>, i.e., for the case of two qubits. The initial state is taken as the following
ρ_0=0.7|0⟩⟨ 0|+0.1|1⟩⟨ 1|+0.1|2⟩⟨ 2|+0.1|3⟩⟨ 3|
Following the second method of generating an appropriate σ, we obtain the quantum speed limit bound for the mixed quantum states. We check our bound for the above initial mixed quantum state under the action of the anisotropic Heisenberg spin chain Hamiltonian and compare the performance of our optimized bound with the previous bound. From subfigures <ref> and <ref> of Fig.<ref>, we clearly see that Δ=τ_SQSL-τ_MTL is always positive, showing that the tighter quantum speed limit bound always outperforms the MT-like bound for mixed quantum states. The same holds for example 2, shown in subfigures <ref> and <ref> of Fig.<ref>, where a different instance of the anisotropic Heisenberg spin chain with a different set of parameters, but the same underlying model, is considered. Since we cannot tell a priori which optimized version will give the best bound and in which region, we leave this as an open question for future investigation.
§.§ Perfect state transfer Hamiltonian
Here, we take the example of a Hamiltonian which is useful for the case of perfect quantum state transfer, as quantum state transfer is one of the important quantum information processing tasks. The Hamiltonian describing the case of perfect state transfer is given by the following
H=∑_n=1^N-1 J_nσ_n^zσ^z_n+1+∑_n=1^N B_nσ_n^x,
where N is the number of qubits.
As specific numerical examples, we take the Hilbert space of dimension 4, i.e., for the case of two qubits. In this case, we take J_k=1/2, B_k=1/2 and then the Hamiltonian reads as the following for the case of two qubits as
H=J_1(σ^z⊗σ^z)+B_1(σ^x⊗𝕀)+B_2(𝕀⊗σ^x).
The initial state is taken as the following
ρ_0=0.7|0⟩⟨ 0|+0.1|1⟩⟨ 1|+0.1|2⟩⟨ 2|+0.1|3⟩⟨ 3|.
We obtain the quantum speed limit bound for the mixed quantum states by the same procedure as in the previous examples. We check our bound for the initial mixed quantum state stated above under the action of the perfect state transfer Hamiltonian and compare the performance of our optimized bound with the previous MT-like bound for mixed quantum states. From subfigures <ref> and <ref> of Fig.<ref>, we clearly see that Δ=τ_SQSL-τ_MTL is always positive, showing that the tighter quantum speed limit bound always outperforms the MT-like (MTL) bound for mixed quantum states.
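For reference, the MT-like bound τ_ℒ that we compare against can be evaluated directly from the Uhlmann root fidelity. The sketch below is our own illustration for the two-qubit perfect state transfer Hamiltonian above with J_1=B_1=B_2=1/2, the initial state given above, ħ=1, and an illustrative evolution time.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * np.kron(sz, sz) + 0.5 * np.kron(sx, I2) + 0.5 * np.kron(I2, sx)

rho0 = np.diag([0.7, 0.1, 0.1, 0.1]).astype(complex)
tau = 1.0                                            # illustrative evolution time
U = expm(-1j * H * tau)
rho_t = U @ rho0 @ U.conj().T

sq = sqrtm(rho0)
fidelity = np.trace(sqrtm(sq @ rho_t @ sq)).real     # Uhlmann root fidelity
bures_angle = np.arccos(min(fidelity, 1.0))
meanH = np.trace(H @ rho0).real
dH = np.sqrt((np.trace(H @ H @ rho0) - np.trace(H @ rho0) ** 2).real)
print(bures_angle / min(meanH, dH))                  # MT-like bound tau_L, hbar = 1
```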
§.§ Hamiltonian evolution of a separable state
Here, we take the example of another type of Hamiltonian which drives the evolution of an initially mixed quantum state which we take to be a separable quantum state. The Hamiltonian describing this case is given by the following
H=∑_i=1^M H_i , H_i=ωħ∑_n=0^N-1 n |n⟩⟨ n|
where M is the number of qubits and N is the dimension of each subsystem.
As mentioned, we take the initial state to be a separable mixed state; this choice bears no particular importance. For our numerical example we consider a system of two qutrits. Even in this case, the evaluation of the stronger quantum speed limit for mixed states takes only a fraction of a minute, including an optimization over a set of 5 operators σ. This indicates that the quantum speed limit for mixed quantum states can be computed for a wide variety of quantum systems of different dimensions, here dimension 9. We demonstrate a particular example by taking the following initial quantum state
ρ_0=a|0⟩⟨ 0|+b|1⟩⟨ 1|+c|2⟩⟨ 2|
+(1-a-b-c-d-e)|3⟩⟨ 3|+d|7⟩⟨ 7|+e|8⟩⟨ 8|,
where the parameters are a = 0.175, b = 0.25, c = 0.15, d = 0.105, e = 0.255, and we have set ωħ=1 without loss of generality. The choice of these parameters is arbitrary, and a different choice has no effect on the computational complexity of the stronger quantum speed limit bound for mixed quantum states. Next, we obtain the quantum speed limit bound for the mixed quantum states by the same procedure as in the previous examples. We plot our results in Fig.<ref>. From this figure, we again see that our bound gives a good improvement over the previous MTL quantum speed limit bound, with Δ=τ_SQSL-τ_MTL always positive. The apparent scatter between points can be attributed to the fact that we always choose a random eigenbasis for the calculation of our bound.
§.§ Two qubit CNOT Hamiltonian
The two-qubit CNOT gate is an important case, as it is part of a universal gate set with which all quantum computations can be performed. We therefore choose a Hamiltonian that generates the two-qubit CNOT gate. One such Hamiltonian, also called the principal Hamiltonian, is given by
H=πσ_z^-⊗σ_x^-
where we have used the following notation
σ_z^±=𝕀±σ_z/2, σ_x^±=𝕀±σ_x/2.
We calculate the quantum speed limit bound for evolution under this Hamiltonian for initially mixed quantum states.
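As a sanity check on this choice of generator (our own sketch, with ħ=1), evolving for unit time under the principal Hamiltonian reproduces the CNOT gate exactly:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sz_minus = (I2 - sz) / 2                 # |1><1|
sx_minus = (I2 - sx) / 2
H = np.pi * np.kron(sz_minus, sx_minus)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(np.allclose(expm(-1j * H), CNOT))  # True
```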
We take the Hilbert space of dimension 4 for our numerical example as represented in subfigures <ref> and <ref> of Fig.<ref>, i.e., for the case of two qubits. The initial state is taken as the following
ρ_0=0.7|0⟩⟨ 0|+0.1|1⟩⟨ 1|+0.1|2⟩⟨ 2|+0.1|3⟩⟨ 3|
As in all the previous examples, we calculate the stronger quantum speed limit bound using the same methods. We check our bound for the above choices of initial mixed quantum state and Hamiltonian and compare the performance of our optimized bound with the previous bound; the optimization is over 10 operators σ, as in the cases above. From the figure, we clearly see that Δ=τ_SQSL-τ_MTL is always positive, showing that the stronger quantum speed limit bound derived in this article outperforms the MT-like (MTL) bound for mixed quantum states. It is natural to expect that our stronger speed limit bound will outperform the MT-like bound for mixed quantum states even further when the optimization is performed over a larger set of σ.
§.§ Comparison with other bounds: Perfect state transfer Hamiltonian.
Here, we take the example of perfect quantum state transfer to compare our stronger quantum speed limit bound for mixed quantum states with two other important existing quantum speed limit bounds for mixed quantum states. The Hamiltonian is given by Eqs.(<ref>) and (<ref>), and the initial quantum state by Eq.(<ref>). We obtain our quantum speed limit bound as before and compare the performance of our optimized bound with the two quantum speed limit bounds for mixed quantum states given in <cit.>; note that the bounds of <cit.> are better than MT-like bounds for most qubit states. From subfigures <ref> and <ref> of Fig.<ref>, we see that our bound is better than the second and third existing quantum speed limit bounds of <cit.> in these cases, with the minimal number of optimizations stated in the respective figure captions. The optimization is simple and completes within about a minute for five operators, so it is highly practical and feasible. We notice that figures <ref> and <ref> look almost identical; to check whether they are actually numerically identical, we plot the difference between the second and third quantum speed limit bounds of <cit.> in <ref>, which shows that they actually differ by a small margin. Next, we check whether the + and - signs in front of R(t) make a difference in our stronger quantum speed limit bounds. We again choose the perfect state transfer Hamiltonian and plot these bounds in <ref>, as explained in Fig.<ref>. We see that, for the perfect state transfer Hamiltonian, the stronger speed limit bound with the plus sign in R(t) and the one with the minus sign both differ from the second and third previous quantum speed limit bounds of <cit.>. Here dp denotes the difference between our bound with the + sign in front of R(t) in Eq.(<ref>) and the second (blue) and third (red) bounds, while dm denotes the difference between our bound with the - sign in front of R(t) in Eq.(<ref>) and the second (orange) and third (green) bounds, which highlights all the essential differences between these bounds. This plot also demonstrates that our bound, Eq.(<ref>), performs better than the previous bounds for both the + and - signs in front of R(t).
§ CONCLUSIONS
In this work, we have derived a stronger quantum speed limit for mixed quantum states using the mixed-state generalization of the stronger preparation uncertainty relations. We have shown that this bound reduces to the pure-state bound under appropriate conditions. We have then discussed methods to construct the operators σ needed to calculate our bound. Using random Hamiltonians drawn from the Gaussian Unitary Ensemble, we have shown numerically that our bound performs better than the mixed-state version of the MT bound; random Hamiltonians were chosen simply because they provide valid Hamiltonians that are unlike each other. We have also shown, using several analytical examples of Hamiltonians relevant to quantum information and computation tasks, that the stronger quantum speed limit bound derived here for mixed quantum states performs better than the MT-like bound and than two further quantum speed limit bounds for mixed quantum states existing in the current literature. Comparing our bound with other bounds for mixed quantum states in the literature remains an open direction for future work.
§ ACKNOWLEDGEMENTS
S.B. acknowledges discussions with Abhay Srivastav of Harish-Chandra Research Institute, Allahabad, India on an earlier version of the draft of this paper. S. B. acknowledges support from the National Research Foundation of Korea (2020M3E4A1079939, 2022M3K4A1094774) and the KIST institutional program (2E31531). D.T. acknowledges the support from the INFOSYS scholarship and hospitality at Harish-Chandra Research Institute, Allahabad and affiliation of Homi Bhaba National institute during her stay at Harish-Chandra Research Institute. A. K. P. acknowledges the support from the QUEST Grant Q-117 and J C Bose grant from the Department of Science and Technology, India.
|
http://arxiv.org/abs/2307.01325v1
|
20230703195453
|
Robust Uncertainty Estimation for Classification of Maritime Objects
|
[
"Jonathan Becktor",
"Frederik Scholler",
"Evangelos Boukas",
"Lazaros Nalpantidis"
] |
cs.LG
|
[
"cs.LG",
"cs.RO"
] |
We explore the use of uncertainty estimation in the maritime domain, showing its efficacy on a toy dataset (CIFAR10) and proving it on an in-house dataset, SHIPS. We present a method that joins the intra-class uncertainty obtained using Monte Carlo Dropout with recent advances in the field of outlier detection, to gain more holistic uncertainty measures. We explore the relationship between the introduced uncertainty measures and examine how well they work on CIFAR10 and in a real-life setting. Our work improves the FPR95 by 8% compared to the current highest-performing work when the models are trained without out-of-distribution data, and by 77% compared to a vanilla implementation of the Wide ResNet. We release the SHIPS dataset and show the effectiveness of our method by improving the FPR95 by 44.2% with respect to the baseline. Our approach is model agnostic, easy to implement, and often does not require model retraining.
§ INTRODUCTION
The autonomous operation of robots, including autonomous ships, heavily relies on the perception of the world around them. Camera-based perception has seen a lot of growth in the past years, which is especially evident on many well-known datasets, such as CIFAR10 and ImageNet, where models achieve almost perfect performance. However, this level of performance is often only achieved after careful reparameterization; the resulting models produce very accurate results on the specific datasets, but their performance degrades dramatically when applied to real-world robot operations with noisy and previously-unseen targets. Furthermore, the output of these models does not provide a usable measure of the certainty of their predictions, which is often required for robust autonomous operation. Even though the softmax function allows mapping of the logits into a probability distribution, this mapping is commonly not well calibrated, leading to over-confident predictions. While the softmax function provides an intra-class measure of uncertainty, we present in this work a method to classify samples and produce a more holistic uncertainty measure. We showcase this on a simple, commonly used dataset (CIFAR10) and on our own curated dataset (SHIPS), while also showing the performance against six outlier datasets.
Our primary focus is on autonomous operation at sea and, more precisely, on the predictive uncertainty for the classification of common maritime vessels and objects for the GreenHopper vessel, see Figure <ref>. In our previous work <cit.>, we proposed an object detection network tasked with robust detection of two coarse classes, buoys and ships; given an image, a detection consisted of an object bounding box and class confidence. This work extends our efforts of creating a more reliable and robust object detection system <cit.>, by focusing on producing higher quality classification outputs, that is a more precise label (e.g. from boat to sail-boat or motorboat) and providing a usable uncertainty metric for said classification.
§.§ Contributions
The main contributions of this paper are:
(i) we propose a joint method to produce intra-class (aleatoric) and out-of-distribution (epistemic) uncertainty of our predictions. (ii) We show that our method, to the best of our knowledge, performs better than any other current work at outlier detection when only trained on ID data. (iii) We explore the relationship between the explored uncertainty measures and how they can be combined for better out-of-distribution detection. Finally, (iv) our work produces well-calibrated networks with usable uncertainty estimates.
§ RELATED WORK
Probabilistic inference has heavily inspired the work of network uncertainty estimates. Early work in this field, such as Bishop et al. <cit.>, proposed using Mixture Density Networks (MDN) to estimate predictive distributions. The MDN models approximate the conditional distribution over a scalar response as a mixture of Gaussians. The parameters of a Gaussian mixture describing the predictive distribution are estimated by training a model to output parameters maximizing the overall log-likelihood. The work of <cit.> proposed calibrating the output of neural networks by scaling the logits by a constant factor before the softmax function. It showed how a calibrated network could give a better probabilistic estimate of the likelihood of a prediction. Charpentier et al. <cit.> estimated the latent distribution of classes to detect out-of-distribution (OOD) examples. Training a model to output the parameters of a Dirichlet distribution, it was possible to estimate predictive uncertainty with a single forward pass and classification of OOD examples. The uncertainty estimate was used to improve segmentation results on brain scans. Hendrycks et al. <cit.> proposed to use the maximum softmax probability as a metric for outlier detection. The same group extended that work in <cit.> where they instead proposed using a subset of OOD datasets for the basis of OOD detection. Several methods for OOD detection have since been explored, such as energy score <cit.>, where the authors propose to use the energy of the output vector for OOD detection. Du et al. <cit.> extend this by introducing a scheme that uses a multivariate distribution of a latent layer to create “virtual outliers", which are then used for training. The work of Lee et al. introduced the Mahalanobis distance <cit.> as a measure for OOD.
A method for detecting OOD examples in neural networks, dubbed ODIN, was introduced by Liang et al. in <cit.>
which applies small perturbations to the input, with the detection threshold and perturbation magnitude calibrated on OOD samples. Hsu et al. expanded ODIN by introducing generalized-ODIN <cit.>, which decomposes confidence scoring and removes the need to calibrate on OOD data.
Further work, such as the generation and collection of outlier data to be used as regularization data for OOD detection, has been explored in several works, including <cit.>. In contrast, Grcic et al. proposed to train a generative model to synthesize outliers in the pixel space <cit.>.
Producing usable uncertainty estimates for object classification is difficult; neural networks can produce a probability estimate through the softmax function, but Guo et al. <cit.> highlight the importance of model calibration, since the mismatch between class confidence and the true positive rate is often skewed towards overconfidence. Kuleshov et al. <cit.> explore the use of model calibration to produce usable softmax probability estimates from networks. Lakshminarayanan et al. argue in <cit.> that using an ensemble of models can produce well-calibrated uncertainty estimates. The ensembles are generated by training multiple instances of a model on random permutations of a given dataset; the ensemble is treated as a uniformly-weighted mixture model, and the predictions are combined by averaging the outputs. Bayesian Neural Networks (BNN) <cit.>, on the other hand, define all model parameters as Gaussian distributions with mean μ and covariance Σ. When training a BNN, the model parameters are updated using Bayes' theorem; in practical terms, this is done by minimizing the Kullback–Leibler divergence <cit.>, and a predictive uncertainty is obtained by sampling the model weights.
§ BACKGROUND
The following section will outline the theory and methods used for our proposed work.
§.§ Epistemic and Aleatoric Uncertainty
In our work, we use the concepts of epistemic and aleatoric uncertainty <cit.> as two orthogonal uncertainty estimation metrics, inspired by Wang et al. <cit.>.
Epistemic uncertainty refers to the uncertainty arising from a lack of knowledge. In machine learning, this occurs when our parameters provide an inadequate fit, often due to a lack of data, causing our posterior over parameters to be broad. Fig. <ref> displays an example classification problem, where the middle plot captures epistemic uncertainty.
Aleatoric uncertainty, on the other hand, captures the stochasticity or variability in the data. Given a large dataset of high variance labels, the best possible prediction may be a high entropy one resulting in poor intra-class uncertainty. The example classification problem depicted in Fig. <ref> shows the uncertainty between classes.
A more practical measure of uncertainty is the combination of epistemic and aleatoric uncertainty, as seen in the final plot of Fig. <ref>. In this case, the regions that the network has not encountered are identified as uncertain regions.
§.§ Monte Carlo Dropout
Dropout <cit.> was introduced to prevent overfitting in neural networks by disabling a percentage of randomly selected neurons during training. Each neuron has some probability of being disabled, called the dropout rate.
Monte Carlo Dropout (MC Dropout), proposed by <cit.>, allows dropout to be used as a Bayesian approximation of a Gaussian process. By keeping dropout active at inference time, different subsets of the network weights are dropped in each forward pass, yielding Monte Carlo (MC) samples from the space of available models.
Each dropout configuration Θ_t corresponds to a different sample from the approximate parametric posterior distribution Q(Θ|𝒟). Thus, sampling dropout configurations amounts to MC sampling from the approximate predictive distribution of the model.
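A minimal sketch of how such MC samples can be collected in practice is given below; this is our own PyTorch illustration rather than the authors' code. Only the dropout layers are kept in training mode at inference, and T stochastic forward passes are stacked.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, T=10):
    model.eval()
    for m in model.modules():                 # keep only the dropout layers stochastic
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs                              # shape: (T, batch, num_classes)
```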
§.§ Virtual Outlier Detection
Outlier detection estimates whether a sample is in-distribution (ID) or out-of-distribution (OOD). Our approach for separating ID and OOD samples is based on the virtual outlier synthesis (VOS) method presented in <cit.>.
During training, a class-conditional multivariate Gaussian P_θ(f(𝐱) | y=c)=𝒩(μ_c, Σ_c) is fitted for each class c∈1,...,C, where μ_c is the class mean and Σ_c is the covariance in feature space. Virtual outliers 𝒱_k are then sampled from the low ϵ-likelihood region of these class-conditional distributions.
These virtual outliers are the basis of the work in <cit.>: the outliers and the ID samples are separated through a binary classification problem, whose goal is to reduce the energy for OOD samples and increase it for ID samples. The energy term is calculated as follows:
E(𝐱 ; θ)=-log∑_k=1^Kexp(f_k(𝐱 ; θ))
During training we scale the energy term as ES(𝐱 ; θ)=log(μ_c,ID/E(𝐱 ; θ)), which allows the energy to be minimized directly with binary cross-entropy. To better control the VOS loss during training, we maintain a running mean μ_c,ID of the energy E(𝐱;θ) for each class, which better regulates inconsistencies in the per-class energies. The binary cross-entropy (BCE) loss is applied to the two meta-classes: the positive (ID) samples and the negative (OOD) samples x̂ drawn from the class-conditional multivariate distributions.
min _θ𝔼_(𝐱, y) ∼𝒟[ℒ_cls+β·ℒ_uncert]
where ℒ_cls is the Cross-Entropy Loss and ℒ_uncert is the BCE loss, which is scaled by a factor β.
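For reference, the free-energy term above is a one-line operation on the classification logits; the following is our own minimal PyTorch sketch of it.

```python
import torch

def energy_score(logits):
    # E(x; theta) = -log sum_k exp(f_k(x; theta)), computed per sample
    return -torch.logsumexp(logits, dim=-1)
```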
§.§ Logit Normalization
Logit Normalization (LN) was introduced by Wei et al. <cit.> as a simple but effective method for improving outlier detection by reducing the number of misdetected outliers. It works by normalizing the predicted logits before computing the cross-entropy loss. For a model f with weights θ and an input-target pair (x,y), the loss is computed as:
ℒ_ln(f(x ; θ), y)=-loge^f_y /(τ‖ f‖)/∑_i=1^k e^f_i/(τ‖ f‖)
where τ is a temperature parameter that modulates the magnitude of the logits. The authors presented an impressive performance increase on an OOD dataset test suite commonly used to compare outlier detection works. The results are presented in Table <ref>.
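A minimal PyTorch sketch of this loss (our own illustration; the temperature value below is only a placeholder and must be tuned per dataset) is:

```python
import torch
import torch.nn.functional as F

def logitnorm_loss(logits, target, tau=0.04):        # tau is a placeholder temperature
    norm = torch.norm(logits, p=2, dim=-1, keepdim=True) + 1e-7
    return F.cross_entropy(logits / (tau * norm), target)
```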
§ METHODOLOGY
Our goal is to provide a well-calibrated model that produces both epistemic and aleatoric uncertainty estimates. We propose to sample the VOS models with MC-Dropout: the aleatoric uncertainty is then described by MC Dropout, while the epistemic uncertainty (the OOD samples) is scored by the VOS energy score. Furthermore, with MC-Dropout we should be able to handle extreme outliers better, as a set of energy scores with significant variance implies that the model is uncertain whether a sample is OOD or ID. We also propose using the MI of the MC-Dropout samples as a supporting OOD detection measure, as the MI of the OOD datasets appears to be lower.
To further manage the energy score, we introduce the Logit Normalization loss, see Eq. <ref>.
By removing the incentive to increase the logits when computing the classification loss, while still allowing it when computing the loss on the scaled energy, we aim to produce a network in which the magnitude of the energy is more reliably correlated with whether a sample is ID or OOD. Furthermore, the introduction of Logit Normalization yields calibrated networks.
§.§ Uncertainty measures
Following established practices in the literature, we use the following metrics to measure the performance of detections:
§.§.§ Aleatoric uncertainty similarity measures
Following recent work, we here present tools for quantifying the aleatoric uncertainty.
The mutual information (MI) explored by Smith et al. <cit.> in a machine learning setting compares the predictive entropy against the expected entropy.
Entropy is calculated as follows,
H(p)=-∑_k=1^K p_k log(p_k)
where p is the softmax output of the network and p̂=𝔼(p) is its expectation over the MC samples. The MI measures the disagreement between the MC samples: if the samples are very similar, the MI is low, whereas strongly differing samples yield a high MI, as shown below in Eq. <ref>
MI=H(p̂)-𝔼[∑_i=1^KH(p_i)]
where K is the number of MC samples.
It is also possible to use the Expected Kullback-Leibler Divergence (EKL):
𝔼[K L(p̂ p)]=𝔼[∑_i=1^Kp̂_ilog (p̂_i/p_i)]
This term is similar to the MI but measures the expected divergence among the possible softmax outputs.
Predictive variance is a more ad-hoc measure of uncertainty that evaluates the variance on the MC-sampled softmax outputs,
σ(p)=𝔼[(p-p̂)^2]
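These three measures can be computed directly from a stack of MC-Dropout softmax outputs. The sketch below is our own compact numpy version for a single input; as one design choice on our part, the predictive variance is averaged over classes and samples to give a single scalar.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def uncertainty_measures(probs):
    """probs: array of shape (T, num_classes) holding T MC-Dropout softmax outputs."""
    p_hat = probs.mean(axis=0)                                    # predictive mean
    mi = entropy(p_hat) - entropy(probs).mean()                   # mutual information
    ekl = np.mean(np.sum(p_hat * np.log((p_hat + 1e-12) / (probs + 1e-12)), axis=-1))
    var = np.mean((probs - p_hat) ** 2)                           # predictive variance
    return mi, ekl, var
```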
§.§.§ Epistemic metrics
The Area Under the Receiver Operating Characteristic curve (AUROC) depicts the relationship between the True Positive Rate (TPR) and the False Positive Rate (FPR). It can be interpreted as the probability that a positive sample is assigned a higher detection score than a negative example. The AUROC score is not affected by class imbalance, which is desirable.
The Area Under the Precision-Recall curve (AUPRC), on the other hand, has a common axis, the True Positive Rate (also known as Recall) but instead maps the relationship between that and the Precision (accuracy of the model). We show this metric with respect to both the ID and OOD datasets as the positive class (AUPRC_ID/OOD).
Finally, to waive any confusion in previous works about the use of FPR95, we clearly define FPR95_ID and FPR95_OOD.
On the one hand, FPR95_ID is used when the positive samples are the ID dataset, i.e., the number of false positive cases when 95% of our data is correctly classified.
On the other hand, the FPR95_OOD is used when the out-of-distribution dataset is set as the positive class. When 95% of positive samples are correctly classified, how many ID samples are within that range, this is visualized in figure <ref>.
§.§ FPR95 discussion
The FPR-N metric has been utilized in numerous contemporary studies on outlier detection. In certain instances, positive detection of outlier samples is demonstrated as FPR95_OOD <cit.>, which formally introduces FPR95 as an out-of-distribution measure. However, other studies, such as those by Liu et al. <cit.>, employ the in-distribution class as the positive category (FPR95_ID). Finally, Liang et al. <cit.> and Hsu et al. <cit.> use True Negative Rate at 95% True Positive Rate (TNR@TPR95). These studies refer to one another but interchange the two methodologies for computing FPR without explicit clarification.
For FPR95_OOD, what is being tested is the proportion of ID samples that fall below the threshold at which 95% of the outliers are found; in essence, how prone we are to false alarms. The FPR95_ID metric instead shows what proportion of the outliers still pass the threshold at which 95% of the ID data is correctly classified, i.e., how many outliers slip through while the ID samples are retained. The choice between the two is often left vague, and while the metrics are similar, they highlight different things: how well we accept in-distribution samples and how well we find outliers. The same applies to the AUPR metric, where the result differs considerably depending on which class is taken as positive.
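To make the two conventions explicit, the following is our own small sketch computing both variants from detection scores in which a higher score means "more in-distribution".

```python
import numpy as np

def fpr_at_95_id(id_scores, ood_scores):
    thr = np.percentile(id_scores, 5)          # 95% of ID samples score above thr
    return np.mean(ood_scores >= thr)          # FPR95_ID: OOD samples passing as ID

def fpr_at_95_ood(id_scores, ood_scores):
    thr = np.percentile(ood_scores, 95)        # 95% of OOD samples score below thr
    return np.mean(id_scores <= thr)           # FPR95_OOD: ID samples flagged as OOD
```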
§ EXPERIMENTAL SETUP
§.§ Backbone Network Structure
Our backbone model follows the convention set by <cit.> that is, the Wide ResNet (WRN) <cit.>; this architecture strikes a balance between performance and sensitivity. The Networks are trained for 100 epochs with a cosine annealing learning rate scheduler starting at 0.1 to produce models comparable to the VOS baseline. The Baseline WRN model achieves a 94.5% accuracy on CIFAR10 and 97.58% on our ships dataset. We follow the selected hyper-parameters to compare our method to the model trained in <cit.>. Thus the models with Logit Normalization are trained for 200 epochs with an initial learning rate of 0.1 with a step-wise scheduler reducing the learning rate by a factor of ten at 80 and 140 epochs.
Both training setups have a batch size of 128 and are optimized using SGD with a momentum of 0.9 and weight decay of 5e-4. When MC sampling for inference, we use a dropout with a 10% chance of dropping a neuron.
§.§ Datasets
In this section, we describe the datasets used for our experiments, split into an 80/20 split of training/testing samples.
§.§.§ CIFAR10
For most of our experiments, we use the CIFAR10 dataset <cit.> as our ID dataset. The CIFAR10 dataset consists of 60,000 32x32 color images of 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. We use this simple but popular dataset as a baseline to compare our work against previous works.
§.§.§ OOD Data
To test our models against outliers, we need samples that are OOD. The authors of “ODIN" <cit.> proposed using a set of 6 outlier datasets. We will also follow the same practice commonly followed in the related literature. Therefore, as in <cit.>, our six test datasets are: The Textures dataset, introduced by Cimpoi et al. <cit.>, contains describable textural images. The SVHN dataset, proposed by Netzer et al. <cit.>, comprises 32x32 color images of house numbers, with ten classes representing the digits 0-9. Zhou et al. <cit.> introduced the Places365 dataset, which comprises images for scene recognition rather than object recognition. The LSUN dataset, another scene understanding dataset, has fewer classes than Places365, and the cropped and resized versions are denoted as LSUN-C and LSUN-R, respectively. Finally, iSUN is a large-scale eye-tracking dataset selected from natural scene images in the SUN database <cit.> and was introduced by Xu et al. <cit.>.
§.§.§ SHIPS
This work includes a self-collected and annotated dataset. This dataset is a finely classified version of our in-house datasets and data from other sources. Stets et al. in <cit.> introduced our in-house maritime dataset, which consists of 51,000 images with 31,900 annotations separated into two classes; boats and buoys. Of these, a subset has been selected and more granularly labeled.
We include data from other relevant datasets, such as the Singapore Maritime Dataset <cit.>. This dataset contains mixed samples of buoys and boats collected in Singaporean waters. Furthermore, we add data from the target area collected from online sources, primarily from videos and photos from Limfjorden (our area of interest). We add this to relevant data collected from publicly available data sets such as COCO, VOC, CIFAR10, and ImageNet.
We refer to this dataset as the SHIPS dataset and make it publicly available[<https://github.com/DTU-PAS/Ships-Classification>]. The SHIPS dataset is a classification dataset consisting of seven unique classes; refer to Table <ref> for a description and example images of the dataset.
§ RESULTS
This section shows the results gathered from testing the proposed method.
Our initial results will primarily focus on our models trained on CIFAR10 to be more comparable with previous work. However, our main goal is to provide the method that has the best performance on our SHIPS dataset, which is not as curated as CIFAR10.
In Table <ref>, we present the improvement that VOS and MC-Dropout yield on OOD detection. We note that MC sampling allows us to better filter out the incorrect outlier detections that VOS produces. As we increase the number of MC-Dropout samples, the overall FPR95 decreases, with diminishing returns (we found the best performance-to-speed trade-off at 10 MC samples). Furthermore, predicting with MC samples provides a prediction bound displaying the variability of the estimates, see Figure <ref>. This is also noticeable in the energy term, where samples with large energy variations are samples with very low confidence. Ideally, we want samples with high confidence to have a high energy score and a low mutual information score. Table <ref> shows that the MI for incorrectly classified ID samples is higher than for their correctly classified counterparts, in line with our expectations. We use the pre-trained model presented in <cit.> as our baseline, but sample it with MC-Dropout by enforcing dropout at inference. This shows a slight improvement with five samples, and with ten samples we achieve a score 5.0% / 6.4% better than the baseline VOS. Our results indicate that MC sampling of a VOS model allows better filtering of the incorrect OOD detections, thus achieving better FPR95_ID/OOD and AUROC/AUPR. We see a smaller improvement on our SHIPS dataset, with a respective ID/OOD improvement of 41.0% / 10.1% going from the baseline to VOS, and of 44.2% / 20.1% with MC-Dropout.
After introducing the Log normalization loss <cit.>, we observed a significant decrease in the FPR95_ID/OOD compared to the baseline VOS, from 24.87 to 11.91 (50.3%) and from 27.71 to 12.00 (57.0%). The performance of MC-Dropout improves with increasing sample size, but with diminishing returns. However, we noticed a different outcome when applying the same testing scheme to our SHIPS dataset. The LN substantially reduced the effectiveness of OOD detection while still maintaining good accuracy and being well-calibrated. We will delve deeper into this phenomenon in the discussion.
In Table <ref> we show the relative difference in MI between correctly and incorrectly classified samples, as well as the relative difference between the ID and OOD datasets. The inclusion of Logit Normalization decreases the logit variation in both the ID and the OOD datasets by roughly 90%, i.e., by an order of magnitude; we see this to a lesser extent on the SHIPS dataset (73%). The MI difference between the ID and OOD datasets is again roughly 90% for CIFAR10, whereas for the SHIPS dataset we do not see any difference.
§ DISCUSSION
The presented methods show a clear improvement in both outlier detection (epistemic uncertainty) and intra-class confidence (aleatoric uncertainty). Some issues with the current work remain to be explored further: we introduce new parameters that must be tuned, adding to the model complexity, and MC sampling requires more computation since, in essence, the model is run multiple times. A more unified method for parameter selection and more efficient alternatives for MC sampling would need to be explored.
Although not our primary goal, we show that our method can achieve, to the best of our knowledge, results not previously attained when no OOD data is available during training. The relationship between the MI of OOD and ID samples when LN is applied remains to be properly explored; however, applying the LN loss on the CIFAR10 dataset reduces the MI dramatically on the OOD datasets, see Table <ref>. The combination of the scaled MI and the energy score improved the performance on low-energy ID samples. Furthermore, by extending our method to include ODIN, we further improve our FPR95 results: 11.91 → 11.65 for ID and 12.00 → 10.67 for OOD.
Our findings indicate a significant disparity between the performance of models trained on CIFAR10 and SHIPS datasets. We have observed that extending OOD models with MC-Dropout is a promising approach, and we aim to investigate its potential further. Our study highlights the crucial role that dataset characteristics play in determining the effectiveness of OOD detection methods. For instance, the SHIPS dataset is relatively small and skewed towards certain types of vessels, with a limited and high entropy representation of other objects, such as buoys and humans engaged in water-based activities (e.g., kayaking, rowing, swimming). Further investigation is needed to address these challenges, how to apply LN on poorly curated datasets, and to develop robust and reliable OOD detection methods that can perform well in diverse real-world scenarios.
§ CONCLUSION
This work has explored uncertainty estimation for the classification of maritime objects. We propose a method providing a holistic and usable uncertainty measure, and we have presented an experimental setup producing well-calibrated models with usable aleatoric and epistemic uncertainty measures. Furthermore, we show that on CIFAR10 our method, to the best of our knowledge, performs 8% better than the previous state-of-the-art models trained only on ID data and 77% better than the vanilla WRN model. We show the varying performance of the applied methods on a well-curated dataset (CIFAR10) and on a more applied dataset, SHIPS. Our proposed method performs well on both the CIFAR10 and SHIPS datasets, performing 5% better than the baseline VOS on both datasets and 55% / 44% better, respectively, than the vanilla WRN models.
|
http://arxiv.org/abs/2307.00191v1
|
20230701015138
|
Conformal Gradient Index Phononic Crystal Lenses: Theory and Application on Non-planar Structures
|
[
"Hrishikesh Danawe",
"Serife Tol"
] |
physics.app-ph
|
[
"physics.app-ph"
] |
Conformal Gradient Index Phononic Crystal Lenses: Theory and Application on Non-planar Structures
Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI USA 48109
Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI USA 48109
[email protected]
The gradient index phononic crystal (GRIN-PC) lens concept has been proven very effective for focusing elastic waves at a desired location. Although well-studied for planar structures, GRIN-PC lenses for elastic wave focusing in curved structures are scarce and lack the theoretical framework for studying the wave focusing mechanism. In this work, we develop conformal GRIN-PC theory to analyze wave focusing in non-planar geometries and present a design framework for conformal GRIN-PC lenses to be implemented over curved structures. The proposed conformal GRIN-PC theory studies the wave propagation in a curved GRIN-PC lens using ray trajectories that meet at the focal spot of the lens. We apply the conformal GRIN-PC theory to accurately predict the focal region of the GRIN-PC lens implemented over a steel pipe and validate the results with numerical simulations. Further, the design framework is utilized to design a 3D-printed conical GRIN-PC lens. The elastic wave focusing in the conical lens is demonstrated using numerical simulations and is further validated with experiments.
Keywords: GRIN-PC lenses, wave focusing, curved structures, GRIN theory
§ INTRODUCTION
The gradient index (GRIN) lens concept is well studied in optics literature <cit.>, as it enables the creation of flat optical lenses overcoming the limitations of conventional spherical lenses in focusing light waves. The GRIN medium is composed of layered material of gradually varying refractive indices so that the light rays bend from the region of low refractive index towards the region of high refractive index. In the GRIN lens, the refractive indices of different layers are tailored to obtain a refractive index profile such that it results in the focusing of an incident beam of light at a desired location. According to GRIN optics theory <cit.>, a hyperbolic secant (HS) profile results in aberration-free focusing of meridional rays (i.e., rays propagating in planes that include the optical axis) for which the governing equations can be solved analytically to predict the focal spot. Thus, the HS profile is exceptionally used for designing GRIN lenses.
With the emergence of phononic crystals (PCs), the concept of the GRIN lens was extended to acoustic or elastic waves using a layered structure called gradient index phononic crystal (GRIN-PC) lens. Phononic crystals are artificially engineered structures with spatially periodic structural features called scatterers that enable exceptional wave control due to Bragg scattering. By tailoring the geometric or material properties of the scatterer, the wave properties (such as wave speed) of the PCs can be altered to achieve unprecedented wave phenomena. In GRIN-PC lenses, the properties of scatterers in different layers are engineered such that the effective refractive index across the layers follows the HS profile transverse to the wave propagation direction, thus focusing an incident elastic or acoustic wave at the focal spot of the lens. The first GRIN-PC lens was designed using the GRIN optics theory to focus bulk acoustic waves in a planar 2D PC made of epoxy medium embedded with cylindrical rods as scatterers <cit.>. The gradient HS profile of the effective refractive index was achieved by changing the diameter or material of the cylindrical rods in different layers or rows of the GRIN-PC lens. The follow-up studies on GRIN-PC lenses for elastic waves mainly considered focusing of symmetric (S0) and antisymmetric (A0) Lamb waves in plates at different length scales <cit.>. However, in most cases, discrepancies were observed between the numerical and theoretical focal distances of GRIN-PC lenses. The reason is that phononic crystals' anisotropic nature is not captured in the optical GRIN theory, which assumes PC as an isotropic medium of effective refractive index. To accurately predict the focal region of planar GRIN-PC lenses, Zhao et al. <cit.> proposed an analytical ray tracing method utilizing the equal frequency contours (EFCs) of PC to locally determine the wave vector and group velocity in every row of the GRIN-PC lens. So, even if the GRIN-PC lenses are designed based on effective refractive indices to fit the HS profile, the wave-focusing mechanism can only be understood with EFCs. Recently, a ray theory was also proposed for wave propagation in more general, spatially graded, planar elastic metamaterials assuming local periodicity varying slowly in space compared to the unit cell length scale <cit.>. It also utilizes the locally computed wave vectors and group velocity vectors to trace the ray emanating from a point source.
The GRIN-PC lenses are found very effective for the localization of wave energy benefiting many applications such as energy harvesting <cit.>. Although well studied for planar structures, their application on curved structures was not yet explored before we demonstrated the first conformal GRIN-PC lens for focusing Lamb waves in pipe-like structures <cit.>. The cylindrical GRIN-PC lens was made of steel stubs attached to the outer surface of the steel pipe, and the effective HS refractive index profile was achieved by tailoring the stub heights around the circumference of the pipe. The lens was found very effective for multi-mode broadband wave focusing of ultrasonic guided waves in pipes <cit.>. However, similar to planar GRIN-PC lenses, we found discrepancies in focal distances determined from optical GRIN theory and numerical simulations because of the anisotropy of PCs. Thus, there is a need for the development of conformal GRIN-PC lens theory to understand the focusing mechanism in non-planar structures.
In this work, we propose a conformal GRIN-PC theory for tracing ray trajectories inside a curved GRIN-PC lens. We adopt the analytical approach of calculating the beam path inside a planar GRIN-PC lens presented by Zhao et al. <cit.> and apply it to a more general non-planar geometry via coordinate transformation. The theory is applied to accurately determine the focal region of the cylindrical GRIN-PC lens previously implemented over steel pipe <cit.>. Using the proposed theory, we further design a 3D-printed conical GRIN-PC lens and demonstrate multi-mode elastic wave focusing of guided Lamb waves in conical structures commonly found in civil, mechanical, and aerospace industries. The 3D-printed GRIN-PC lens is numerically and experimentally tested for elastic wave focusing. The presented design framework is crucial to extend the concept of GRIN-PC lenses for elastic wave focusing beyond planar structures that can benefit many applications including nondestructive testing, sensing, energy harvesting, etc.
§ CONFORMAL GRIN-PC THEORY
A GRIN lens with a hyperbolic secant distribution of refractive index focuses a normally incident beam at a distance of π/2α according to the optical GRIN theory, which is conventionally adopted for acoustic/elastic waves. The optical (a.k.a. conventional) GRIN theory assumes perfectly circular EFCs, which means the wavevector magnitude and group speed are the same in all directions. This results in focusing the normally incident beam on the lens at a single location. However, in general, the EFCs of phononic crystals are not perfectly circular. Thus, tracing the ray trajectory in the GRIN-PC lens requires computing the EFCs in each row to account for the anisotropy. Zhao et al. <cit.> presented a framework for tracing a ray path in a planar GRIN-PC lens by locally computing the wave vectors and group velocity vectors. The ray path across the neighboring unit cell layers was determined using a combination of Snell's law and the Poynting vector (i.e., the group velocity vector). We utilize a similar approach for tracing the ray trajectories in a non-planar GRIN-PC lens with a coordinate transformation from Cartesian to cylindrical coordinates. The wave vectors and group velocity vectors are determined from the EFCs of curved unit cells. To demonstrate the computation of ray trajectories in non-planar structures, we take the example of a cylindrical GRIN-PC lens implemented over a steel pipe <cit.>, as shown in Fig. <ref>(a). In the cylindrical coordinate system, the wave vector has two non-zero components: k_x along the pipe axis and k_ϕ along the pipe circumference. The wavevector components are different in different rows of the GRIN-PC lens due to the gradient distribution of stub heights. Thus, the wavevector magnitude is a function of the angular distance ϕ from the centerline and the wave propagation direction θ measured with respect to the x-direction, as shown in the EFC plots in Fig. <ref>(b). The wavevector components are thus given as:
k_x=k(ϕ,θ) cos(θ), k_ϕ=k(ϕ,θ) sin(θ)
Now, because of the anisotropy the group velocity vector defined as v_g=∇_kω(k) is at an angle φ to the wave vector such that the slope of ray trajectory in curved GRIN-PC lens is given by:
tan(φ)=-∂ k_x/∂θ(∂ k_ϕ/∂θ)^-1
If we consider a beam normally incident on the GRIN-PC lens, Snell's law states that the x-component of the wavevector, k_x, is conserved across the interface of two consecutive rows of the GRIN-PC lens. Thus, the wavevector tilts gradually from a horizontal position (θ=0) at the beginning of the lens to attain the maximum angle with respect to the x-axis at the centerline. The ray tracing starts at the beginning of each row (x=0a, ϕ=na/R), where n∈ [1,6] is the row number, and k(ϕ=na/R,θ=0)=k_x is the initial wavevector, which is conserved due to Snell's law. Now, to predict the ray path, we move closer to the centerline in incremental steps of angular distance dϕ and search for the wave vector at ϕ=na/R-mdϕ, where m is the step number, such that its axial component k(ϕ=na/R-mdϕ,θ)cos(θ) equals the conserved k_x, i.e., we find the unknown angle θ with the help of the EFCs. Next, we determine the slope of the ray trajectory, tan(φ), using equation <ref>. The axial location x for the ray at angular position ϕ=na/R-mdϕ is then determined using the following iterative relation:
x(ϕ=na/R-mdϕ)=x(ϕ=na/R-(m-1)dϕ)+Rdϕ/tan(φ)
Note that even if the GRIN-PC lens is divided into discrete rows with gradually varying stub heights, the incremental step dϕ is chosen much smaller than the angular stub spacing to obtain converging results. Since the EFCs are only calculated for the unit cells in discrete rows, the ray trajectory calculation at locations in between two consecutive unit cell rows is done using interpolated EFCs by assuming a continuous variation of stub height.
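The search-and-step procedure above can be condensed into a short routine. The following is a minimal sketch, not the authors' implementation: it assumes a user-supplied interpolated EFC function k_efc(phi, theta) returning the local wavevector magnitude and a function ray_slope(phi, theta) evaluating tan(φ); the root-finding choice, step size, and bracketing angle are likewise illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

def trace_ray(n, a, R, k_efc, ray_slope, dphi=1e-4, theta_max=0.49 * np.pi):
    """Trace the ray entering the lens at row n (angular position phi0 = n*a/R).

    k_efc(phi, theta): interpolated wavevector magnitude from the unit-cell EFCs.
    ray_slope(phi, theta): tan(phi_g), the slope of the ray trajectory.
    Returns arrays of angular positions phi and axial positions x along the ray.
    """
    phi0 = n * a / R
    kx = k_efc(phi0, 0.0)              # axial wavevector component, conserved by Snell's law
    phis, xs = [phi0], [0.0]
    phi, x = phi0, 0.0
    while phi - dphi > 0.0:            # march toward the lens centerline (phi = 0)
        phi -= dphi
        # find theta such that k(phi, theta) * cos(theta) equals the conserved k_x
        theta = brentq(lambda th: k_efc(phi, th) * np.cos(th) - kx, 0.0, theta_max)
        x += R * dphi / ray_slope(phi, theta)   # iterative relation for the axial position
        phis.append(phi)
        xs.append(x)
    return np.array(phis), np.array(xs)

Repeating this for every row n and reading off the axial positions where the rays reach ϕ ≈ 0 gives the theoretical focal region as the spread of the centerline crossings.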
§ RAY TRAJECTORIES IN THE CURVED GRIN-PC LENS
A curved GRIN-PC lens integrated with steel pipe is depicted in Fig. <ref>(a) <cit.>. It consists of steel pipe with outer radius R=57.15 mm with externally attached steel stubs of constant diameter d_s=10 mm and varying height across the circumference, as shown in Fig. <ref>(b). The stubs are uniformly spaced in the axial and circumferential direction such that the inter-stub distance equals the unit cell length. The unit cell mode shapes are depicted in the inset for three fundamental pipe modes, L(0,2), L(0,1), and T(0,1). The unit cell length is a=20 mm and the pipe wall thickness is t_p=6 mm. The stub heights are tailored to obtain a hyperbolic secant (HS) profile of the refractive index. The stub height is maximum at the centerline, S_0, and decreases symmetrically on either side of the centerline up to the lens edges, S_±6. The height profile is obtained as (4.5000, 4.4646, 4.3188, 4.0668, 3.6588, 2.9682, 1.9158) mm at locations S_0 to S_±6, respectively. In order to trace ray trajectories, we define a sector angle ϕ measured from the centerline such that ϕ R/a is 1 at location S_1, 2 at location S_2, and so on, in the cylindrical coordinate system.
An HS profile is well studied in the GRIN optics literature for aberration-free focusing, as parallel rays meet at a single point after being gradually refracted through a GRIN medium. The hyperbolic secant profile for the pipe is defined as n=n_0 sech(αϕ R), where n_0 is the refractive index at the lens centerline and α is the gradient coefficient. The refractive index distribution of the GRIN-PC lens is obtained from the dispersion variation of the unit cell for different stub heights. The refractive index distribution for the L(0,2) mode at 30 kHz is fitted with the HS profile as shown in Fig. <ref>(a), for which the ray trajectories meet at a distance of π/2α according to the conventional GRIN theory, as shown in Fig. <ref>(b). The rays bend from the region of low refractive index at the edges towards the region of high refractive index at the centerline. We have previously studied the focusing effect of curved GRIN-PC lenses via time-domain numerical simulation in COMSOL Multiphysics <cit.>. To determine the focal region, we obtained RMS velocity plots along the lens centerline normalized with the RMS velocity in the baseline (i.e., the pipe without the GRIN-PC lens), as depicted in Fig. <ref>(c) for the L(0,2) mode at 30 kHz. The normalized velocity amplitude is close to 1 before the lens starts and increases along the pipe length to attain its maximum value at the first focal point. The velocity amplitude decreases past the first focal point and peaks again at the second focal point because of refocusing. The focal region is identified with the maximum velocity amplitude and compared with the focal point obtained using optical GRIN theory. The first focal point predicted using optical GRIN theory, FP=32.3876a, lies beyond the highest intensity point in the numerical simulations. As previously explained in the introduction, the discrepancy arises because phononic crystals are not generally isotropic. The anisotropy of PCs is captured with equal frequency contours (EFCs) obtained from unit cell simulations. The EFCs of the L(0,2) mode at 30 kHz are shown in Fig. <ref>(d). The equal frequency contours are not perfect circles, meaning that the wave vectors and wave speeds are different along different directions. Hence, a single value of the refractive index in the HS profile, obtained by averaging it over different directions, cannot predict the focal point accurately using the conventional GRIN theory. We implement a conformal GRIN-PC theory for accurately predicting the focal region of GRIN-PC lenses by utilizing EFCs. The conformal GRIN-PC theory utilizes directional phase speeds and group velocities to predict the path a ray would take in a GRIN-PC lens. The directional phase and group velocities are obtained from the EFCs at every single location on the ray trajectory. The ray trajectories calculated using conformal GRIN-PC theory for the L(0,2) mode at 30 kHz are depicted in Fig. <ref>(e). The ray trajectories do not meet at a single location as previously predicted by the conventional GRIN theory. The focal region of the GRIN-PC lens is determined from the intersection of ray trajectories at the lens centerline (ϕ=0). The focal region predicted by the conformal GRIN-PC theory matches exactly the highest intensity region at the first focal point in the numerical simulations, as shown in Fig. <ref>(f).
§ GRIN-PC LENS IMPLEMENTATION IN CONICAL STRUCTURES
To demonstrate the applicability and effectiveness of the proposed theory for any curved structure, we consider a conical shell of uniform wall thickness. Conical shells are commonly found in civil, mechanical, and aerospace industries, which require structural health monitoring and can also serve as a platform for enhanced energy harvesting of ambient structural vibrations via guided wave focusing. We chose to design a conformal GRIN-PC lens for the conical structure similar to the steel pipe with externally attached cylindrical stubs of varying heights on the outer surface. The GRIN-PC lens integrated with a conical structure is depicted in Fig. <ref>. The conical GRIN-PC lens comprises stubbed unit cells, representing a phononic crystal pipe of an infinite extent. The wave propagation characteristics of the phononic crystal pipe are obtained by applying periodic Floquet boundary conditions at the unit cell sides and solving for the eigenfrequency solutions by sweeping wave vectors in the first Brillouin zone. We compute the dispersion curves of the unit cell in COMSOL Multiphysics using solid mechanics physics and eigenfrequency study. The unit cell is made of VeroClear with material properties: ρ=1170 kg/m^3, E=2.55 GPa, ν=0.3. The Floquet periodicity boundary conditions in COMSOL are as follows:
u_dst=u_src· e^i k· (r_dst-r_src)
where, u_src and u_dst are displacement vectors at the source and destination boundaries of the unit cell, respectively. Similarly, r_src and r_dst are position vectors at the source and destination boundaries of the unit cell, respectively, and k is the wave vector. The cone, along with the GRIN-PC lens, is made of VeroClear, which is a 3D printable polymer available with PolyJet printers. The prototype cone is 225 mm long with wall thickness t_p=3 mm, and the internal diameter of the cone varies from D_1=75 mm at one end to D_2=25 mm at the other end. The axial and angular spacing between the stubs of the GRIN-PC lens is kept constant throughout the lens to respect the geometry of the cone for guided wave propagation. The axial and angular spacing between the neighboring stubs is 9 mm and 18^∘, respectively. The GRIN-PC lens is 22 unit cells long in the axial direction and has 13 unit cell rows along its circumference. The axial length of the unit cell equals a=9 mm, and the stub diameter equals d_s=5 mm. The circumferential length of the unit cell varies along the cone axis due to varying diameter and constant angular spacing of 18^∘. The stub heights are tailored in the circumferential direction to realize hyperbolic secant (HS) refractive index distribution. The stub height is maximum at the centerline unit cell row S_0 and minimum at the edge rows of the lens S_±6. The stub height profile is kept constant along the axis of the cone. We compute dispersion curves of unit cells for different stub heights as previously done for steel pipe. However, since the curvature of the cone varies along its axis, the angular length of the unit cell is different at every location along the axis. Hence, the dispersion curves are computed not only for different stub heights but also for different axial locations.
The dispersion curves of the unit cell at an axial distance of x=20 mm (x=0 mm is the left end of the cone with internal diameter D_1=75 mm) are shown in Fig. <ref>(a) for stub heights ranging from 0.25 mm to 2.5 mm. The dispersion curves represent the three fundamental pipe modes (L(0,2), L(0,1), and T(0,1)) of a pipe with internal diameter D=70.77 mm and wall thickness t_p=3 mm. Note that the dispersion curves are locally calculated considering that the unit cell represents a phononic crystal pipe of diameter equaling the cone diameter at that location. As expected, the dispersion curves shift to lower frequencies with increasing stub height. We found that the torsional T(0,1) mode dispersion curve does not change with the pipe diameter for the plain pipe, and thus it propagates with the same wave speed throughout the cone for a given excitation frequency. However, the wavelengths of the longitudinal L(0,1) and L(0,2) modes are affected by the diameter, and thus their propagation speed changes along the cone axis. Thus, for simplicity, we chose the T(0,1) mode for designing the GRIN-PC lens, whose dispersion variation is shown in Fig. <ref>(b). The design frequency is chosen just below the Bragg bandgap for the T(0,1) mode at 34 kHz, corresponding to the maximum stub height of 2.5 mm. From the dispersion variation, we obtained the refractive index n=v/v_Γ X as a function of stub height, where v is the phase velocity of the T(0,1) mode in the plain pipe and v_Γ X is the phase velocity of the T(0,1) mode in the phononic crystal pipe. The refractive index as a function of stub height is plotted in Fig. <ref>(c). The refractive index increases with increasing stub height, indicating that the wave speed is slower for higher stub heights. Thus, the wave travels faster at the edges of the lens, where the stub height is minimum, and slower at the centerline, where the stub height is maximum. The stub height profile around the circumference of the cone is depicted in Fig. <ref>(e), which is obtained to follow the HS profile of the refractive index shown in Fig. <ref>(d). The gradient coefficient equals α=0.0953/a for the HS distribution of the T(0,1) mode at x=20 mm and f_design=34 kHz, for which the first focal point is predicted at 16.5a according to optical GRIN theory. Now, even if the dispersion curves of the plain pipe remain the same for different diameters, the unit cell with stubs shows variation in the dispersion curves of the T(0,1) mode as the diameter varies. Thus, for the same stub height profile, the refractive index distribution changes along the cone axis, as depicted in Fig. <ref>(e). Therefore, conventional GRIN theory is insufficient to predict the focal point of the GRIN-PC lens for a conical structure. Hence, we numerically investigate the focusing of the three pipe modes using the designed GRIN-PC lens for the cone in the next section.
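As a complementary check, the conventional GRIN prediction can be reproduced by fitting the hyperbolic secant profile to the per-row refractive indices extracted from the unit-cell dispersion. The sketch below is illustrative only (the function names and the initial guess are assumptions); it is not the design code used for the lens.

import numpy as np
from scipy.optimize import curve_fit

def hs_profile(y, n0, alpha):
    # n(y) = n0 * sech(alpha * y); y is the distance from the centerline in units of the cell length a.
    return n0 / np.cosh(alpha * y)

def fit_hs_and_focal_distance(row_positions, n_rows):
    """Fit the hyperbolic secant profile to the per-row refractive indices (n = v / v_GammaX)
    and return (n0, alpha, focal_distance), where focal_distance = pi / (2*alpha) is the
    conventional GRIN prediction, expressed in the same units as row_positions."""
    (n0, alpha), _ = curve_fit(hs_profile, row_positions, n_rows,
                               p0=(float(np.max(n_rows)), 0.1))
    return n0, alpha, np.pi / (2.0 * alpha)

With the gradient coefficient α = 0.0953/a quoted above, π/(2α) ≈ 16.5a, which matches the stated focal-point prediction for the T(0,1) mode at the design frequency of 34 kHz.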
§.§ Numerical Results
The conical GRIN-PC lens design was numerically tested for multimode wave focusing through time-domain numerical simulations. The simulation model consists of a cone integrated with a GRIN-PC lens made of VeroClear, as shown in Fig. <ref>. Using solid mechanics physics, the time-domain numerical simulations of the conical GRIN-PC lens were run in COMSOL Multiphysics. The CAD model of the cone integrated with the GRIN-PC lens (see Fig. <ref>) was built in Solidworks and imported into COMSOL for finite element simulations. The material of the cone is VeroClear, which was modeled as a linear elastic solid. Low-reflecting boundary conditions were applied at the two ends of the cone to avoid reflected waves interfering in the lens region. A 7-cycle sine burst excitation was applied at the left edge of the cone, where the inner diameter is D_1=75 mm. The edge load was applied in the tangential, radial, and axial directions for exciting the T(0,1), L(0,1), and L(0,2) modes, respectively. The finite element model has tetrahedral mesh elements with a maximum element size of λ/20, where λ is the wavelength of the excited mode. The time-dependent study was run with sufficiently small time steps to obtain a converging solution. The RMS velocity was extracted at the centerline unit cell row S_0 and compared with that of the cone without the GRIN-PC lens, as shown in Fig. <ref>.
The plane wave excited at the left end of the cone starts to bend towards the lens centerline as it propagates through the lens. This is because the refractive index is highest at the lens centerline and gradually decreases towards the lens edges. Thus, the plane wave travels faster at the edges and slower at the centerline resulting in the bending of the wavefront from the region of low refractive index to the region of high refractive index. Figure <ref> shows RMS velocity at the lens centerline compared to the baseline cone for three different pipe modes at the design frequency of 34 kHz. Note that the lens starts at x=0a and ends at x=22a along the cone axis. The focusing results are also obtained at a frequency of 30 kHz away from the design frequency to demonstrate the broadband operation of the lens. Note that the velocity amplitude increases gradually in the baseline cone along its axis because of decreasing circumference. With the GRIN-PC lens, the velocity curve peaks above the baseline velocity curve at the focal point of the lens because of the focusing effect. The focal point location is different for different modes at different frequencies. At 30 kHz, all the modes focus toward the end of the lens. The focal points at the design frequency of 34 kHz are closer than at 30 kHz. The maximum amplification of velocity amplitude for T(0,1) mode at the design frequency of 34 kHz is obtained at a distance equal to 12a, which is shorter than the predicted focal length of 16.5a. As stated earlier, this is partly attributed to the changing refractive index distribution along the cone axis, as shown in Fig. <ref>(f), and partly because of the anisotropy of the phononic crystal, which is not accounted for in the conventional GRIN theory. Nonetheless, broadband multimode focusing is numerically demonstrated with maximum amplification factors of 1.59, 2.19, and 1.56 at 30 kHz and 1.51, 1.20, 1.33 at 34 kHz for T(0,1), L(0,1), and L(0,2) modes, respectively.
§.§ Experimental Validation
We further validate the wave focusing ability of the conical GRIN-PC lens through laboratory experiments using a Polytec laser vibrometer and a data acquisition system. The experimental setup is depicted in Fig. <ref>. The cone integrated with the GRIN-PC lens was 3D-printed from VeroClear material using a Stratasys J750 PolyJet 3D printer; VeroClear is a rigid transparent polymer that simulates PMMA (polymethyl methacrylate). The cone is 225 mm long and has other dimensions similar to the numerical model, as depicted in Fig. <ref>. The cone was supported at both ends using soft supports placed on the vibration isolation table. An absorbing clay was applied at both ends of the cone to reduce wave reflections. An array of piezoelectric actuator disks with a diameter of 5 mm and a thickness of 0.4 mm was glued on the cone surface around its circumference with a layer of copper tape in between. The copper tape provides electrical contact to the bottom electrodes of the actuators, whereas the free top surface serves as the other electrode. The actuators vibrate radially to excite longitudinal plane waves in the cone right before the lens starts. The actuator array was excited using a signal generator connected to a power amplifier. The out-of-plane velocity signal was measured on the cone surface at the end of the GRIN-PC lens, where focusing of the longitudinal modes is expected at 30 kHz from numerical simulations. The time-domain velocity signal measured on the cone surface was stored in the Polytec data acquisition center. The experiments were conducted using a setup consisting of a vibration isolation table, a Polytec PSV 500 laser vibrometer, a Keysight 33210A function generator, a TReK PZD350A amplifier, and a data acquisition system, as shown in Fig. <ref>. The function generator generates a 5-cycle sine burst signal with a peak-to-peak amplitude of 1 V and a signal duration of 800 μs. The time delay between two consecutive bursts was set to 50 ms. The burst signal was amplified by the TReK amplifier before supplying it to the piezoelectric actuator disks from Steminc Inc (PZT-4, radial mode vibration). The laser vibrometer was set to measure the out-of-plane velocity on the cone surface with a sampling frequency of 0.625 MHz, averaged over 10 acquisitions. The vibrometer was in sync with the signal generator, and the measured velocity data was stored in the data acquisition center.
The velocity signals captured using the laser vibrometer for the baseline cone depicted in Fig. <ref>(a) and the cone integrated with the GRIN-PC lens depicted in Fig. <ref>(b) are plotted in Fig. <ref>(c) and (d) at frequencies of 30 and 34 kHz, respectively. The piezoelectric actuator array excites only the longitudinal modes and the laser only measures out-of-plane velocity; thus, the waveform captured using the vibrometer corresponds to the L(0,1) mode. The velocity signal is amplified at both excitation frequencies due to the focusing effect of the GRIN lens. The maximum velocity amplitude with the GRIN-PC lens is about two times higher than that of the baseline cone at 34 kHz. The amplification at 30 kHz is not significant at the measured location. Note that the numerical results predict about a two-fold amplification of the velocity amplitude at 30 kHz, which is instead observed at 34 kHz in experiments. This shift in frequency might be because of the uncertainty in the material properties of 3D-printed polymers, as documented in the literature <cit.>. Several aspects, such as UV light exposure while printing, affect the properties of 3D-printed materials. In fact, the material properties vary over a large range and are strongly affected by the printing process, as reported in previous studies. Stratasys has specified the range of Young's modulus for the VeroClear material, which is between 2 GPa and 3 GPa. However, the values reported in some studies for similar 3D-printed polymers go beyond this range <cit.>. In the numerical simulations, Young's modulus of VeroClear is chosen as 2.55 GPa, which might differ from the actual material properties of the 3D-printed samples in the experiments. Also, the 3D-printed samples of the cone are printed layer by layer, which results in anisotropic behavior that cannot be accounted for in the numerical simulations. Despite these uncertainties, the GRIN-PC lens focuses the wave energy as expected from the gradient refractive index distribution.
§ DISCUSSION AND CONCLUSION
In this work, we present the conformal GRIN-PC theory based on the ray trajectories in curved GRIN-PC lenses and demonstrate its validity for accurately predicting the focal region of a GRIN-PC lens integrated over a steel pipe. The ray trajectories represent guided wave propagation inside a GRIN-PC lens due to the gradient distribution of the refractive index that results in the focusing of elastic waves. On the other hand, the optical GRIN theory predicts that the ray trajectories in the GRIN-PC lens with hyperbolic secant refractive index distribution meet at a single location without accounting for the crystal anisotropy of phononic crystals. Thus, the predicted focal spot does not agree well with the numerical simulations. The non-planar GRIN theory proposed in this paper utilizes the EFCs of phononic crystal to capture the crystal anisotropy and predict the ray path inside a GRIN-PC lens. The ray trajectories obtained using conformal GRIN-PC theory intersect at multiple locations, and the theoretical focal regions of the GRIN-PC lens are determined by marking the intersection of ray trajectories at the centerline of the lens. The theoretical focal region determined using conformal GRIN-PC theory for L(0,2) pipe mode is in excellent agreement with the focal region obtained in numerical simulations. Thus, the non-planar GRIN theory accurately predicts the entire focal region of a curved GRIN-PC lens overcoming the limitations of optical GRIN theory. Next, to demonstrate the effectiveness of the proposed theory, we present a 3D-printed conical GRIN-PC lens design for multimode focusing of guided elastic waves. For guided wave propagation along the cone axis, the GRIN-PC lens design for conical structure requires uniform angular spacing between the neighboring unit cell rows to respect the cone geometry. This results in varying arc lengths of the curved unit cells as the diameter changes along the cone axis. Thus, the dispersion curves are different for every unit cell, even in the same row of the GRIN-PC lens. As a result, the refractive index profile changes along the cone axis, because of which the conventional GRIN theory fails to predict the focal spot of a conical GRIN-PC lens. We successfully demonstrated the wave-focusing ability of the designed GRIN-PC lens for the three fundamental pipe modes at multiple frequencies through numerical simulations. The non-planar GRIN-PC theory based on the ray tracing framework enables new lens designs conforming or integrated with non-planar geometries and predicts the wave behavior and focal spots in an accurate manner. Thus, it expands the applicability of wave focusing phenomena in a myriad of real-life structures in mechanical, aerospace, and civil engineering applications.
§ DATA AVAILABILITY STATEMENT
Data is available on reasonable request from the corresponding author.
§ ACKNOWLEDGEMENTS
This work was supported in part by the National Science Foundation [grant number CMMI-1914583].
§ AUTHOR CONTRIBUTION STATEMENT
Danawe: Conceptualization. Methodology. Software. Experiments. Validation. Writing- Original draft preparation. Tol: Conceptualization. Supervision. Writing- Reviewing and Editing.
§ REFERENCES
[1] D. T. Moore, Gradient-index optics: a review, Appl. Opt. 19 (1980) 1035–1038.
[2] H. Nishi, H. Ichikawa, M. Toyama, I. Kitano, Gradient-index objective lens for the compact disk system, Appl. Opt. 25 (1986) 3340–3344.
[3] S. Ohmi, H. Sakai, Y. Asahara, S. Nakayama, Y. Yoneda, T. Izumitani, Gradient-index rod lens made by a double ion-exchange process, Appl. Opt. 27 (1988) 496–499.
[4] Y. Koike, A. Kanemitsu, Y. Shioda, E. Nihei, Y. Ohtsuka, Spherical gradient-index polymer lens with low spherical aberration, Appl. Opt. 33 (1994) 3394–3400.
[5] A. Gómez-Varela, M. Flores-Arias, C. Bao-Varela, C. Gómez-Reino, Focusing, collimation and beam shaping by active GRIN rod lenses: Theory and simulation, Optics and Lasers in Engineering 50 (2012) 1706–1715.
[6] H. Huang, X. Mao, S.-C. S. Lin, B. Kiraly, Y. Huang, T. J. Huang, Tunable two-dimensional liquid gradient refractive index (L-GRIN) lens for variable light focusing, Lab Chip 10 (2010) 2387–2393.
[7] C. Gómez-Reino, M. V. Perez, C. Bao, Gradient-index Optics: Fundamentals and Applications, Springer, Berlin, 2002.
[8] S.-C. S. Lin, T. J. Huang, J.-H. Sun, T.-T. Wu, Gradient-index phononic crystals, Phys. Rev. B 79 (2009) 094302.
[9] T.-T. Wu, Y.-T. Chen, J.-H. Sun, S.-C. Lin, T. Huang, Focusing of the lowest antisymmetric Lamb wave in a gradient-index phononic crystal plate, Appl. Phys. Lett. 98 (2011).
[10] J. Zhao, R. Marchal, B. Bonello, O. Boyko, Efficient focalization of antisymmetric Lamb waves in gradient-index phononic crystal plates, Appl. Phys. Lett. 101 (2012) 261905.
[11] M.-J. Chiou, Y.-C. Lin, T. Ono, M. Esashi, S.-L. Yeh, T.-T. Wu, Focusing and waveguiding of Lamb waves in micro-fabricated piezoelectric phononic plates, Ultrasonics 54 (2014) 1984–1990.
[12] Y. Jin, D. Torrent, Y. Pennec, Y. Pan, B. Djafari-Rouhani, Simultaneous control of the S0 and A0 Lamb modes by graded phononic crystal plates, Journal of Applied Physics 117 (2015) 244904.
[13] S. Tol, F. L. Degertekin, A. Erturk, Gradient-index phononic crystal lens-based enhancement of elastic wave energy harvesting, Appl. Phys. Lett. 109 (2016) 063902.
[14] S. Tol, F. Degertekin, A. Erturk, 3D-printed phononic crystal lens for elastic wave focusing and energy harvesting, Additive Manufacturing 29 (2019) 100780.
[15] J. Zhao, B. Bonello, R. Marchal, O. Boyko, Beam path and focusing of flexural Lamb waves within phononic crystal-based acoustic lenses, New Journal of Physics 16 (2014) 063031.
[16] C. Dorn, D. M. Kochmann, Ray theory for elastic wave propagation in graded metamaterials, Journal of the Mechanics and Physics of Solids 168 (2022) 105049.
[17] H. Danawe, G. Okudan, D. Ozevin, S. Tol, Conformal gradient-index phononic crystal lens for ultrasonic wave focusing in pipe-like structures, Appl. Phys. Lett. 117 (2020) 021906.
[18] H. Danawe, G. Okudan, D. Ozevin, S. Tol, Metamaterial-based amplification of multi-mode ultrasonic guided waves toward improved damage detection in pipelines, Proceedings of the 27th SPIE Smart Structures/NDE 11376 (2020) 160–166.
[19] H. G. Danawe, D. Ozevin, S. Tol, Numerical investigation of multi-mode guided wave focusing in pipe-like structures using gradient index metamaterial lens design, International Design Engineering Technical Conferences and Computers and Information in Engineering Conference 7 (2020) V007T07A003.
[20] M. Barclift, C. Williams, Examining variability in the mechanical properties of parts manufactured via PolyJet direct 3D printing, 23rd Annual International Solid Freeform Fabrication Symposium - An Additive Manufacturing Conference (2012).
|
http://arxiv.org/abs/2307.01752v1
|
20230704144630
|
Controlling electric and magnetic Purcell effects in phosphorene via strain engineering
|
[
"P. P. Abrantes",
"W. J. M. Kort-Kamp",
"F. S. S. Rosa",
"C. Farina",
"F. A. Pinheiro",
"Tarik P. Cysne"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"physics.optics",
"quant-ph"
] |
We investigate the spontaneous emission lifetime of a quantum emitter near a substrate coated with phosphorene under the influence of uniaxial strain. We consider both electric dipole and magnetic dipole-mediated spontaneous transitions from the excited to the ground state. The modeling of phosphorene is performed by employing a tight-binding model that goes beyond the usual low-energy description. We demonstrate that both electric and magnetic decay rates can be strongly tuned by the application of uniform strain, ranging from a near-total suppression of the Purcell effect to a remarkable enhancement of more than 1300% due to the high flexibility associated with the puckered lattice structure of phosphorene. We also unveil the use of strain as a mechanism to tailor the most probable decay pathways of the emitted quanta. Our results show that uniaxially strained phosphorene is an efficient and versatile material platform for the active control of light-matter interactions thanks to its extraordinary optomechanical properties.
[email protected]
Departamento de Física, Universidade Federal de São Carlos, Rod. Washington Luís, km 235 - SP-310, 13565-905, São Carlos, São Paulo, Brazil
Theoretical Division, Los Alamos National Laboratory, MS B262, Los Alamos, New Mexico 87545, USA
Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro 21941-972, RJ, Brazil
Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, 21941-972, RJ, Brazil
Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro 21941-972, RJ, Brazil
[email protected]
Instituto de Física, Universidade Federal Fluminense, 24210-346, Niterói RJ, Brazil
Controlling electric and magnetic Purcell effects in phosphorene via strain engineering
Tarik P. Cysne
August 1, 2023
=======================================================================================
§ INTRODUCTION
In a pioneering work, E. M. Purcell demonstrated that the surrounding environment could drastically modify the spontaneous emission (SE) rate of an excited quantum system <cit.>. This effect occurs due to the modification of the local electromagnetic density of states and, consequently, the number of available decay channels for the deexcitation of the emitter. The engineering of the SE via the Purcell effect is an accessible tool for probing the optical density of states, leading to a plethora of applications that run from the design of efficient scintillators <cit.> and light-emitting diodes <cit.> to single-photon sources <cit.>. The study of the Purcell effect remains an active topic in nanophotonics and has been investigated for emitters near structures of distinct geometries and materials <cit.>.
Quantum emitters are confined systems with discrete electronic spectra subjected to radiative optical transitions. They can either be atoms, molecules, nanoparticles, or even quantum dots. For most quantum emitters, the decay from an excited state to the ground one occurs via the electric dipole (ED) transition <cit.>. There exist, for example, a variety of quantum dots that emit via ED transitions in wavelengths ranging from 0.3 to 4.1 μm <cit.>. Nevertheless, the SE may also occur due to magnetic dipole (MD) transitions <cit.>. Most often, the MD contribution to the SE is weaker than the ED one by a factor of α=1/137 <cit.>, so the electric Purcell effect has been usually much more investigated in photonics than its magnetic counterpart. However, recent progress in nanofabrication techniques has allowed for the design of new nanostructures that enhances the MD contribution in relation to the SE <cit.>. In addition, the SE of rare-earth ions <cit.> and some suitably designed quantum dots <cit.> can also be dominated by MD transitions. Depending on the emitter, the wavelength of the MD transition may vary from 0.5 to 500 μm <cit.>. Recent studies on the magnetic Purcell effect include emitters close to dielectric nanostructures <cit.>, antiferromagnets <cit.>, and parity-time symmetric potentials <cit.>, but its full potential for applications is still unexplored.
The advent of two-dimensional (2D) materials, triggered by the synthesis of graphene nearly two decades ago, has unlocked a new venue in tailoring light-matter interactions down to the nanoscale. In contrast to the usual three-dimensional materials used in photonics, 2D materials possess an electronic structure that can be highly modified by external stimuli with weak or moderate intensities, enabling unprecedented control of light-matter interactions. For instance, the possibility of applying electromagnetic fields to control Casimir and Casimir-Polder interactions on graphene and graphene-family materials has been theoretically explored <cit.>. Similar studies on the Purcell effect <cit.>, near-field radiative heat transfer <cit.>, photonic spin Hall effect <cit.>, and resonance energy transfer <cit.> have also been performed and, despite the great level of tunability predicted in all these cases, the application of strong external electromagnetic fields may present practical difficulties. Furthermore, 2D materials are experimentally used in nanophotonics <cit.>, prompting the search for novel methods to control their interaction with light.
Phosphorene is a monolayer of black phosphorus, first synthesized in 2014 <cit.>. This atomically thin material has emerged as an appealing platform for application in optics, among other reasons, due to its anisotropic band structure and direct electronic energy gap <cit.>. Indeed, it was shown that this anisotropy may cause non-trivial changes in the sign of the Casimir-Lifshitz torque <cit.>. Some studies on the ED SE close to phosphorene have also been carried out, analyzing the behavior of its electronic spectra with layer stacking and twisting <cit.>. In contrast to other 2D materials, the puckered lattice of phosphorene makes its electronic structure very sensitive to strain <cit.>, and its flexibility allows it to sustain strain levels of up to 30% <cit.>. When subjected to uniaxial strain, which is usually implemented in experiments <cit.>, the energy band gap in phosphorene and the Fermi velocity of the carriers are altered, which modifies the anisotropic character of the material and results in a modification of its optical response.
By means of a more sophisticated tight-binding model that goes beyond the low-energy description commonly used in the framework of nanophotonics to model phosphorene layers <cit.>, we are able to describe the modifications in the material properties due to the application of a uniform strain field. Indeed, we demonstrate that this methodological progress, when applied in the context of nanophotonics, is able not only to successfully describe the optomechanical properties of phosphorene but also to unveil optical functionalities unknown so far. Based on such a model, we demonstrate that uniaxially strained phosphorene may affect the SE of electric and magnetic dipole emitters, leading to a remarkable suppression of almost 100% and enhancements of more than 1300% of the Purcell effect. We discuss the situations in which the dipole moment is aligned parallel to the x (armchair), y (zigzag), and z (perpendicular) directions. We show that the intrinsic anisotropy of the phosphorene lattice implies the dependence of the decay rate on the orientation of the electric and magnetic dipoles. Finally, our findings attest that strain can be employed to tailor the probabilities associated with the different decay channels into which the photon can be emitted, demonstrating the impact of the extraordinary optomechanical properties of phosphorene in light emission engineering.
§ THEORETICAL MODEL AND RESULTS
We use the tight-binding model for phosphorene developed in Refs. <cit.>. This model has been successfully applied in the context of condensed matter physics to describe many of phosphorene's remarkable properties, such as its topological characteristics <cit.>, the anisotropic nature of its optical response <cit.>, the quantum transport properties in the presence of disorder <cit.>, and its mesoscopic physics <cit.>. Using Harrison's prescription, one can also include the effect of a uniform strain field in the model <cit.>. Previous studies on phosphorene applied to nanophotonics used a low-energy description <cit.> simply including a direction-dependent Fermi velocity <cit.>, which captures phosphorene's anisotropic optical nature. Nevertheless, these models are insufficient to explore phosphorene's strain engineering, one of the prominent characteristics of the material. The tight-binding model for strained phosphorene is reviewed in Appendix <ref>. As we discuss in the following, the application of this tight-binding model allows for a successful description of the optomechanical properties of phosphorene and unveils the unique quantum emission functionalities that can be harnessed by the presence of strain.
The optical conductivity of a strained phosphorene monolayer can be computed from the tight-binding Hamiltonian [Eq. (<ref>)], employing linear response theory <cit.>. Here, we neglect spatial dispersion, which is supported by previous numerical calculations using different 2D materials showing that this approximation accurately describes the Purcell effect for the distance scales of interest in this work <cit.>. Within these assumptions, one can write the constitutive equation J (r, ω) = σ (ω, ϵ_μ) ·E (r, ω), where E (r, ω) is the amplitude of the oscillating electric field, J (r, ω) is the amplitude of the induced oscillating charge current, and
σ (ω, ϵ_μ)=[ σ_xx(ω, ϵ_μ) 0; 0 σ_yy(ω, ϵ_μ) ]
is the optical conductivity tensor of strained phosphorene. In this expression, ϵ_μ (μ= x, y, z) is the uniform strain in phosphorene applied along the μ direction. In Appendix <ref>, we compute the optical conductivity of strained phosphorene in different situations.
§.§ Electric dipole emission
We consider the system depicted in Fig. <ref>. The half space z<0 is composed of a homogeneous, isotropic, and nonmagnetic dielectric with permittivity ε_ s (ω). On top of this substrate (z=0), a phosphorene sheet is placed. The substrate permits the mechanical application of uniaxial strain in the phosphorene layer. We assume the upper medium z>0 to be vacuum, and an excited quantum emitter is located at r_0 = (0,0,d).
We first consider the quantum emitter as a two-level system dominated by an ED transition between the excited |e⟩ and ground |g⟩ states with energy difference E_e-E_g=ħω_0=ħ k_0 c. The electric Purcell factor (PF) is the modification in the SE rate due to the presence of neighboring objects and can be written as <cit.>
Γ^(e) (r)/Γ_0^(e) = 6 π c/ω_0 Im[ p̂·G^(e) (r,r, ω_0) ·p̂] ,
where Γ_0^(e) = |p|^2 ω_0^3/3 πħε_0 c^3 is the free space SE rate of an ED emitter, p is the emitter's transition ED moment, p̂ = p/|p|, and G^(e) (r,r', ω) is the electric dyadic Green function of the system. One can evaluate the PF writing G^(e) (r,r, ω) in terms of the diagonal part of the reflection matrices <cit.>. With the knowledge of the optical conductivity of phosphorene and the electric permittivity of the substrate, one can calculate the desired reflection coefficients by solving the Maxwell equations with the appropriate boundary conditions (see Appendix <ref>). The expressions of the electric PFs Γ^(e)_x/Γ_0^(e), Γ^(e)_y/Γ_0^(e), and Γ^(e)_z/Γ_0^(e) for the cases of transition ED moments parallel to the x (armchair), y (zigzag), and z (perpendicular) directions, respectively, can be cast as <cit.>
Γ^(e)_x/Γ_0^(e) = 1 + 3/(4π k_0) Im[ i ∫ d^2 k_∥ e^{2 i √(k_0^2 - k^2_∥) d}/(k^2_∥ √(k_0^2 - k^2_∥)) ( k^2_y r_ss - k^2_x (k_0^2 - k^2_∥)/k^2_0 r_pp ) ],
Γ^(e)_y/Γ_0^(e) = 1 + 3/(4π k_0) Im[ i ∫ d^2 k_∥ e^{2 i √(k_0^2 - k^2_∥) d}/(k^2_∥ √(k_0^2 - k^2_∥)) ( k^2_x r_ss - k^2_y (k_0^2 - k^2_∥)/k^2_0 r_pp ) ],
Γ^(e)_z/Γ_0^(e) = 1 + 3/(4π k_0^3) Im[ i ∫ d^2 k_∥ k^2_∥ e^{2 i √(k_0^2 - k^2_∥) d}/√(k_0^2 - k^2_∥) r_pp ],
where r_ss and r_pp are diagonal reflection coefficients (see Appendix <ref>) and k_∥=|k_∥|=|k_xx̂+k_yŷ|. Due to the anisotropic nature of phosphorene, we obtain Γ^(e)_x≠Γ^(e)_y. Throughout this paper, we consider a silicon carbide (SiC) substrate and, in all results of the main text, we set the Fermi energy of phosphorene at E_ F=0.7 eV. The control of the carriers density to keep the Fermi energy fixed can be done by tuning the back-gate voltage <cit.>.
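In practice, these angular-spectrum integrals are straightforward to evaluate numerically once the reflection coefficients are available. The following is a minimal illustrative sketch (not the code used to produce the figures) for the perpendicular-dipole rate Γ^(e)_z: the function r_coeff, the grid sizes, and the finite k_∥ cutoff are assumptions, with r_coeff standing for a user-supplied routine that returns r_pp(k_x, k_y) at the transition frequency, built from the strained-phosphorene conductivity and the substrate permittivity of the Appendix.

import numpy as np
from scipy.integrate import trapezoid

def purcell_z(d, k0, r_coeff, kmax_factor=60.0, n_k=4000, n_phi=91):
    """Electric Purcell factor for a z-oriented dipole at height d from the k-space integral above.

    r_coeff(kx, ky): complex p-polarized reflection coefficient of the coated half-space.
    For the magnetic counterpart the same integral holds with r_ss in place of r_pp."""
    kpar = np.linspace(1e-6 * k0, kmax_factor * k0, n_k)   # cutoff justified by exp(-2*sqrt(kpar^2-k0^2)*d)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    KP, PHI = np.meshgrid(kpar, phi, indexing="ij")
    kz = np.sqrt(k0**2 - KP**2 + 0j)                       # propagating below k0, evanescent above
    rp = r_coeff(KP * np.cos(PHI), KP * np.sin(PHI))
    # polar form of the d^2 k_par integral: the extra factor of k_par comes from the Jacobian
    integrand = 1j * KP**3 * np.exp(2j * kz * d) * rp / kz
    integral = trapezoid(trapezoid(integrand, phi, axis=1), kpar)
    # the 1/kz square-root singularity at kpar = k0 is integrable; increase n_k if needed
    return 1.0 + 3.0 / (4.0 * np.pi * k0**3) * np.imag(integral)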
In Fig. <ref>, we show the PFs as functions of the distance d between the emitter and the phosphorene/SiC medium for different values of uniaxial strain ϵ_y = -20, -10, 0, 10, 20 %. We consider emitters with ED transitions at three distinct wavelengths λ_0 = 2π c/ω_0, to wit, 1.5 μm, 4.1 μm, and 10 μm, the first two values lying in the near to mid-IR range reached by a wide variety of quantum dots <cit.>. Emitters with longer wavelengths have already been experimentally explored in the context of SE <cit.>. Comparing the results corresponding to relaxed phosphorene sheets, one can see that the longer the transition wavelengths, the more pronounced the changes in the SE rates are, with the PFs reaching values in excess of 10^5 when d = 10 nm. When strain comes into play, the PFs may be dramatically modified, particularly at small distances. As discussed in Appendix <ref>, the compressive uniaxial strain (ϵ_y<0) enhances the Drude weight and, consequently, the intraband contribution to the optical conductivity. The opposite occurs in the case of tensile strain (ϵ_y>0), which decreases the Drude weight and the intraband contribution. In most frequency ranges, the interband contribution presents the same behavior. It should be noticed that, for λ_0 = 1.5 μm and λ_0 = 4.1 μm, these patterns with ϵ_y are also followed by the PFs: The electric PF increases (decreases) with compressive (tensile) strain. The exception occurs in the case of λ_0 = 10 μm, in which the PFs reveal a non-monotonic behavior with strain ϵ_y. It is worth mentioning that, for ϵ_y = 20%, the bottom of the conduction band of phosphorene surpasses the value of 0.7 eV, and the Fermi energy used in Fig. <ref> becomes located inside the energy bandgap. In such a situation, the intraband term of the optical conductivity disappears, thereby surviving only the interband contribution, which produces abrupt reductions in the PFs. Finally, note that all SE rates tend to the free-space value at large distances, and the associated PFs are barely affected by strain, as expected.
To quantify the degree of control of the SE, we define
ΔΓ^(e)_ν = ( Γ^(e)_ν|_ϵ_x,y≠ 0 - Γ^(e)_ν|_ϵ_x,y=0 ) / Γ^(e)_ν|_ϵ_x,y=0,
where Γ^(e)_ν|_ϵ_x,y≠ 0 (Γ^(e)_ν|_ϵ_x,y=0) is the decay rate of the emitter aligned parallel to the ν direction near strained (relaxed) phosphorene/SiC half space. The percentage variation in the SE rates of the three emitters induced by strain applied in the y direction for ϵ_y=± 20 % as a function of separation between the emitter and the phosphorene/SiC half-space is illustrated in Fig. <ref>. From these results, the signature of the anisotropic nature of phosphorene becomes evident since ΔΓ^(e)_x≠ΔΓ^(e)_y. We highlight that the electric PFs for λ_0 = 4.1 μm can be enhanced up to 1300 % by compressive strain ϵ_y = -20 %. In the case of tensile strain ϵ_y = 20%, for which Fermi energy E_ F=0.7 eV lies inside the insulating gap, the PFs are reduced by a striking factor close to 100%, being nearly suppressed. In this situation, phosphorene becomes invisible to the emitter, demonstrating that strain can switch on and off quantum emission on demand. A residual Purcell effect still occurs due to the presence of the SiC substrate.
Despite the inherent anisotropic character of phosphorene, the effects of uniaxial strain along the x direction are qualitatively similar when compared to the previous ones. By using an expression equivalent to Eq. (<ref>), we can estimate the relative modification in the SE generated by strain applied in the x direction, as presented in Appendix <ref>.
§.§ Magnetic dipole emission
We now discuss the study of the magnetic Purcell effect. The setup is similar to the one considered in Fig. <ref>. The difference is that the emitter decays to the ground state mediated by an MD transition. The magnetic PF can be obtained from <cit.>
Γ^(m) (r)/Γ_0^(m) = 6 π c^3/ω_0^3 Im[ m̂·G^(m) (r, r, ω_0) ·m̂] .
In the previous relation, Γ_0^(m) = μ_0 ω_0^3 |m|^2/3 πħ c^3 is the free space SE rate of an MD emitter, m is the emitter's transition MD moment, m̂ = m/|m|, and G^(m) (r, r', ω_0) is the magnetic Green dyadic. Analogously to the electric case, one can also express the magnetic PFs in terms of the diagonal part of the reflection matrices, and the formulas corresponding to the MD moments parallel to the x, y, and z directions are
Γ^(m)_x/Γ_0^(m) = 1 + 3/(4π k_0) Im[ i ∫ d^2 k_∥ e^{2 i √(k_0^2 - k^2_∥) d}/(k^2_∥ √(k_0^2 - k^2_∥)) ( k^2_y r_pp - k^2_x (k_0^2 - k^2_∥)/k^2_0 r_ss ) ],
Γ^(m)_y/Γ_0^(m) = 1 + 3/(4π k_0) Im[ i ∫ d^2 k_∥ e^{2 i √(k_0^2 - k^2_∥) d}/(k^2_∥ √(k_0^2 - k^2_∥)) ( k^2_x r_pp - k^2_y (k_0^2 - k^2_∥)/k^2_0 r_ss ) ],
Γ^(m)_z/Γ_0^(m) = 1 + 3/(4π k_0^3) Im[ i ∫ d^2 k_∥ k^2_∥ e^{2 i √(k_0^2 - k^2_∥) d}/√(k_0^2 - k^2_∥) r_ss ].
Note that the final expressions for the magnetic PFs are very similar to the electric ones, given in Eqs. (<ref>)-(<ref>), only requiring the exchange r_ss↔ r_pp <cit.>. Likewise, Γ^(m)_x≠Γ^(m)_y due to the anisotropy of phosphorene.
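In numerical terms this exchange simply means passing a different reflection coefficient to the same routine. As a usage note for the sketch given earlier for the electric case (r_pp_coeff and r_ss_coeff are assumed, user-supplied callables):

pf_ed_z = purcell_z(d, k0, r_pp_coeff)   # electric dipole perpendicular to the interface
pf_md_z = purcell_z(d, k0, r_ss_coeff)   # magnetic dipole: same integral with r_pp -> r_ss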
In Fig. <ref>, we display the magnetic PFs as functions of the distance between the emitter and the phosphorene/SiC half-space for different values of uniaxial strain ϵ_y=-20, -10, 0, 10, 20 % applied along the y direction. We assume emitters with magnetic transition wavelengths λ_0 = 10, 150, 300 μm. The general behavior of the magnetic PFs presents some similarities when compared to the electric one, showing huge variations for small d. For larger d, the spontaneous decay rates tend to the free-space value, as expected. Furthermore, compressive strains (ϵ_y<0) enhance the magnetic PFs, whereas tensile strains (ϵ_y>0) diminish them. In this case, however, the magnetic PFs obey the scaling law Γ^(m)_ν/Γ^(m)_0∝ d^-2 (ν = x, y, z) for small separations, which can be clearly noticed in the plots with larger wavelengths (λ_0=150 μm and λ_0=300 μm) and for strain values whose Fermi energy E_ F=0.7 eV crosses the phosphorene bands (ϵ_x,y=-20, -10, 0, 10%). It is noteworthy that, for our choices of ED transitions, we did not find any scaling law in this same distance regime. We briefly mention that, in the case of ED emitters near graphene, it was shown that larger wavelength values and small distance regimes also obey a scaling law of the form Γ^(e)/Γ^(e)_0∝ d^-4 <cit.>.
To quantify the change in the magnetic PFs produced by strain, we define the quantity ΔΓ^(m)_ν analogous to Eq. (<ref>). Figure <ref> shows the results for the relative modification on the magnetic PFs produced by compressive (tensile) strain ϵ_y=-20% (20%). In Appendix <ref>, we included analogous plots considering strain along the x direction. In both situations, the tensile strain may nearly suppress the magnetic PFs for small separations between the emitter and the phosphorene/SiC medium. The compressive strain along the two directions strongly enhances the magnetic PFs for small distances d for the three wavelengths considered.
§ DECAY CHANNELS
Results portrayed in Figs. <ref>-<ref> demonstrate the potential of manipulating the electric and magnetic PFs of an emitter close to phosphorene/SiC by applying strain. To acquire more physical insight into these results, we analyze the decay channels of the emitted quanta in the specific case of dipoles perpendicular to the phosphorene interface with strain applied in the y direction. The outcome is qualitatively similar for dipoles parallel to the surface and/or strain applied in the x direction.
The relaxation process of an emitter in free space is followed by radiative emission into propagating (Prop) modes detectable in the far field. When close to a given environment, other channels become accessible, especially in the near-field regime <cit.>. For instance, the photon can be emitted into total internal reflection (TIR) modes that show up for k_0 < k_∥ < n_s k_0, where n_s = Re[√(ε_s/ε_0)] stands for the substrate refractive index. When losses are negligible, such modes propagate within the substrate but are evanescent in vacuum. Another possibility is for the emitter to deexcite through a nonradiative process in which its energy is transferred directly to the half-space, giving rise to lossy surface waves (LSWs). They emerge when k_∥≫ n_s k_0, their energy being quickly damped and converted into heat. From Eq. (<ref>), we can extract the contributions of each channel to the decay rate as <cit.>
Γ^(e)_z, Prop/Γ_0^(e) ≃ 1 + 3/4 π k^3_0∫_0^k_0 d k_∥∫_0^2π dϕk^3_∥ Re[ e^2 i √(k_0^2 - k^2_∥)d r_pp]/√(k_0^2 - k^2_∥) ,
Γ^(e)_z, TIR/Γ_0^(e) ≃3/4π k^3_0∫_k_0^n_s k_0 dk_∥∫_0^2π dϕk^3_∥ e^-2 √(k^2_∥ - k_0^2) d Im[r_pp]/√(k^2_∥ - k_0^2),
Γ^(e)_z, LSW/Γ_0^(e) ≃3/4π k^3_0∫_n_s k_0^∞ d k_∥∫_0^2π dϕk^3_∥ e^- 2 √(k^2_∥ - k_0^2) d Im[r_pp]/√(k^2_∥ - k_0^2).
In the case of the magnetic Purcell effect, the decay contributions follow the aforementioned expressions with the exchange r_pp↔ r_ss [see Eq. (<ref>)]. The probabilities p_z, Prop^(e), p_z, TIR^(e), and p_z, LSW^(e) of energy emission into the different decay channels are calculated as the ratio between the partial and the total rates. A similar decomposition can be done for dipoles lying parallel to the x and y directions.
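To make the bookkeeping concrete, the sketch below splits the k_∥ integral for a perpendicular ED emitter into the three sectors defined above (k_∥ < k_0, k_0 < k_∥ < n_s k_0, k_∥ > n_s k_0) and normalizes by the total rate to obtain the channel probabilities. It again assumes an isotropic conducting-sheet model with placeholder parameters; only the structure of the channel decomposition is meant to be illustrated.

```python
import numpy as np
from scipy.integrate import quad

lam0  = 4.1e-6                   # transition wavelength (m)
d     = 50e-9                    # emitter-surface distance (m)
c, eps0 = 2.998e8, 8.854e-12
omega = 2.0 * np.pi * c / lam0
k0    = omega / c
eps_r = 6.7 + 0.5j               # toy relative permittivity of the substrate
n_s   = np.sqrt(eps_r).real      # substrate refractive index
sigma = (0.5 + 3.0j) * 1e-4      # toy isotropic sheet conductivity (S)

def r_pp(u):
    """p-polarization coefficient of a conducting sheet on a substrate, u = k_par/k0."""
    kz1 = k0 * np.sqrt(1.0 - u**2 + 0j)
    kz2 = k0 * np.sqrt(eps_r - u**2 + 0j)
    s   = kz1 * kz2 * sigma / (omega * eps0)
    return (eps_r * kz1 - kz2 + s) / (eps_r * kz1 + kz2 + s)

def g(u):
    # common integrand of the three channels: Re[ u^3 exp(2 i kz1 d) r_pp / sqrt(1-u^2) ]
    kz1 = k0 * np.sqrt(1.0 - u**2 + 0j)
    return (u**3 * np.exp(2j * kz1 * d) * r_pp(u) / np.sqrt(1.0 - u**2 + 0j)).real

umax  = max(20.0, 20.0 / (k0 * d))
prop  = 1.0 + 1.5 * quad(g, 0.0, 1.0, limit=200)[0]    # free space + propagating reflection
tir   = 1.5 * quad(g, 1.0, n_s, limit=200)[0]           # total internal reflection modes
lsw   = 1.5 * quad(g, n_s, umax, limit=400)[0]          # lossy surface waves
total = prop + tir + lsw
for name, rate in [("Prop", prop), ("TIR", tir), ("LSW", lsw)]:
    print(f"p_z,{name} ~ {rate / total:.3f}")
```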
In Fig. <ref>, we depict the decay probabilities as functions of the distance d in order to uncover the role of the different relaxation channels for an ED emitter. Each plot refers to a transition wavelength (λ_0 = 1.5, 4.1, and 10 μm), and different strain intensities along the y direction (ϵ_y=-10, 0, 10%) are shown in each panel. As d increases, the propagating modes become the dominant decay channel, minimizing the effects of the interface on SE. This can be clearly noticed for λ_0 = 1.5 μm, in which case the decay via propagating modes dominates. However, the same behavior will also occur for the other wavelengths provided d is large enough. Indeed, as d decreases, the propagating channel gets progressively suppressed, giving rise to competition between TIR and LSW modes. Moreover, the probabilities associated with these decay channels may be highly influenced by strain, to the point where one may tune the relative dominance between TIR and LSW processes. For the transition wavelength λ_0 = 4.1 μm, this variation in the dominant decay channel can be achieved for separations 20 nm ≲ d ≲ 100 nm, while for λ_0 = 10 μm, the corresponding range is 100 nm ≲ d ≲ 300 nm. Lastly, note that LSWs govern the SE in the near-field regime (which also holds for λ_0 = 1.5 μm in the extreme near-field). In Fig. <ref>, we display the probabilities of the different relaxation channels for the MD case for the transition wavelengths λ_0 = 10, 150, 300 μm. The main aspects of the discussion follow analogously to the previous case, with the difference that the distance scales for which each mode is most relevant may extend to larger values. Ultimately, Figs. <ref> and <ref> unveil the possibility of controlling the preferred pathway of the emitted energy in the decay process via uniform uniaxial strain. They also show that, at a fixed distance, emitters with larger wavelengths are more amenable to strain control of spontaneous emission in phosphorene.
§ CONCLUSIONS
In summary, we have applied a tight-binding approach that goes beyond the low-energy description traditionally used in nanophotonics to investigate spontaneous emission in phosphorene layers. With this methodology, we demonstrate remarkable external control over the electric and magnetic Purcell effects by applying uniform strain. The application of strain is also shown to control the different decay pathways that contribute to SE. The use of high strain levels is only possible due to the great flexibility of the phosphorene sheet, which has its origin in its puckered lattice structure. The strain-based approach to control quantum emission in phosphorene is within the reach of state-of-the-art techniques <cit.>, and it represents a clear advantage compared to existing proposals based on electromagnetic fields acting as external agents. We hope that our results will not only allow for an alternative method to tune spontaneous emission but also be relevant in developing new photonic devices, as the Purcell effect is a key mechanism in many quantum-optical applications such as single-photon sources.
§ ACKNOWLEDGMENTS
T.P.C., F.A.P., F.S.S.R., and C.F. thank the Brazilian Agencies CAPES, CNPq, and FAPERJ for financial support. P.P.A. is supported by the São Paulo Research Foundation (FAPESP) through Grant No. 2021/04861-7. W.J.M.K.-K. acknowledges the Laboratory Directed Research and Development program of Los Alamos National Laboratory under Projects No. 20220228ER and 20220627DI. T.P.C. would like to thank R. de Melo e Souza for the fruitful discussions.
§ TIGHT-BINDING MODEL OF PHOSPHORENE
Throughout this work, we describe the electronic structure of phosphorene by employing a simplified two-band tight-binding model <cit.>. The inclusion of uniform strain is done by using the Harrison prescription <cit.>. In short, this model captures the behavior of the anisotropic spectra of phosphorene with a uniform strain field. The Hamiltonian can be cast into
H^(2)_q=[ B_q e^i(q_a-q_b)/2 A_q + C_q e^i(q_a-q_b)/2; A^*_q + C^*_q e^-i(q_a-q_b)/2 B_q e^i(q_a-q_b)/2 ],
where
A_q = t_2+t_5 e^-iq_a,
B_q = 4t_4 e^-i(q_a-q_b)/2cos(q_a/2) cos(q_b/2),
C_q = 2 e^iq_b/2cos(q_b/2)(t_1 e^-iq_a+t_3).
Here, q_a=q·a, q_b=q·b, where a=(4.580 Å) x̂ and b=(3.320 Å) ŷ are the lattice vectors of the unstrained phosphorene monolayer and q is the electronic momentum. One can follow the Harrison prescription and include the effect of strain in the hopping amplitudes <cit.>
t_i≈( 1-2 α^i_xϵ_x- 2α^i_y ϵ_y-2α^i_z ϵ_z )t^0_i,
with t^0_1=-1.220 eV, t^0_2=3.665 eV, t^0_3=-0.205 eV, t^0_4=-0.105 eV, and t^0_5=-0.055 eV being the hopping parameters of unstrained phosphorene <cit.>, and α^i_μ=(δ^i_μ/|δ^i|)^2, where δ^i is the i-th hopping vector: δ^1=(r^0_1x, r^0_1y, 0), δ^2=(-r^0_2x, 0, -r^0_2z), δ^3=(-2r^0_2x-r^0_1x, r^0_1y, 0), δ^4=(r^0_1x+r^0_2x, r^0_1y, -r^0_2z), and δ^5=(2r^0_1x+r^0_2x, 0, -r^0_2z). They are written in terms of the vectors r^0_1=( 1.503, 1.660, 0) Å and r^0_2=( 0.786, 0, 2.140) Å. The parameter ϵ_μ is negative (positive) for compressive (tensile) uniaxial strain along the μ direction (μ=x, y, z).
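A minimal sketch of this prescription, using only the hopping parameters and vectors quoted above (lengths in Å), is given below; it evaluates t_i(ϵ) and is not meant to reproduce the full band-structure workflow.

```python
import numpy as np

# Unstrained hoppings (eV) and hopping vectors (Angstrom), as listed in the text
t0 = np.array([-1.220, 3.665, -0.205, -0.105, -0.055])
r1x, r1y = 1.503, 1.660
r2x, r2z = 0.786, 2.140
delta = np.array([
    [ r1x,           r1y,  0.0 ],    # delta^1
    [-r2x,           0.0, -r2z ],    # delta^2
    [-2*r2x - r1x,   r1y,  0.0 ],    # delta^3
    [ r1x + r2x,     r1y, -r2z ],    # delta^4
    [ 2*r1x + r2x,   0.0, -r2z ],    # delta^5
])
alpha = delta**2 / np.sum(delta**2, axis=1, keepdims=True)   # alpha^i_mu = (delta^i_mu/|delta^i|)^2

def strained_hoppings(eps_x=0.0, eps_y=0.0, eps_z=0.0):
    """Harrison prescription: t_i = (1 - 2 sum_mu alpha^i_mu eps_mu) t^0_i."""
    eps = np.array([eps_x, eps_y, eps_z])
    return (1.0 - 2.0 * alpha @ eps) * t0

print(strained_hoppings())              # recovers t^0_i for zero strain
print(strained_hoppings(eps_y=-0.20))   # 20% compressive strain along y
```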
In Fig. <ref>(a), we show how strain along the y direction modifies the energy spectra E(q) and the velocity of the carriers v(q)=ħ^-1∇_q E(q). The compressive strain (ϵ_y<0) reduces the energy gap of phosphorene at the Γ point and enhances the modulus of the velocity of the carriers. On the other hand, the tensile strain (ϵ_y>0) enhances the energy gap of the electronic spectra and reduces the velocity of the carriers. The behaviors of the energy spectra and the electronic velocity with strain along the x direction are qualitatively similar, while strain in the z direction produces an opposite effect, as can be seen in Fig. <ref>(c).
§ OPTICAL CONDUCTIVITY
With Hamiltonian (<ref>), we can compute the matrix elements of the optical conductivity tensor of strained phosphorene, written in Eq. (<ref>). Generally, it is possible to express the optical conductivity as a sum of two contributions, to wit, σ_μ,μ(ω, ϵ_μ)=σ^(Inter)_μ,μ(ω)+σ^(Intra)_μ,μ(ω) <cit.>. The intraband contribution is given by
σ^(Intra)_μ, μ (ω) = i D_μ,μ/ħω + i η_1,
where the Drude weight is
D_μ, μ = - g_s e^2 ħ/S∑_n = 1,2∑_q f'_n,q⟨u_q,n|v̂_μ (q) |u_q, n⟩^2.
The interband contribution is obtained from the Kubo formula <cit.>
σ^(Inter)_μ, μ (ω) = i g_s e^2ħ/S∑_q⟨u_q,1|v̂_μ (q) |u_q, 2⟩^2/Δ E_q
×[f_q,1 - f_q,2/ħω + Δ E_q + i η_2 + f_q,1 - f_q,2/ħω - Δ E_q + i η_2],
where |u_q, 1(2)⟩ is the eigenvector of the Hamiltonian (<ref>), associated with energy bands E_q, 1(2), and Δ E_q = (E_q, 2-E_q, 1). Furthermore, f_q,1(2)=f_FD(E_q, 1(2)), with f_FD(E)={exp[(E-E_F)/k_BT] +1}^-1 being the Fermi-Dirac distribution. In Eq. (<ref>), we also defined f'_1(2),q=[ ∂ f_FD(E)/∂ E ] |_E=E_q,1(2). The velocity operator in the μ direction is given by v̂_μ(q)=ħ^-1∂ H^(2)_q/∂ q_μ, with μ=x, y. In Eqs. (<ref>) and (<ref>), g_s=2 is the spin degeneracy factor, S is the area of the phosphorene layer. We express the results in terms of σ_0=e^2/ħ. In Eq. (<ref>), η_1=ħ/(2τ) and τ is the momentum relaxation time <cit.>. In Eq. (<ref>), η_2 is a small phenomenological quantity. In all results presented in this paper, we used T= 180 K, η_1= 25 meV, and η_2= 25 meV <cit.>.
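The sketch below illustrates how such conductivity formulas are typically assembled numerically: diagonalize a two-band Bloch Hamiltonian on a k-grid, obtain the velocity matrix elements by finite differences, and sum the intraband (Drude) and interband (Kubo) contributions. To keep it self-contained, a toy gapped two-band Hamiltonian stands in for the phosphorene model, the prefactors g_s e^2 ħ/S are set to unity, and ω, E_F, η are quoted in eV; it is a schematic of the workflow, not a reproduction of the phosphorene conductivity.

```python
import numpy as np

a = 1.0                                            # lattice constant (arbitrary units)

def H(qx, qy, gap=0.8, t=1.0):
    """Toy two-band Bloch Hamiltonian (a stand-in, not the phosphorene model)."""
    hx = t * (np.cos(qx * a) + np.cos(qy * a))
    hy = t * np.sin(qx * a)
    hz = 0.5 * gap
    return np.array([[hz, hx - 1j * hy], [hx + 1j * hy, -hz]])

def v(qx, qy, mu, dq=1e-4):
    """Velocity operator v_mu = dH/dq_mu (hbar = 1), by central differences."""
    dx, dy = (dq, 0.0) if mu == 0 else (0.0, dq)
    return (H(qx + dx, qy + dy) - H(qx - dx, qy - dy)) / (2.0 * dq)

def sigma_mu(omega, mu=0, EF=1.2, T=180.0, eta=0.025, N=100):
    """Intraband + interband sigma_mu,mu(omega); prefactors g_s e^2 hbar / S set to 1."""
    kT = 8.617e-5 * T                              # k_B T in eV
    qs = np.linspace(-np.pi / a, np.pi / a, N, endpoint=False)
    sig_inter, drude = 0.0 + 0.0j, 0.0
    for qx in qs:
        for qy in qs:
            E, U = np.linalg.eigh(H(qx, qy))       # E[0] < E[1]
            x = (E - EF) / kT
            f = 1.0 / (np.exp(x) + 1.0)            # Fermi-Dirac occupations
            fp = -np.exp(x) / (kT * (np.exp(x) + 1.0) ** 2)   # df/dE
            V = U.conj().T @ v(qx, qy, mu) @ U     # velocity in the band basis
            dE = E[1] - E[0]
            sig_inter += 1j * abs(V[0, 1]) ** 2 / dE * (
                (f[0] - f[1]) / (omega + dE + 1j * eta)
                + (f[0] - f[1]) / (omega - dE + 1j * eta))
            drude += -(fp[0] * abs(V[0, 0]) ** 2 + fp[1] * abs(V[1, 1]) ** 2)
    sig_inter /= N * N                             # Brillouin-zone average
    sig_intra = 1j * (drude / (N * N)) / (omega + 1j * eta)
    return sig_inter + sig_intra

print(sigma_mu(omega=0.5))
```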
Now, we briefly discuss the optical conductivity in insulating and metallic cases. In Fig. <ref>, we show the Drude weight as a function of the Fermi energy for different values of uniaxial strain along the y direction. These plots illustrate how the Drude weight can be well controlled by uniform strain, which occurs as a direct consequence of the change in the velocities of the carriers due to the strain, as previously mentioned in Fig. <ref>(a). In addition, we may separate two distinct situations depending on the Fermi energy. In the insulating case, E_ F lies inside the energy gap (shaded region in Fig. <ref>), and the Drude weight vanishes. Consequently, the intraband term does not contribute to the optical conductivity. The metallic case occurs when E_F crosses a Bloch band of the phosphorene energy spectra. In this situation, the Drude weight is non-zero, and the optical conductivity has contributions from both interband and intraband terms.
In Fig. <ref>, we show the real and imaginary parts of the optical conductivity for the case of the Fermi energy E_ F= (E_q=0,2+E_q=0,1)/2 lying inside the insulating bandgap and different values of ϵ_y. In this situation, σ_μ,μ (ω) = σ^(Inter)_μ,μ (ω). For comparison, we show in Fig. <ref> the same quantities, but for a fixed Fermi energy E_ F = 0.7 eV. For ϵ_y=-20% and ϵ_y=0% in Fig. <ref>, the Fermi energy crosses the electronic bands of phosphorene, and the system exhibits a metallic behavior, such that σ_μ,μ(ω) = σ^(Inter)_μ,μ(ω) + σ^(Intra)_μ,μ(ω). For ϵ_y=20 %, the bottom of the conduction band rises above the Fermi energy E_ F=0.7 eV, which then lies inside the energy gap, so that the intraband contribution to the conductivity vanishes. From these results, the possibility of controlling the optical response of phosphorene by means of uniaxial strain becomes evident. Typically, a fixed Fermi energy can be maintained by controlling the carrier doping <cit.>, which is possible by tuning the back gate voltage in the substrate <cit.>.
§ REFLECTION COEFFICIENTS
In our system, the phosphorene sheet is grown on top of a substrate of silicon carbide (SiC), whose electrical permittivity can be modeled by a simple Drude-Lorentz model <cit.>
ε_SiC(ω)/ε_0 = ε_∞( 1+ ω^2_L-ω^2_T/ω_T^2 - ω^2 - iω/τ_SiC),
with ε_∞=6.7, ω_L=182.7 × 10^12 rad/s, ω_T=149.5 × 10^12 rad/s, and τ_SiC^-1=0.9 × 10^12 rad/s.
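For completeness, this Drude-Lorentz model translates directly into a few lines of code; the sketch below simply evaluates ε_SiC(ω)/ε_0 with the parameters quoted above.

```python
import numpy as np

def eps_sic_rel(omega):
    """Relative permittivity eps_SiC(omega)/eps_0 from the Drude-Lorentz model above."""
    eps_inf = 6.7
    w_L, w_T, inv_tau = 182.7e12, 149.5e12, 0.9e12          # rad/s
    return eps_inf * (1.0 + (w_L**2 - w_T**2) / (w_T**2 - omega**2 - 1j * omega * inv_tau))

# e.g. at the lambda_0 = 10 um magnetic transition considered in the main text
omega0 = 2.0 * np.pi * 2.998e8 / 10e-6
print(eps_sic_rel(omega0))
```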
The reflection coefficients of the phosphorene/SiC medium can be derived by solving the Maxwell equations with proper boundary conditions <cit.>. Following Ref. <cit.>, we obtain the diagonal parts of the reflection matrices
r_pp = Δ_+^ TΔ_-^ L + Λ^2/Δ_+^ TΔ_+^ L + Λ^2 and r_ss = - Δ_-^ TΔ_+^ L + Λ^2/Δ_+^ TΔ_+^ L + Λ^2.
In both equations,
Δ^ L_± = ( k_z,1ε_2 ± k_z,2ε_1 + k_z,1 k_z,2σ_ L/ω)/ε_0,
Δ^ T_± = (k_z,2μ_1 ± k_z,1μ_2 +ωμ_1 μ_2 σ_ T)/μ_0,
Λ^2 = - Z_0^2 μ_1 μ_2 k_z,1 k_z,2σ_ LT^2/μ_0^2.
In our system, medium 1 is vacuum (ε_1 = ε_0, μ_1 = μ_0) and medium 2 is the SiC substrate (ε_2 = ε_SiC, μ_2 = μ_0). In Eqs. (<ref>)-(<ref>), Z_0 = √(μ_0/ε_0), k_z,n = √(k^2_n - k^2_∥), k_∥ = |k_∥| = |k_x x̂ + k_y ŷ|, and k_n = ω√(ε_n μ_n) (n=1,2). We have also defined the optical conductivities in the reference frame of the incident electromagnetic wave <cit.>, so that σ_ L=( k_x^2 σ_xx + k_y^2 σ_yy)/k^2_∥, σ_ T = ( k_y^2 σ_xx + k_x^2 σ_yy)/k^2_∥, and σ_ LT=k_x k_y ( σ_yy - σ_xx)/k_∥^2, where σ_xx (yy) are given by Eqs. (<ref>)-(<ref>). We stress that the inclusion of substrate in this work has conceptual importance, allowing for the application of strain in the plane of phosphorene. Nevertheless, the optical response in the phosphorene/SiC half-space is dominated by phosphorene. The strain along the z direction cannot be controlled in the setup proposed in Fig. <ref>.
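A compact numerical transcription of these reflection coefficients is sketched below. The function takes the in-plane wave vector, the substrate permittivity and the two diagonal sheet conductivities as inputs; the demo call uses placeholder conductivity values rather than the phosphorene response, and assumes k_∥ ≠ 0.

```python
import numpy as np

eps0, mu0, c = 8.854e-12, 4e-7 * np.pi, 2.998e8
Z0 = np.sqrt(mu0 / eps0)

def reflection(omega, kx, ky, eps2, sxx, syy):
    """r_pp and r_ss of the vacuum / anisotropic sheet / substrate system (Eqs. above).
    eps2 is the absolute permittivity of medium 2; sxx, syy are sheet conductivities (S)."""
    eps1, mu1, mu2 = eps0, mu0, mu0
    kpar2 = kx**2 + ky**2
    kz1 = np.sqrt(omega**2 * eps1 * mu1 - kpar2 + 0j)
    kz2 = np.sqrt(omega**2 * eps2 * mu2 - kpar2 + 0j)
    # conductivities in the frame of the incident wave
    sL  = (kx**2 * sxx + ky**2 * syy) / kpar2
    sT  = (ky**2 * sxx + kx**2 * syy) / kpar2
    sLT = kx * ky * (syy - sxx) / kpar2
    dLp = (kz1 * eps2 + kz2 * eps1 + kz1 * kz2 * sL / omega) / eps0
    dLm = (kz1 * eps2 - kz2 * eps1 + kz1 * kz2 * sL / omega) / eps0
    dTp = (kz2 * mu1 + kz1 * mu2 + omega * mu1 * mu2 * sT) / mu0
    dTm = (kz2 * mu1 - kz1 * mu2 + omega * mu1 * mu2 * sT) / mu0
    Lam2 = -Z0**2 * mu1 * mu2 * kz1 * kz2 * sLT**2 / mu0**2
    r_pp = (dTp * dLm + Lam2) / (dTp * dLp + Lam2)
    r_ss = -(dTm * dLp + Lam2) / (dTp * dLp + Lam2)
    return r_pp, r_ss

# demo: lambda = 4.1 um, oblique k_par, toy anisotropic sheet conductivities
w = 2.0 * np.pi * c / 4.1e-6
print(reflection(w, kx=0.4 * w / c, ky=0.2 * w / c,
                 eps2=8.0 * eps0, sxx=1e-4 + 2e-3j, syy=5e-5 + 1e-3j))
```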
§ PURCELL FACTORS FOR STRAINS IN THE X DIRECTION
In Figs. <ref> and <ref>, we presented the electric and magnetic PFs as functions of the separation between the emitter and the phosphorene/SiC half-space for different values of uniaxial strain applied along the y direction. Figures <ref> and <ref> contain the results for the electric and magnetic PFs, respectively, when considering the uniaxial strain applied along the x direction.
Figures <ref> and <ref> present the percentage variation in the electric and magnetic PFs, respectively, generated by the uniaxial strain along the x direction as functions of the distance from the emitter to the phosphorene/SiC medium. In both results, the compressive strain may strongly increase the PFs, while the tensile strain nearly suppresses them. We highlight that the electric PF for λ_0 = 4.1 μm can be enhanced by up to almost 1000 %.
apsrev
1
Purcell1946 E. M. Purcell, H. C. Torrey, and R. V. Pound, Phys. Rev. 69, 37 (1946).
Ye-Bizarri-Scintillator-2022 W. Ye, G. Bizarri, M. D. Birowosuto, and L. J. Wong, ACS Photonics 9, 3917 (2022).
Kim-Jung-Park-2021 S. K. Kim, S. W. Jung, H.-U. Park, R. Lampande, J. H. Kwon, Org. Electron. 95, 106192 (2021).
Huang-Chen-Yang-OptExpress-2022 S. Huang, Y. Chen, Y. Yang, and W. E. I. Sha, Opt. Express 14, 24544 (2022).
Kaupp-Hunger-PRApplied-2016 H. Kaupp, T. Hummer, M. Mader, B. Schlederer, J. Benedikter, P. Haeusser, H.-C. Chang, H. Fedder, T. W. Hansch, and D. Hunger, Phys. Rev. Appl. 6, 054010 (2016).
Jeantet-Voisin-PRL-2016 A. Jeantet, Y. Chassagneux, C. Raynaud, Ph. Roussignol, J. S. Lauret, B. Besga, J. Esteve, J. Reichel, and C. Voisin, Phys. Rev. Lett. 116, 247402 (2016).
Blanco-GarciadeAbajo-2004 L. A. Blanco and F. J. García de Abajo, Phys. Rev. B 69, 205414 (2004).
Rosa-Farina-2008 F. S. S. Rosa, T. N. C. Mendes, A. Tenório, and C. Farina, Phys. Rev. A 78, 012105 (2008).
Biehs-Greffet-2011 S.-A. Biehs and J.-J. Greffet, Phys. Rev. A 84, 052902 (2011).
Vladimirova-adkov-2012 Y. V. Vladimirova, V. V. Klimov, V. M. Pastukhov, and V. N. Zadkov, Phys. Rev. A 85, 053408 (2012).
Kort-Kamp-Farina-2013 W. J. M. Kort-Kamp, F. S. S. Rosa, F. A. Pinheiro, and C. Farina, Phys. Rev. A 87, 023837 (2013).
Klimov-ACSnano-2015 Y.-S. Park, S. Guo, N. S. Makarov, and V. I. Klimov, ACS Nano 9, 10386 (2015).
Klimov-NatMat-2019 Y.-S. Park, J. Lim and, V. I. Klimov, Nat. Mater. 18, 249 (2019).
Lodahl-Nature-2004 P. Lodahl, A. Floris van Driel, I. S. Nikolaev, A. Irman, K. Overgaag, D. Vanmaekelbergh, and W. L. Vos, Nature 430, 654 (2004).
Lodahl-RPM-2015 P. Lodahl, S. Mahmoodian, and S. Stobbe, Rev. Mod. Phys. 87, 347 (2015).
vanDriel-PRL-2005 A. F. van Driel, G. Allan, C. Delerue, P. Lodahl, W. L. Vos, and D. Vanmaekelbergh, Phys. Rev. Lett. 95, 236804 (2005).
Novotny-book L. Novotny and B. Hecht, Principles of Nano-Optics, 2nd ed. (Cambridge University Press, Cambridge, 2006).
Review-QuantumDots H. Lu, G. M. Carroll, N. R. Neale, and M. C. Beard, ACS Nano 13, 939 (2019).
Hussein-Neshev-OLett-2015 R. Hussain, S. S. Kruk, C. E. Bonner, M. A. Noginov, I. Staude, Y. S. Kivshar, N. Noginova, and D. N. Neshev, Opt. Lett. 40, 1659 (2015).
THz-Mag-Purcell H.-W. Wu, Y. Li, H.-J. Chen, Z.-Q. Sheng, H. Jing, R.-H. Fan, and R.-W. Peng, ACS Appl. Nano Mater. 2, 1045 (2019).
Alu-magneticSE-Review D. G. Baranov, R. S. Savelev, S. V. Li, A. E. Krasnok, and A. Alú, Laser Photonics Rev. 11, 1600268 (2017).
Feng-QD-Lambda-500nm T. Feng, W. Zhang, Z. Liang, Y. Xu, and E. Miroshnichenko, ACS Photonics 5, 678 (2018).
MPF-Silicon-Nanostructures Y. Brule, P. Wiecha, A. Cuche, V. Paillard, and G. C. des Francs, Opt. Express 12, 20360 (2022).
Ferreira-Peres-EPL-2019 B. A. Ferreira and N. M. R. Peres, Europhys. Lett. 127, 37002 (2019).
Alaeian-Dionne-PRB-2015 H. Alaeian and J. A. Dionne, Phys. Rev. B 91, 245108 (2015).
Cysne-CasimirPolder-PRA-2014 T. Cysne, W. J. M. Kort-Kamp, D. Oliver, F. A. Pinheiro, F. S. S. Rosa, and C. Farina, Phys. Rev. A 90, 052511 (2014).
Silvestre-QR-PRA-2019 M. Silvestre, T. P. Cysne, D. Szilard, F. A. Pinheiro, and C. Farina, Phys. Rev. A 100, 033605 (2019).
Abrantes-QR-PRB-2021 P. P. Abrantes, Tarik P. Cysne, D. Szilard, F. S. S. Rosa, F. A. Pinheiro, and C. Farina, Phys. Rev. B 104, 075409 (2021).
Rodriguez-Lopez-NatCommun-2017 P. Rodriguez-Lopez, W. J. M. Kort-Kamp, D. A. R. Dalvit, and L. M. Woods, Nat. Commun. 8, 14699 (2017).
Muniz-Farina-Kort-Kamp-2021 Y. Muniz, C. Farina, and W. J. M. Kort-Kamp, Phys. Rev. Res. 3, 023061 (2021).
Kort-Kamp-Amorim-FresnelCoefficients W. J. M. Kort-Kamp, B. Amorim, G. Bastos, F. A. Pinheiro, F. S. S. Rosa, N. M. R. Peres, and C. Farina, Phys. Rev. B 92, 205415 (2015).
NFRHTGraphene-PRApp H. Wu, Y. Huang, L. Cui, and K. Zhu, Phys. Rev. Appl. 11, 054020 (2019).
NFRHTGraphene-PRB L. Ge, K. Gong, Y. Cang, Y. Luo, X. Shi, and Y. Wu, Phys. Rev. B 100, 035414 (2019).
Kort-Kamp-PRL-2017 W. J. M. Kort-Kamp, Phys. Rev. Lett. 119, 147401 (2017).
MShah-JPD-AppPhys M. Shah, J. Phys. D: Appl. Phys. 55, 105105 (2022).
Abrantes-RET-2021 P. P. Abrantes, G. Bastos, D. Szilard, C. Farina, and F. S. S. Rosa, Phys. Rev. B 103, 174421 (2021).
Low-Martin-Moreno-NatureMaterials T. Low, A. Chaves, J. D. Caldwell, A. Kumar, N. X. Fang, P. Avouris, T. F. Heinz, F. Guinea, L. Martin-Moreno, and F. Koppens, Nat. Mater. 16, 182 (2017).
Reserbat-Plantey-ACSPhotonics-2021 A. Reserbat-Plantey, I. Epstein, L. Torre, A. T. Costa, P. A. D. Gonçalves, N. Asger Mortensen, M. Polini, J. C. W. Song, N. M. R. Peres, and F. H. L. Koppens, ACS Photonics 8, 85 (2021).
Liu-Mohideen-PRL-2021 M. Liu, Y. Zhang, G.L. Klimchitskaya, V.M. Mostepanenko, and U. Mohideen, Phys. Rev. Lett. 126, 206802 (2021).
Guest-NanoLett-2018 C. Husko, J. Kang, G. Moille, J. D. Wood, Z. Han, D. Gosztola, X. Ma, S. Combrie, A. De Rossi, M. C. Hersam, X. Checoury, and J. R. Guest, Nano Lett. 18, 6515 (2018).
Phosphorene-First-Syntesis L. Li, Y. Yu, G. Jun Ye, Q. Ge, X. Ou, H. Wu, D. Feng, X. H. Chen, and Y. Zhang, Nat. Nanotechnol. 9, 372 (2014).
Phosphorene-Second-Syntesis H. Liu, A. T. Neal, Z. Zhu, D. Tomanek, and P. D. Ye, ACS Nano 8, 4033 (2014).
Review-Light-Matter-Phosphorene J. Lu, J. Yang, A. Carvalho, H. Liu, Y. Lu, and C. H. Sow, Acc. Chem. Res. 49, 1806 (2016).
Rudenko-Katsnelson-Ph-NoStrain A. N. Rudenko and M. I. Katsnelson, Phys. Rev. B 89, 201408 (2014).
Rodin-Carvalho-CastroNeto-Ph-NoStrain A. S. Rodin, A. Carvalho, and A. H. Castro Neto, Phys. Rev. Lett. 112, 176801 (2014).
Casimir-Torque-Phosphorene P. Thiyam, P. Parashar, K. V. Shajesh, O. I. Malyi, M. Bostrom, K. A. Milton, I. Brevik, and C. Persson, Phys. Rev. Lett. 120, 131601 (2018).
Phosphorene-SE-Twisting H. Mu, T. Wang, D. Zhang, W. Liu, T. Yu, and Q. Liu, Opt. Express 2, 1037 (2021).
Phosphorene-SE-NLayer B. Sikder, S. H. Mayem, and S. Z. Uddin, Opt. Express 26, 47152 (2022).
Phosphorene-SE-PRAplied-2019 E. van Veen, A. Nemilentsau, A. Kumar, R. Roldan, M. I. Katsnelson, T. Low, and S. Yuan, Phys. Rev. Appl. 12, 014011 (2019).
Phosphorene-SE-Bilayer L. Sun, G. Zhang, S. Zhang, and J. Ji, Opt. Express 13, 14270 (2017).
Peeters-StrainPhosphorene-Model E. Taghizadeh Sisakht, F. Fazileh, M. H. Zare, M. Zarenia, and F. M. Peeters, Phys. Rev. B 94, 085417 (2016).
Midtvedt-Lewenkopf-Croy-2DMat D. Midtvedt, C. H. Lewenkopf, and A. Croy, 2D Mater. 3, 011005 (2016).
Midtvedt-Lewenkopf-Croy-JPCM D. Midtvedt, C. H. Lewenkopf, and A. Croy, J. Phys.: Condens. Matter 29, 185702 (2017).
Flexibility-Phosphorene-1 Q. Wei and X. Peng, Appl. Phys. Lett. 104, 251915 (2014).
Flexibility-Phosphorene-2 X. Peng, Q. Wei, and A. Copple, Phys. Rev. B 90, 085402 (2014).
Exp-Strain-Ph-1 S. Huang, G. Zhang, F. Fan, C. Song, F. Wang, Q. Xing, C. Wang, H. Wu, and H. Yan, Nat. Commun. 10, 2447 (2019).
Exp-Strain-Ph-2 J. Quereda, P. San-Jose, V. Parente, L. Vaquero-Garzon, A. J. Molina-Mendoza, N. Agrait, G. Rubio-Bollinger, F. Guinea, R. Roldan, and A. Castellanos-Gomez, Nano Lett. 16, 2931 (2016).
Alidoust-Akola-PRB-2021 M. Alidoust, E. E. Isachsen, K. Halterman, and J. Akola, Phys. Rev. B 104, 115144 (2021).
Yan-Zhang-Wang-Zhang-Optcond C. H. Yang, J. Y. Zhang, G. X. Wang, and C. Zhang, Phys. Rev. B 97, 245408 (2018).
Li-Peeters-2018 L. L. Li and F. M. Peeters, Phys. Rev. B 97, 075414 (2018).
Li-Peeters-2017 L. L. Li, D. Moldovan, P. Vasilopoulos, and F. M. Peeters, Phys. Rev. B 95, 205426 (2017).
Faria-Junior-Low-energyModel P. E. Faria Junior, M. Kurpas, M. Gmitra, and J. Fabian, Phys. Rev. B 100, 115203 (2019).
Low-Rodin-Optcond T. Low, A. S. Rodin, A. Carvalho, Y. Jiang, H. Wang, F. Xia, A. H. Castro Neto, Phys. Rev. B 90, 075434 (2014).
Non-Local_x_Local-PF-Abajo R. Petersen, T. G. Pedersen, and F. Javier García de Abajo, Phys. Rev. B 96, 205430 (2017).
Szilard_SE_VO2 D. Szilard, W. J. M. Kort-Kamp, F. S. S. Rosa, F. A. Pinheiro, and C. Farina, J. Opt. Soc. Am. B 36, C46 (2019).
Purcell-Phosphorene-OptExpress-Sikder B. Sikder, S. Hasan Nayem, and S. Zia Uddin, Opt. Express 26, 47152 (2022).
Das-Roelofs-ACS-Nano-2014 S. Das, M. Demarteau, and A. Roelofs, ACS Nano 8, 11730 (2014).
Hulet-Kleppner-dipoleTHz-PRL-1985 R. G. Hulet, E. S. Hilfer, and D. Kleppner, Phys. Rev. Lett. 55, 2137 (1985).
Gaudreau-Koppens-NanoLett-2013 L. Gaudreau, K. J. Tielrooij, G. E. D. K. Prawiroatmodjo, J. Osmond, F. J. García de Abajo, and F. H. L. Koppens, Nano Lett. 13, 2030 (2013).
Szilard-PRB-2016 D. Szilard, W. J. M. Kort-Kamp, F. S. S. Rosa, F. A. Pinheiro, and C. Farina, Phys. Rev. B 94, 134204 (2016).
HarrisonBook W. A. Harrison, Elementary Electronic Structure (World Scientific, Singapore, 1999).
Novko-Opticond-Phosphorene D. Novko, K. Lyon, D. J. Mowbray, and V. Despoja, Phys. Rev. B 104, 115421 (2021).
Moshayedi-Opticond-Phosphorene M. Moshayedi, M. R. Preciado Rivas, and Z. L. Miskovic, Phys. Rev. B 105, 075429 (2022).
Cysne-Rappoport-DisorderGraphene T. P. Cysne, T. G. Rappoport, A. Ferreira, J. M. Viana Parente Lopes, and N. M. R. Peres, Phys. Rev. B 94, 235405 (2016).
Zhu-Zhang-Li-Relaxation-Time L. Zhu, G. Zhang, and B. Li, Phys. Rev. B 90, 214302 (2014).
Lv-Lu-Relaxation-Time H. Y. Lv, W. J. Lu, D. F. Shao, and Y. P. Sun, Phys. Rev. B 90, 085433 (2014).
Guinea-Martin-Moreno-PRR2019 Tetiana M. Slipchenko, Jurgen Schiefele, Francisco Guinea, and Luis Martin-Moreno, Phys. Rev. Res. 1, 033049 (2019).
DL-SiC-PaliK_book E. W. Palik, Handbook of Optical Constants of Solids (Academic Press, San Diego, 1985).
Moreno-FresnelCoefficients M. Moreno, Phys. Rev. A 93, 013832 (2016).
|
http://arxiv.org/abs/2307.02123v1
|
20230705085954
|
Reflectionless pseudospin-1 Dirac systems via Darboux transformation and flat band solutions
|
[
"Vit Jakubsky",
"Kevin Zelaya"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.mes-hall",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.02886v1
|
20230706094122
|
Misfit layer compounds as ultra-tunable field effect transistors: from charge transfer control to emergent superconductivity
|
[
"Ludovica Zullo",
"Giovanni Marini",
"Tristan Cren",
"Matteo Calandra"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
Misfit layer compounds are heterostructures composed of rocksalt units stacked with few-layer transition metal dichalcogenides. They host Ising superconductivity, charge density waves and good thermoelectricity. The design of the emergent properties of misfits is, however, hindered by the lack of a global understanding of the electronic transfer among the constituents. Here, by performing first principles calculations, we unveil the mechanism controlling the charge transfer and demonstrate that rocksalt units are always donors and dichalcogenides always acceptors. We show that misfits behave as a periodic arrangement of ultra-tunable field effect transistors where a charging as large as ≈6×10^14 e^-cm^-2 can be reached and controlled efficiently by the La-Pb alloying in the rocksalt. Finally, we identify a strategy to design emergent superconductivity and demonstrate its applicability in (LaSe)_1.27(SnSe_2)_2. Our work paves the way to the design and synthesis of misfit compounds with tailored physical properties.
The capability of inducing a controlled and tunable number of carriers in few-layer systems has been pivotal for the success of 2D materials <cit.>. However, in metallic few-layer 2D dichalcogenides such as NbSe_2, the largest carrier doping that can be achieved via field effect gating is of the order of n_e≈ 3× 10^14 e^- cm^-2 <cit.>, corresponding to a Fermi level shift of the order of 0.1 eV, too small to drastically change the physical properties.
Recently <cit.>, it has been shown that overcoming this limit is possible in the misfit layer compound (MLC) (LaSe)_1.14 (NbSe_2)_2, a heterostructure composed of periodically alternating rocksalt monochalcogenide units (RS) and few-layer transition metal dichalcogenides (TMDs) <cit.>. In this system, a massive electron transfer from the LaSe RS to the NbSe_2 TMD occurs, leading to a rigid Fermi level shift as large as +0.55 eV. It is, however, unclear if the electron doping in misfits can be in some way controlled by any physical parameter and, more importantly, how general this mechanism for doping few-layer TMDs is.
MLCs have been known for a long time and their structures as a function of the RS and TMD composition have been thoroughly investigated <cit.>. However, the exploration of physical properties such as Ising superconductivity <cit.>, charge density waves (CDW) <cit.>, or topological effects <cit.> is quite recent. The research in the field has led to remarkable results but it has mostly proceeded by isolated discoveries and trial-and-error chemical synthesis, while general rules to understand what happens when assembling different RS and TMDs are missing. The need for a global picture becomes evident when considering that (i) many ternary alloys composed of monochalcogenides can be assembled with practically any few-layer dichalcogenide, and (ii) the thickness of the dichalcogenide layers can be chosen at will. This yields a large number of possible combinations and leads to many unanswered questions.
For example, how does the charge transfer occur in these structures? Are the TMD layers acceptors or donors? How can the charge transfer be tuned? To what extent is the electronic structure of the TMD affected when inserted in the heterostructure? Most importantly, what are the emergent properties of the misfit, i.e. properties of the MLC that are absent in the pristine constituents? How can we design misfit properties from the knowledge of their building blocks?
In this work we answer these questions by performing extensive first principles electronic structure calculations of MLCs. We identify the fundamental mechanism ruling charge transfer and demonstrate how the charge injection into the TMD layers can be efficiently controlled by chemical alloying in the rocksalt unit. Most importantly, we show that superconductivity can emerge in MLCs formed by assembling non-superconducting RS and TMDs. Finally, we demonstrate that misfit layer compounds can be regarded as ultra-tunable field-effect transistors with an unequaled charging of the TMD layers. Our work paves the way to extensive experimental synthesis and development of these promising systems.
The chemical formula of MLCs is (RQ)_1+δ(TX_2)_m, where (TX_2)_m is an m-layer TMD and RQ is a rocksalt monochalcogenide unit (often referred to as Q-layer) <cit.>. Ternary alloys of two monochalcogenides within a single RS Q-layer (e.g. La_xSr_1-xS) have also been synthesized <cit.>, leading to MLCs having chemical formulas of the kind (R_xM_1-xQ)_1+δ(TX_2)_m. As a prototypical example of the MLC crystal structure we consider (LaSe)_1.18(TiSe_2)_2, shown in Fig. <ref> (a) and (b). Each TiSe_2 and LaSe sublattice has its
own set of cell parameters. Compared to bulk 1T-TiSe_2, the lattice of the TiSe_2 bilayer in the MLC is not perfectly hexagonal and is slightly expanded along one direction. As a
consequence, the TiSe_2 sublattice is described by a centered
orthorhombic cell with in-plane lattice vectors a_1 ≈ 3.6 Å and
b_1 ≈ 6 Å. The LaSe sublattice has also an orthorhombic symmetry but with similar in-plane lattice parameters a_2 ≈ b_2 ≈ 6 Å. Both systems have the same b vectors (b_1≈ b_2) so that the material is commensurate along this direction. The ratio between the norms of the a_1 and a_2 vectors sharing the same direction is an irrational number (see tables in Fig. 1 and 2 in Supplemental Material) making the MLC incommensurate in the a direction. The mismatch ratio a_2/a_1=x/y is usually in the range ∼ 1.6-1.8 and sets the parameter δ in the chemical formula through the relation 1+δ=2× (a_1/a_2).
In this work we adopt the convention of using the value of δ as obtained from the lattice parameters a_1 and a_2 of the pristine RS and TMD before assembling them in an MLC structure, as reported in the tables in Figs. 1-2 in the Supplemental Material. The commensurate approximant of each MLC considered in the current work is reported in Fig. 3 in the Supplemental Material.
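As a simple worked example of this convention: a mismatch ratio a_2/a_1 = 7/4, close to the value characterizing the NbSe_2-based compounds discussed below, gives 1+δ = 2×(a_1/a_2) = 8/7 ≈ 1.14, consistent with stoichiometries such as (PbSe)_1.14(NbSe_2)_2.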
The RS layers have a strong intralayer bonding. A strong bonding also forms between the RS and TMD layers. On the contrary, van der Waals bonding occurs between neighbouring TMD layers. After cleavage, for m>1, the surface of the sample is a perfect TMD layer (a single layer in the m=2 case considered in this work <cit.>).
In the m=1 case, i.e., a single-layer TMD sandwiched between RS Q-layers, the bonding along the z axis is always strong. As a result, the cleavage occurs across the RS-TMD bonds and the surface is still a TMD single layer; however, it is often less clean and presents several steps and defects <cit.>. In all cases, there is substantial experimental evidence <cit.> that ARPES and STS/STM measurements mostly sample the terminating TMD layer without accessing the bulk of the structure. On the contrary, Raman, transport and superconducting measurements probe bulk properties of the crystal.
In order to gain insight into the charge transfer between the RS and TMD layers in the MLC and its relevance for the electronic structure measurements (ARPES), we perform extensive calculations of the work functions of 8 isolated rocksalt Q-layers and 12 isolated TMD single layers. The choice of considering TMD single layers is motivated by (i) the fact that we consider MLCs with m=2 having a single-layer TMD as the terminating surface and (ii) by the fact that the work functions of bilayer TMDs are fairly close to those of single layers <cit.>. Thus, we expect that our results will also hold for the surface and the bulk and for the m=1 case. Calculations are performed with the quantum ESPRESSO <cit.> package and we use the PBE exchange and correlation functional <cit.> (see SI for more technical details). Results are shown in Fig. <ref>.
The key quantities ruling the charge transfer in these systems are the work function difference between RS and TMDs and the consequent band alignment, the lattice mismatching ratio a_2/a_1 and, finally, the degree of hybridization when the two subsystems are in contact.
As shown in Fig. <ref>, the TMDs globally possess substantially larger work functions than the RS compounds. As the work function is the energy required to transfer an electron from the Fermi level to the vacuum level, RS are always donors and TMDs always acceptors. The net amount of charge transfer depends, however, not only on the work function difference but also on the mutual concentration of the RS and TMD, which is related to the mismatching ratio. To explain this more clearly, each RS can transfer a given amount of charge to the TMD layer if the mismatching ratio is close to one. However, if the mismatching ratio increases, the relative concentration of RS atoms per TMD cell decreases, and so does the charge transfer. By looking at Fig. 3 in the SI, it is clear that the mismatching ratio varies mostly due to the change in the TMD lattice parameter.
In order to demonstrate this global picture, we perform explicit calculations for several misfit surfaces terminated by a NbSe_2 single layer but having different RS Q-layers as building blocks and sharing comparable mismatching ratios very close to 7/4 (these compounds all belong to the ninth column in the table in Fig. 3 in the Supplemental Material).
As can be seen in Fig. <ref>, the behaviour of the (LaSe)_1.15 (NbSe_2)_2, (BiSe)_1.14(NbSe_2)_2, (PbSe)_1.14(NbSe_2)_2 and (SnSe)_1.16(NbSe_2)_2 series is almost completely characterized by the work function differences. Indeed, as W(LaSe)<W(SnSe)<W(PbSe), the charge transfer decreases as the difference W(NbSe_2)-W(RS) progressively decreases, as expected. The work function of BiSe is slightly larger than that of SnSe; however, BiSe seems to transfer a few more electrons than SnSe.
We attribute this to the metallic character of BiSe and the consequent stronger hybridization occurring between BiSe and NbSe_2, resulting in a substantial band deformation of the pristine NbSe_2, as shown in Fig. <ref>.
Finally, we point out that the NbSe_2 electronic structure in going from (PbSe)_1.14(NbSe_2)_2 to (LaSe)_1.15 (NbSe_2)_2 is n-doped rigidly, i.e. the charge transfer simply induces a Fermi level upshift. From this analysis two questions arise: how general is this rigid doping effect and how can it be used to effectively tune the doping? We now show that it is possible to engineer the misfit in such a way that the doping level is rigidly adjustable through appropriate alloying of the RS Q-layer.
For this reason we consider MLCs having the following stoichiometry
(La_xPb_1-xSe)_1.18(TiSe_2)_2 as a function of x. We point out that similar substitutions (La↔Sr) have already been achieved in sulfur-based MLCs <cit.>. A comparison between this system and the previous results for the NbSe_2 series will allow us to draw conclusions that are less dependent on the chosen TMD.
From the previous reasoning and from Fig. <ref>, we expect that the La concentration (x) allows one to tune the carrier concentration in the TiSe_2 layers, with x=1 (x=0) corresponding to the highest (lowest) n-doping.
In Fig. <ref>, we show the calculated band structure of the full (La_xPb_1-xSe)_1.18(TiSe_2)_2 misfit for x=1.0,0.34,0.0. We also plot (red continuous line) the electronic structure for an isolated single layer. The position of the bottom of the Ti d-band of the isolated single layer is aligned to the corresponding band in the misfit. As can be seen, the doping increases with increasing x. Most importantly, the Ti d-band displays no deformation upon doping. At the highest doping level (x=1, corresponding to a charge transfer of 0.53 electrons per Ti, which is n_e∼ 5 × 10^14 e^- cm^-2) two parabolic La bands cross the Fermi level along the ΓK direction. These bands disappear upon decreasing x (see the SI for calculations at additional values of x). Remarkably, the electronic structure of (PbSe)_1.18(TiSe_2)_2 is almost indistinguishable from that of the isolated TiSe_2 layer.
Despite this similarity in the electronic structure, we find that (PbSe)_1.18(TiSe_2)_2 does not display a 2× 2 CDW as happens in the case of the supported TiSe_2 single layer <cit.>. This result is in agreement with resistivity data on this MLC <cit.>, where no CDW was detected. We attribute the suppression of the CDW to the strong bonding between TiSe_2 and the RS Q-layer. We find that in (La_xPb_1-xSe)_1.18(TiSe_2)_2, for x≠ 0,1, the Ti-Ti distances are modulated by the presence of Pb atoms in the host LaSe lattice (i.e. the Ti-Ti distance becomes shorter if the Ti atoms are close to a Pb atom).
The reason is mostly steric: as the La atomic radius is larger than that of Pb, Pb atoms are more strongly bound to the RS layer and a consequent deformation of the LaSe rocksalt host occurs (as shown in Fig. <ref> (b)) followed by a modulation of the Ti-Ti distances. We verified that even starting from 2× 2 distorted TiSe_2 layers in the misfit, the structural optimization suppresses the CDW and leads to other distortion patterns that essentially follow the Pb-atom superstructure. Our analysis shows that altering the chemical composition of the rocksalt has a double effect: on the one hand, it allows one to precisely tune the rigid doping of the TMD; on the other hand, it suppresses the 2× 2 CDW of the TiSe_2 bilayer and introduces an additional modulation related to the alternation of La and Pb.
After achieving a complete knowledge of the charge transfer in MLCs, we now demonstrate how to design a misfit superconductor starting from its constituents. In particular, we show that non-superconducting pristine RS and TMD compounds can lead to a superconductor via charge transfer control (emergent superconductivity).
We consider the layered indirect gap semiconductor 1T-SnSe_2, which can be exfoliated and synthesized in single-layer form <cit.>. The electronic structure of a SnSe_2 single layer is shown in Fig. <ref> (red line). The conduction band is formed by an isolated band with a Van Hove singularity at K. A maximum in the density of states occurs at the energy corresponding to the band flattening. If the Fermi level is tuned to the inflection point, this would be beneficial for superconductivity. However, this involves a ≈ 1.4 eV Fermi level shift corresponding to a charge transfer of 0.77 electrons (≈ 6×10^14e^-cm^-2), unreachable even in an ionic-liquid-based field effect transistor. However, as previously shown, this electron doping level could be reached in the misfit (La_xPb_1-xSe)_1.27(SnSe_2)_2.
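As a rough order-of-magnitude check (assuming an SnSe_2 in-plane lattice constant of about 3.8 Å, hence a hexagonal cell area of √3 a^2/2 ≈ 12.6 Å^2), 0.77 electrons per formula unit indeed corresponds to 0.77/12.6 Å^-2 ≈ 6×10^14 e^- cm^-2.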
In order to confirm this hypothesis, we perform first principles calculations for this MLC as a function of x (see Fig. 8 in SI ).
We find that the La-Pb alloying allows a perfect control of the doping level due to the large work function difference between LaSe and SnSe_2, and an insulator-to-metal transition occurs in SnSe_2.
At x=1.00 the Fermi level perfectly matches the inflection point, as shown in Fig. <ref>.
It is worth noting that, at this high La concentration, some LaSe bands cross the Fermi level close to the K point and along ΓK; however, their contribution to the total density of states is marginal.
In Fig. <ref> we also compare the MLC surface electronic structure with that of an isolated layer (red line). As can be seen, there is a substantial band distortion with respect to the isolated single layer. A better description of the surface electronic structure is obtained by replacing the LaSe layer with a uniformly positively charged potential barrier, as in a single-gate field-effect transistor setup, using the method developed in Ref. <cit.>. The electronic structure of an isolated SnSe_2 layer under this approximation is the green line in Fig. <ref>, in perfect agreement with the complete calculation of the MLC surface electronic structure, both as concerns the band bending at the Fermi level (some deviations are seen in the empty states close to zone center) and the position of the valence band top. We attribute the band-bending occurring at the K high-symmetry point to a modification
of the intralayer spacing between Sn and Se in SnSe_2 due to the charging of the monolayer (a table with intralayer spacing comparisons can be found in Fig. 7 of SI).
This result shows that it is possible, via Pb/La alloying in the RS layers, to set the Fermi level at the Van Hove singularity. Furthermore, it shows that the LaSe Q-layer can be regarded as a capacitor plate in a Field Effect Transistor (FET) (see Fig. <ref>(a)). This remains true even for the SnSe_2 bilayers in the bulk of the sample, i.e. the full MLC can be viewed as several field-effect transistors stacked periodically along the z-axis of the MLC, as shown in Fig. <ref>.
As superconductivity is a bulk property, we must simulate the complete 3D crystal. The calculation of the vibrational properties and electron-phonon coupling for the complete MLC is, however, a very cumbersome task due to the large number of atoms. We then proceed differently, namely we consider a SnSe_2 bilayer in a field effect configuration as in Fig. <ref> with a +0.7 charge on each of the two plates (double gate configuration). In order to prevent the ions from moving too close to the gate electrodes, a potential barrier is placed before the gates, and the total charge of the system is maintained equal to zero <cit.>. Additional details on these calculations can be found in the SI.
We have verified that this approach gives geometries for the SnSe_2 bilayer in excellent agreement with the complete MLC structural optimization. Furthermore the electronic density of states of the MLC and that of the monolayer in double gate configuration are practically indistinguishable, as shown in Fig. <ref>.
We then calculate the phonon dispersion (ω_𝐪ν) and the electron-phonon coupling λ_𝐪ν for each mode ν of phonon crystal momentum 𝐪 in double gate geometry. From these quantities we obtain the Eliashberg function α^2F(ω)=1/2 N_q∑_𝐪νλ_𝐪νω_𝐪νδ(ω-ω_𝐪ν) and the
average electron-phonon coupling λ=1/N_q∑_𝐪νλ_𝐪ν=0.6, N_q being the number of points in the phonon momentum grid used to calculate the average (we used a 96×96×1 𝐪-grid, see the SI).
These quantities are plotted in Fig. <ref> (b).
Approximately 30% of the coupling arises from the Einstein optical modes
at ≈ 45-50 cm^-1, while the rest of the coupling is uniformly distributed throughout the other modes. The phonon density of states (not shown) is very similar to the Eliashberg function.
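The assembly of α^2F(ω) and λ from per-mode data (ω_qν, λ_qν) is straightforward bookkeeping; the sketch below illustrates it on a reduced grid with randomly generated stand-in mode data (not the calculated SnSe_2 bilayer values), together with the consistency check λ = 2∫α^2F(ω)/ω dω.

```python
import numpy as np

rng = np.random.default_rng(0)
n_q, n_modes = 24 * 24, 9                       # reduced grid (the paper uses 96x96x1)
omega_qv  = rng.uniform(20.0, 250.0, size=(n_q, n_modes))    # mode frequencies (cm^-1)
lambda_qv = rng.exponential(0.07, size=(n_q, n_modes))       # mode couplings (stand-ins)

w_grid = np.linspace(0.0, 300.0, 600)           # frequency grid (cm^-1)
smear  = 4.0                                    # Gaussian broadening (cm^-1)
gauss  = lambda x: np.exp(-(x / smear) ** 2) / (np.sqrt(np.pi) * smear)

# alpha^2F(w) = 1/(2 N_q) sum_{q,v} lambda_qv * omega_qv * delta(w - omega_qv)
a2F = np.zeros_like(w_grid)
for w_m, lam_m in zip(omega_qv.ravel(), lambda_qv.ravel()):
    a2F += 0.5 * lam_m * w_m * gauss(w_grid - w_m) / n_q

lam_direct = lambda_qv.sum() / n_q                              # lambda = (1/N_q) sum lambda_qv
lam_a2F = 2.0 * np.trapz(a2F / np.maximum(w_grid, 1e-6), w_grid)
print(f"lambda (direct sum) = {lam_direct:.3f}")
print(f"lambda (from a2F)   = {lam_a2F:.3f}")
```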
We calculate the superconducting critical temperature
by solving the anisotropic Migdal-Eliashberg equations <cit.>, as implemented in the EPIq software <cit.>, and by assuming μ^* = 0.1, obtaining a superconducting critical temperature of T_c=3.5 K (see the SI for details on Migdal-Eliashberg calculations). This result matches well with the T_c=4.8 K detected in ultrathin Li-intercalated SnSe_2 via field effect gating and demonstrates that superconductivity can emerge in MLCs from pristine components that are not superconducting.
In conclusion, by performing extensive first principles electronic structure calculations on misfit layer compounds, we unveiled the mechanism ruling charge transfer in these systems. In particular, due to their large work functions, we showed that TMDs are always acceptors while rocksalts are always donors. The electron density that can be injected into the TMD layers can be as high as 6×10^14 e^-cm^-2, considerably larger than in ordinary field-effect transistors.
We have shown that the charging of the TMD layers can be efficiently controlled via the La↔Pb substitution. Most interestingly, by replacing each RS Q-layer with a charged plate and a barrier, we have shown that the surface of the MLC behaves as a single-gated field-effect transistor while the bulk can be seen as a periodic arrangement of double-gated field-effect transistors.
Finally and most importantly, we have shown that from the knowledge of the RS and TMD constituents it is possible to infer the amount of charge transfer to the TMD layers in the MLC and to predict the physical properties of the heterostructure. As a practical demonstration, we showed that emergent superconductivity occurs in (LaSe)_1.27(SnSe_2)_2 via a 1.4 eV Fermi level shift induced by the presence of RS Q-layers in the misfit.
The methodology developed in this work paves the way to the synthesis and design of misfit compounds with
tailored physical properties.
§ ACKNOWLEDGEMENTS
We acknowledge EuroHPC for awarding us access to the LUMI supercomputer (grant number 465000468). We acknowledge support from the European Union's Horizon 2020 research and innovation programme Graphene Flagship under grant agreement No 881603.
§ SUPPORTING INFORMATION
Contains:
* I. Geometrical Details of MLCs.
* II. Technical details.
* III. Band Alignment Calculation.
* IV. Band unfolding method applied to (La_xPb_1-xSe)_1.18(TiSe_2)_2.
* V. Doping-induced Superconductivity.
[Wu et al.(2023)Wu, Li, Wu, Hwang, and Cui]Wu2023
Wu, Y.; Li, D.; Wu, C.-L.; Hwang, H. Y.; Cui, Y. Electrostatic gating and
intercalation in 2D materials. Nature Reviews Materials 2023,
8, 41–53
[Xi et al.(2016)Xi, Berger, Forró, Shan, and
Mak]PhysRevLett.117.106801
Xi, X.; Berger, H.; Forró, L.; Shan, J.; Mak, K. F. Gate Tuning of Electronic
Phase Transitions in Two-Dimensional NbSe_2. Phys. Rev.
Lett. 2016, 117, 106801
[Leriche et al.(2021)Leriche, Palacio-Morales, Campetella,
Tresca, Sasaki, Brun, Debontridder, David, Arfaoui, Šofranko, Samuely,
Kremer, Monney, Jaouen, Cario, Calandra, and Cren]MisfitsMCTC2021
Leriche, R. T. et al. Misfit Layer Compounds: A Platform for Heavily
Doped 2D Transition Metal Dichalcogenides. Advanced Functional
Materials 2021, 31, 2007706
[Wiegers(1996)]WIEGERS19961
Wiegers, G. Misfit layer compounds: Structures and physical properties.
Progress in Solid State Chemistry 1996, 24,
1–139
[Rouxel et al.(1995)Rouxel, Meerschaut, and
Wiegers]WIEGERS_ROUXEL_MLC_1995
Rouxel, J.; Meerschaut, A.; Wiegers, G. Chalcogenide misfit layer compounds.
Journal of Alloys and Compounds 1995, 229,
144–157
[Samuely et al.(2021)Samuely, Szabó, Kačmarčík,
Meerschaut, Cario, Jansen, Cren, Kuzmiak, Šofranko, and Samuely]TristanSupercondMLCs
Samuely, P.; Szabó, P.; Kačmarčík, J.; Meerschaut, A.;
Cario, L.; Jansen, A. G. M.; Cren, T.; Kuzmiak, M.; Šofranko, O.; Samuely, T. Extreme in-plane upper critical magnetic
fields of heavily doped quasi-two-dimensional transition metal
dichalcogenides. Phys. Rev. B 2021, 104, 224507
[Giang et al.(2010)Giang, Xu, Hor, Williams, Dutton,
Zandbergen, and Cava]BOBCAVA_MLCsSupercond
Giang, N.; Xu, Q.; Hor, Y. S.; Williams, A. J.; Dutton, S. E.;
Zandbergen, H. W.; Cava, R. J. Superconductivity at 2.3 K in the misfit
compound (PbSe)_1.16(TiSe_2)_2. Phys. Rev.
B 2010, 82, 024503
[Kim et al.(2021)Kim, Yun, Song, and
Rhyee]SUPERCOND_MISFITS_KIM20211
Kim, J. H.; Yun, J. H.; Song, Y. J.; Rhyee, J.-S. Anisotropic thermoelectric
and superconducting properties of the bulk misfit-layered
(SnSe)_1.17(TaSe_2) compound. Current Applied Physics
2021, 28, 1–6
[Šofranko et al.(2020)Šofranko, Leriche, Morales,
Cren, Sasaki, Cario, Szabo, Samuely, and Samuely]Sofranko2020
Šofranko, O.; Leriche, R.; Morales, A.; Cren, T.; Sasaki, S.; Cario, L.;
Szabo, P.; Samuely, P.; Samuely, T. Periodic Surface Modulation of
(LaSe)_1.14(NbSe_2) Observed by Scanning Tunneling Microscopy.
Acta Physica Polonica A 2020, 137, 785–787
[Yang et al.(2019)Yang, Ma, Lv, Hu, Sun, Li, Qiao, Wu, Tao,
Cao, and Xu]SUPERCOND_MISFITS_Yang_2019
Yang, X.; Ma, J.; Lv, B.; Hu, H.; Sun, T.; Li, M.; Qiao, L.; Wu, S.; Tao, Q.;
Cao, G.-H.; Xu, Z.-A. Enhanced superconductivity in a misfit compound
(PbSe)_1.12 (TaSe_2)_2 with double TaSe_2 layers.
Europhysics Letters 2019, 128, 17004
[Grosse et al.(2016)Grosse, Alemayehu, Falmbigl, Mogilatenko,
Chiatti, Johnson, and Fischer]SupercondGrosse2016
Grosse, C.; Alemayehu, M. B.; Falmbigl, M.; Mogilatenko, A.; Chiatti, O.;
Johnson, D. C.; Fischer, S. F. Superconducting ferecrystals: turbostratically
disordered atomic-scale layered (PbSe)_1.14(NbSe_2)_n thin films.
Scientific Reports 2016, 6, 33457
[Atkins et al.(2013)Atkins, Disch, Jones, Haeusler, Grosse,
Fischer, Neumann, Zschack, and Johnson]CDW_MISFIT_ATKINS2013
Atkins, R.; Disch, S.; Jones, Z.; Haeusler, I.; Grosse, C.; Fischer, S. F.;
Neumann, W.; Zschack, P.; Johnson, D. C. Synthesis, structure and electrical
properties of a new tin vanadium selenide. Journal of Solid State
Chemistry 2013, 202, 128–133
[Trump et al.(2014)Trump, Livi, and
McQueen]BiSe_TiSe2_NO_CDW_TRUMP2014
Trump, B. A.; Livi, K. J.; McQueen, T. M. The new misfit compound
(BiSe)_1.15(TiSe_2)_2 and the role of dimensionality in the
Cu_x(BiSe)_1+δ(TiSe_2)_n series. Journal of Solid
State Chemistry 2014, 209, 6–12
[Falmbigl et al.(2015)Falmbigl, Putzky, Ditto, and
Johnson]CDW_MISFIT_FALMBIGL2015
Falmbigl, M.; Putzky, D.; Ditto, J.; Johnson, D. Influence of interstitial V on
structure and properties of ferecrystalline
([SnSe]_1.15)_1(V_1+xSe_2)_n for n=1, 2, 3, 4, 5, and 6.
Journal of Solid State Chemistry 2015, 231,
101–107
[Göhler et al.(2022)Göhler, Ramasubramanian, Rajak, Rösch,
Schütze, Wolff, Cordova, Johnson, and Seyller]CDW_MISFITS_2022
Göhler, F.; Ramasubramanian, S.; Rajak, S. K.; Rösch, N.; Schütze, A.;
Wolff, S.; Cordova, D. L. M.; Johnson, D. C.; Seyller, T. Modulation doping
and charge density wave transition in layered PbSe–VSe_2 ferecrystal
heterostructures. Nanoscale 2022, 14,
10143–10154
[Pei et al.(2023)Pei, Zhu, Li, Zhao, Gao, Li, Zhu, Zhang, Ying,
Gu, Gao, Gou, Yao, Sun, Liu, Chen, Wang, Yao, and Qi]ZHU_2023_MISFITS
Pei, C. et al. Pressure-Induced Superconductivity in Topological
Heterostructure (PbSe)_5(Bi_2Se_3)_6. 2023;
<https://arxiv.org/abs/2301.01120>
[Luo et al.(2016)Luo, Yan, Pletikosic, Xie, Phelan, Valla, and
Cava]TOPOL_MISFITS_CAVA2016
Luo, H.; Yan, K.; Pletikosic, I.; Xie, W.; Phelan, B. F.; Valla, T.;
Cava, R. J. Superconductivity in a Misfit Phase That Combines the Topological
Crystalline Insulator Pb_1-xSn_xSe with the CDW-Bearing Transition
Metal Dichalcogenide TiSe_2. Journal of the Physical Society of
Japan 2016, 85, 064705
[Cario et al.(1997)Cario, Johrendt, Lafond, Felser, Meerschaut,
and Rouxel]CarioPhysRevB.55.9409
Cario, L.; Johrendt, D.; Lafond, A.; Felser, C.; Meerschaut, A.; Rouxel, J.
Stability and charge transfer in the misfit compound
(LaS)(SrS)_0.2CrS_2: Ab initio band-structure
calculations. Phys. Rev. B 1997, 55, 9409–9414
[Kim and Choi(2021)Kim, and Choi]Band_Alignment_Kim_2021
Kim, H.-g.; Choi, H. J. Thickness dependence of work function, ionization
energy, and electron affinity of Mo and W dichalcogenides from DFT and GW
calculations. Phys. Rev. B 2021, 103, 085404
[Giannozzi et al.(2020)Giannozzi, Baseggio, Bonfà, Brunato,
Car, Carnimeo, Cavazzoni, de Gironcoli, Delugas, Ferrari Ruffino, Ferretti,
Marzari, Timrov, Urru, and Baroni]QE
Giannozzi, P.; Baseggio, O.; Bonfà, P.; Brunato, D.; Car, R.; Carnimeo, I.;
Cavazzoni, C.; de Gironcoli, S.; Delugas, P.; Ferrari Ruffino, F.;
Ferretti, A.; Marzari, N.; Timrov, I.; Urru, A.; Baroni, S. Quantum ESPRESSO
toward the exascale. The Journal of Chemical Physics 2020,
152, 154105
[Perdew et al.(1996)Perdew, Burke, and Ernzerhof]PBE
Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made
Simple. Physical Review Letters 1996, 77,
3865–3868
[Kolekar et al.(2018)Kolekar, Bonilla, Ma, Diaz, and
Batzill]2DMaterials-2018-substrate-dependent-TC
Kolekar, S.; Bonilla, M.; Ma, Y.; Diaz, H. C.; Batzill, M. Layer- and
substrate-dependent charge density wave criticality in 1T-TiSe_2.
2D Materials 2018, 5, 015006
[Wang et al.(2018)Wang, Chen, Duchamp, Zeng, Wang, Tsang, Li,
Jing, Yu, Teo, and Liu]Wang2018-advmat
Wang, H.; Chen, Y.; Duchamp, M.; Zeng, Q.; Wang, X.; Tsang, S. H.; Li, H.;
Jing, L.; Yu, T.; Teo, E. H. T.; Liu, Z. Large-Area Atomic Layers of the
Charge-Density-Wave Conductor TiSe_2. Advanced Materials
2018, 30, 1704382
[Fang et al.(2017)Fang, Hong, Chen, and
Chiang]FangPhysRevB.95.201409
Fang, X.-Y.; Hong, H.; Chen, P.; Chiang, T.-C. X-ray study of the
charge-density-wave transition in single-layer
TiSe_2. Phys. Rev. B 2017, 95,
201409
[Fu et al.(2022)Fu, Zhao, Zhou, Wu, Du, Wang, Song, Zhu, Zhou,
Huan, Bao, Wang, Zhang, and Zhang]Advanced_Materials_Interfaces_Fu_2022
Fu, J.; Zhao, L.; Zhou, L.; Wu, K.; Du, J.; Wang, X.; Song, J.; Zhu, L.;
Zhou, F.; Huan, Y.; Bao, L.; Wang, R.; Zhang, Q.; Zhang, Y. Controllable
Synthesis of Atomically Thin 1T-SnSe_2 Flakes and Its Linear Second
Harmonic Generation with Layer Thickness. Advanced Materials
Interfaces 2022, 9, 2102376
[Sohier et al.(2017)Sohier, Calandra, and Mauri]FET
Sohier, T.; Calandra, M.; Mauri, F. Density functional perturbation theory for
gated two-dimensional heterostructures: Theoretical developments and
application to flexural phonons in graphene. Phys. Rev. B
2017, 96, 075448
[Allen and Mitrović(1983)Allen, and Mitrović]MigdalEliashberg
Allen, P. B.; Mitrović, B. In Theory of Superconducting Tc;
Ehrenreich, H., Seitz, F., Turnbull, D., Eds.; Solid State Physics; Academic
Press, 1983; Vol. 37; pp 1–92
[Marini and Calandra(2022)Marini, and Calandra]Marini_2023
Marini, G.; Calandra, M. Phonon mediated superconductivity in field-effect
doped molybdenum dichalcogenides. 2D Materials 2022,
10, 015013
[Calandra et al.(2010)Calandra, Profeta, and
Mauri]calandraprofetamauri
Calandra, M.; Profeta, G.; Mauri, F. Adiabatic and nonadiabatic phonon
dispersion in a Wannier function approach. Phys. Rev. B 2010,
82, 165111
|
http://arxiv.org/abs/2307.02266v1
|
20230705130432
|
Preparation of two-qubit entangled states on a spin-1/2 Ising-Heisenberg diamond spin cluster by controlling the measurement
|
[
"A. R. Kuzmak"
] |
quant-ph
|
[
"quant-ph"
] |
Preparation of two-qubit entangled states
on a spin-1/2 Ising-Heisenberg diamond spin cluster
by controlling the measurement
A. R. Kuzmak
E-Mail: [email protected]
Department for Theoretical Physics, Ivan Franko National University of Lviv,
12 Drahomanov St., Lviv, UA-79005, Ukraine
The preparation of entangled quantum states is an inherent and indispensable step for the implementation of many quantum information algorithms. Depending on the physical system, there are different ways to control and measure it, which allow one to achieve predefined quantum states. The diamond spin cluster is a system that can be applied for this purpose. Moreover, such a system appears in chemical compounds such as the natural mineral azurite, where the Cu^2+ ions are arranged in a spin-1/2 diamond chain. Herein, we propose a method for the preparation of pure entangled states on the Ising-Heisenberg spin-1/2 diamond cluster. We suppose that the cluster consists of two central spins which are described by an anisotropic Heisenberg model and interact with the side spins via an Ising interaction. Controlling the measurement direction of the side (central) spins allows us to achieve predefined pure quantum states of the central (side) spins. We show that this directly affects the entanglement and fidelity of the prepared states. For example, we obtain conditions and fidelities for the preparation of the Bell states.
§ INTRODUCTION
The preparation of entangled states plays a crucial role in the implementation of quantum information algorithms <cit.>. Entangled states are an integral part of
quantum cryptography <cit.>, super-dense coding <cit.>, quantum teleportation <cit.>,
quantum calculations <cit.>, optimization of quantum calculations <cit.>, etc.
All these schemes require specific physical systems that can be easily controlled and measured. The following systems are used for this purpose: polarized photons <cit.>, nuclear and electronic spins of atoms <cit.>, superconducting qubits <cit.>, trapped ions <cit.>, ultracold atoms <cit.>, etc. In recent years, the preparation of entangled states and their application to quantum information algorithms have also been widely studied on quantum computers <cit.>.
One of the systems that can be applied in quantum information is the diamond spin cluster formed by four spins. Many compounds contain diamond spin clusters arranged in chains. For instance, there are the following copper-based compounds:
Ca_3Cu_3(PO_4)_4, Sr_3Cu_3(PO_4)_4 <cit.>, Bi_4Cu_3V_2O_14 <cit.>, Cu_3(CO_3)_2(OH)_2 (also called the natural mineral azurite) <cit.>.
The Cu^2+ ions in the natural mineral azurite form a spin-1/2 diamond chain. Recently, quantum properties of these systems, such as entanglement, have been actively examined.
In papers <cit.>, the bipartite entanglement between spins in a diamond spin cluster in thermodynamic equilibrium was studied.
Bose and Tribedi provided the first calculations of thermal entanglement in the diamond spin cluster <cit.>. Thermal entanglement of a spin-1/2 Ising-Heisenberg symmetrical diamond chain was studied for the first time by Ananikian et al. <cit.>. They calculated the behaviour of entanglement as a function of the system parameters. They also showed that for a dominant Heisenberg-type interaction the system's ground state is maximally entangled, but on increasing the temperature pure quantum correlations disappear.
For other types of interaction between spins such as XXZ-Heisenberg <cit.>, XYZ-Heisenberg <cit.>, Ising-XYZ distorted diamond <cit.>, etc. the behaviour of thermal entanglement was also investigated.
Recently, thermal entanglement, local quantum uncertainty, and quantum coherence in a four-qubit square chain described by the Heisenberg XXZ Hamiltonian were exactly examined <cit.>.
The authors studied the influence of the Hamiltonian parameters on these criteria and on the fidelity of teleportation.
In our previous paper <cit.>, for the first time, we studied the bipartite entanglement of the Ising-Heisenberg diamond spin-1/2 cluster in evolution.
Control of quantum systems plays an important role in the preparation of states. Different types of systems are controlled in a specific way. The evolution of spin systems
is controlled by the values of the interaction between spins and of external magnetic fields. The predefined states are achieved by measurements of the system at a certain moment of time.
The technique which allows one to control and measure such systems is called the spin resonance technique <cit.>.
Implementation of quantum states on different spin systems was widely studied in papers <cit.>.
In this paper we consider the preparation of two-qubit pure entangled states on the central (side) spins of the diamond cluster by controlling the measurement directions of side (central) spins.
The diamond spin cluster consists of two central spins described by anisotropic Heisenberg Hamiltonian which interact with two side spins via the Ising model (Sec. <ref>). In Secs. <ref>, <ref>, the achievement of entangled states on central and side spins is studied. The conditions for achieving maximally entangled states are obtained. For example, the preparation of Bell states on the central and side spins is considered.
§ MODEL OF A DIAMOND SPIN CLUSTER
We consider the diamond spin cluster that consists of two central spin-1/2 particles S_a, S_b, described by an anisotropic Heisenberg Hamiltonian, and two side spin-1/2 particles S_1, S_2 (Fig. <ref>).
Interaction of the central spins with the side spins is defined by the Ising model.
The Hamiltonian of the whole system is expressed by three terms that mutually commute
H=H_ab+H_12+H_int,
where
H_ab=J(S_a^xS_b^x+S_a^yS_b^y)+J_zS_a^zS_b^z+h'(S_a^z+S_b^z),
H_12=h(S_1^z+S_2^z),
H_int=J_0(S_a^z+S_b^z)(S_1^z+S_2^z).
Here S_α=1/2(σ_α^x i+σ_α^y j+σ_α^z k) is the operator of the α-th spin (α=a,b,1,2), with σ^x, σ^y, σ^z the Pauli matrices, J and J_z are the coupling constants between the a and b spins,
J_0 is a coupling constant which defines the interaction between the S_a, S_b and S_1, S_2 pairs of spins, and h', h describe the interaction of the central and side spins, respectively, with an external magnetic field.
We use the system of units, where the Planck constant is ħ =1. This means that the energy is measured in units of the frequency. The Hamiltonians H_ab (<ref>), H_12 (<ref>)
describe the subsystems of S_a, S_b and S_1, S_2 spins, respectively, and the Hamiltonian H_int (<ref>) describes the interaction between those subsystems.
Since these Hamiltonians mutually commute
[H_ab,H_12]=[H_ab,H_int]=[H_12,H_int]=0,
we can easily find eigenvalues and eigenstates of the system (see Appendix <ref>).
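As a numerical cross-check of these statements, the 16x16 Hamiltonian (<ref>) can be assembled from Kronecker products of single-spin operators and diagonalized directly. The sketch below (Python/NumPy) is illustrative only: the coupling values and the tensor-factor ordering (S_1, S_2, S_a, S_b) are arbitrary choices made for the example, not taken from the text.

import numpy as np

# Pauli matrices and spin-1/2 operators S^i = sigma^i / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def op(single, site, n=4):
    """Embed a single-spin operator at position `site` of an n-spin system
    ordered as (S_1, S_2, S_a, S_b)."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Illustrative coupling values (arbitrary units)
J, Jz, J0, h, hp = 1.0, 1.5, 0.7, 0.3, 0.2

S1z, S2z = op(sz, 0), op(sz, 1)
Sax, Sbx = op(sx, 2), op(sx, 3)
Say, Sby = op(sy, 2), op(sy, 3)
Saz, Sbz = op(sz, 2), op(sz, 3)

H_ab = J * (Sax @ Sbx + Say @ Sby) + Jz * Saz @ Sbz + hp * (Saz + Sbz)
H_12 = h * (S1z + S2z)
H_int = J0 * (Saz + Sbz) @ (S1z + S2z)
H = H_ab + H_12 + H_int

# The three parts mutually commute, so they share a common eigenbasis
for A, B in [(H_ab, H_12), (H_ab, H_int), (H_12, H_int)]:
    assert np.allclose(A @ B - B @ A, 0)

energies = np.linalg.eigvalsh(H)
print(np.round(energies, 6))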
The evolution of the diamond spin cluster determined by Hamiltonian (<ref>), having started from the initial state |ψ_I⟩, can be expressed as follows
|ψ(t)⟩=e^-iHt|ψ_I⟩=e^-iHt∑_n C_n|ψ_n⟩=∑_n C_ne^-iE_nt|ψ_n⟩,
where |ψ_n⟩ and E_n is a set of eigenstates and eigenvalues given by expression (<ref>), C_n are the complex parameters that determine the initial state.
Controlling the initial state, the values of the external magnetic field, and the time of evolution, allows us to achieve the predefined final state. Choosing the measurement direction of one subsystem allows us to fix the final pure state of another subsystem. Moreover we can achieve the predefined pure entangled states. Let us consider the preparation of these states on the S_a, S_b spins by controlling the measurement direction of the S_1, S_2 spins, and vice versa.
The copper-based compounds mentioned above form a spin chain. We consider the evolution of a separate diamond spin cluster. In the case of a chain, the interaction between the Heisenberg spins is mediated by the
Ising spins. In other words, the dimers of the chain interact with each other via the Ising spins, which leads to the fact that all spins of the chain affect the state of the selected diamond cluster.
However, when the Ising spins are in eigenstates, the evolution of all dimer spins in the chain is independent. This fact allows one to consider the evolution of a single dimer in a diamond spin cluster (Subsec. <ref>).
§ PREPARATION OF ENTANGLED STATES ON THE S_A, S_B SPINS
In this section, we examine the preparation of quantum states on the S_a and S_b spins. We consider two cases of the evolution of the whole system: 1. the side S_1, S_2 spins do not evolve; 2. the side S_1, S_2 spins evolve. Depending on the measurement of the side spins, we obtain the conditions for the preparation of entangled pure states on the S_a, S_b spins. We calculate the entanglement of these states. For this purpose, we use the Wootters definition of concurrence <cit.>
C(|ψ⟩)=2| ad-bc|,
where a, b, c and d are complex parameters which define the state
|ψ⟩ =a|↑↑⟩+b|↑↓⟩+c|↓↑⟩+d|↓↓⟩.
and satisfy the normalization condition | a|^2+| b|^2+| c|^2+| d|^2=1. The concurrence takes values C∈[0,1]. For separable states it equals C=0, and for maximally entangled states it takes the value C=1.
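For a pure two-qubit state, definition (<ref>) reduces to a one-line computation. A minimal sketch (an illustrative helper, not part of the original text) is

import numpy as np

def concurrence(psi):
    """Wootters concurrence C = 2|ad - bc| of a pure two-qubit state
    psi = (a, b, c, d) written in the basis (uu, ud, du, dd)."""
    a, b, c, d = np.asarray(psi, dtype=complex)
    return 2 * abs(a * d - b * c)

# Sanity checks: a product state and a Bell state
print(concurrence([1, 0, 0, 0]))                             # 0.0
print(concurrence([1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)]))   # 1.0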
§.§ Stationarity of the S_1, S_2 spins
Let us consider the evolution of the S_a, S_b spins without involving the S_1, S_2 spins in the evolution. For this purpose, we prepare the initial state of the system in such a way that the spins S_1 and S_2 do not evolve. Since the Hamiltonians H_ab (<ref>) and H_12 (<ref>) mutually commute, we take the S_1, S_2 subsystem in an eigenstate of Hamiltonian (<ref>). It does not matter which eigenstate of the S_1, S_2 spins we take, because this subsystem does not evolve. Thus, we take the initial state of the whole system in the form
|ψ_I⟩=|↑↑⟩_12(C_1|↑⟩_a+C_2|↓⟩_a)(C_3|↑⟩_b+C_4|↓⟩_b),
where C_1, C_2, C_3 and C_4 are the complex parameters which define the initial state of S_a, S_b spins and satisfy the following normalization conditions: | C_1|^2+| C_2|^2=1,
| C_3|^2+| C_4|^2=1. The initial state can be decomposed by eigenstates (<ref>) as follows
|ψ_I⟩=C_1C_3|ψ_1⟩+C_1C_41/√(2)(|ψ_2⟩+|ψ_3⟩)
+C_2C_31/√(2)(|ψ_2⟩-|ψ_3⟩)+C_2C_4|ψ_4⟩.
Based on the equation (<ref>), the evolution of the system takes the form
|ψ(t)⟩=e^-iHt|ψ_I⟩
=C_1C_3e^-iE_1t|ψ_1⟩+C_1C_41/√(2)(e^-iE_2t|ψ_2⟩+e^-iE_3t|ψ_3⟩)
+C_2C_31/√(2)(e^-iE_2t|ψ_2⟩-e^-iE_3t|ψ_3⟩)+C_2C_4e^-iE_4t|ψ_4⟩
=C_1C_3e^-i(h+J_z/4+h'+J_0)t|ψ_1⟩+C_1C_41/√(2)e^-i(h-J_z/4)t(e^-iJt/2|ψ_2⟩+e^iJt/2|ψ_3⟩)
+C_2C_31/√(2)e^-i(h-J_z/4)t(e^-iJt/2|ψ_2⟩-e^iJt/2|ψ_3⟩)+C_2C_4e^-i(h+J_z/4-h'-J_0)t|ψ_4⟩.
In the basis |↑↑⟩_ab, |↑↓⟩_ab, |↓↑⟩_ab and |↓↓⟩_ab this state can be represented as follows
|ψ(t)⟩=e^-iht|↑↑⟩_12[C_1C_3e^-i(J_z/4+h'+J_0)t|↑↑⟩_ab.
.+e^iJ_z/4t(C_1C_4cos(Jt/2)-iC_2C_3sin(Jt/2))|↑↓⟩_ab.
.+e^iJ_z/4t(C_2C_3cos(Jt/2)-iC_1C_4sin(Jt/2))|↓↑⟩_ab.
. +C_2C_4e^-i(J_z/4-h'-J_0)t|↓↓⟩_ab].
The interaction with the side S_1, S_2 spins affects the central S_a, S_b spins as an effective magnetic field of the value J_0. During the evolution, the states of the side and central spins remain separate. Thus the state of S_a, S_b spins does not depend on the measurements of the S_1 and S_2 spins. Selection of the initial state, system parameters, and the period of evolution allows us to achieve different entangled states. Using definition (<ref>), we calculate the value of entanglement of this state
C(|ψ(t)⟩_ab)
=2| C_1C_2C_3C_4(1-e^iJ_ztcos(Jt))+i/2e^iJ_zt(C_1^2C_4^2+C_2^2C_3^2)sin(Jt)|,
where |ψ(t)⟩_ab is the state of S_a, S_b spins that is separated from the state of S_1, S_2 spins in equation (<ref>). For example, let us obtain the conditions for the preparation of different entangled states.
Suppose that C_1=C_4=1 and C_2=C_3=0; then the initial state has the form
|ψ_I⟩=|↑↑⟩_12|↑↓⟩_ab. The entangled state reached during the evolution of the S_a, S_b spins is the following
|ψ(t)⟩_ab=cos(Jt/2)|↑↓⟩-isin(Jt/2)|↓↑⟩.
From the equation (<ref>) follows that the concurrence of this state has the form
C(|ψ(t)⟩_ab)=|sin(Jt)|.
It takes the maximal values in the moments which satisfy the condition Jt_n=π/2+π n, where n∈Z.
Let us now project the initial state of the S_a, S_b spins onto the xy plane. Then the parameters of the initial state take the form C_1=1/√(2), C_2=e^iϕ_1/√(2), C_3=1/√(2), C_4=e^iϕ_2/√(2),
where ϕ_1 and ϕ_2 are the azimuthal angles of the spherical coordinate system. For these parameters, concurrence (<ref>) reduces to the equation
C(|ψ(t)⟩_ab)=1/2[(cos(J_zt)-cos(Jt))^2+(sin(J_zt)-sin(Jt)cos(ϕ_1-ϕ_2))^2]^1/2.
Controlling the parameters J, J_z, ϕ_1, ϕ_2, and time of evolution allows us to achieve entangled states. For instance, if we put ϕ_1-ϕ_2=0 and (J_z-J)t=(2n+1)π or ϕ_1-ϕ_2=π and (J_z+J)t=(2n+1)π
(n∈Z), we obtain the maximally entangled states of S_a, S_b spins (C(|ψ(t)⟩_ab)=1). Dependencies of concurrence (<ref>) for different values of ϕ_1-ϕ_2 are presented
in Fig. <ref>.
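The dependence shown in Fig. <ref> can be reproduced by evaluating concurrence (<ref>) on a time grid; the sketch below uses illustrative coupling values (an assumption for the example) and checks that the maximum C=1 is indeed reached at (J_z-J)t=π for ϕ_1-ϕ_2=0.

import numpy as np

def C_ab(t, J, Jz, dphi):
    """Concurrence of the central spins for the xy-plane initial state;
    dphi = phi_1 - phi_2."""
    return 0.5 * np.sqrt((np.cos(Jz * t) - np.cos(J * t)) ** 2
                         + (np.sin(Jz * t) - np.sin(J * t) * np.cos(dphi)) ** 2)

J, Jz = 1.0, 2.0                       # illustrative couplings
t = np.linspace(0, 4 * np.pi, 2001)

# dphi = 0: maxima C = 1 are expected at (Jz - J) t = (2n + 1) pi
c0 = C_ab(t, J, Jz, 0.0)
print(np.isclose(c0.max(), 1.0, atol=1e-3))
print(C_ab(np.pi / (Jz - J), J, Jz, 0.0))   # equals 1 at t = pi / (Jz - J)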
§.§ Dynamics of the S_1, S_2 spins
Now we consider the case when the whole system evolves. For this purpose, we prepare the initial state as a projection of the spins in the positive direction of the x-axis. This state can be expressed as follows
|ψ_I⟩=1/4(|↑↑⟩_12+|↑↓⟩_12+|↓↑⟩_12+|↓↓⟩_12)
(|↑↑⟩_ab+|↑↓⟩_ab+|↓↑⟩_ab+|↓↓⟩_ab).
Taking into account equation (<ref>) the evolution of the system takes the form
|ψ(t)⟩=e^-iHt|ψ_I⟩=1/4(e^-iE_1t|ψ_1⟩+√(2)e^-iE_2t|ψ_2⟩ + e^-iE_4t|ψ_4⟩.
.+e^-iE_5t|ψ_5⟩+√(2)e^-iE_6t|ψ_6⟩+e^-iE_8t|ψ_8⟩+ e^-iE_9t|ψ_9⟩+√(2)e^-iE_10t|ψ_10⟩.
.+e^-iE_12t|ψ_12⟩+e^-iE_13t|ψ_13⟩+√(2)e^-iE_14t|ψ_14⟩+e^-iE_16t|ψ_16⟩),
where the initial state is decomposed by the eigenstates of Hamiltonian (<ref>) with eigenvalues E_i (<ref>). We express this state in the following form
|ψ(t)⟩=1/2(|ξ_1⟩_ab|↑↑⟩_12+ |ξ_2⟩_ab(|↑↓⟩_12+|↓↑⟩_12) + |ξ_3⟩_ab|↓↓⟩_12),
where
|ξ_1⟩_ab=1/2[e^-i(J_z/4+J_0+h+h')t|↑↑⟩_ab+e^-i(J/2-J_z/4+h)t(|↑↓⟩_ab+|↓↑⟩_ab)+e^-i(J_z/4-J_0+h-h')t|↓↓⟩_ab],
|ξ_2⟩_ab=1/2[e^-i(J_z/4+h')t|↑↑⟩_ab+e^-i(J/2-J_z/4)t(|↑↓⟩_ab+|↓↑⟩_ab)+e^-i(J_z/4-h')t|↓↓⟩_ab],
|ξ_3⟩_ab=1/2[e^-i(J_z/4-J_0-h+h')t|↑↑⟩_ab+e^-i(J/2-J_z/4-h)t(|↑↓⟩_ab+|↓↑⟩_ab)+e^-i(J_z/4+J_0-h-h')t|↓↓⟩_ab].
Measuring the S_1, S_2 spins on the z-axis, we obtain the central S_a, S_b spins in the states defined by expression (<ref>).
The fidelity of the state |ψ_c⟩ is determined in the following way
F=|⟨ψ(t)|ψ_c⟩|^2.
In Table <ref>, we present the set of measurement outcomes with the corresponding fidelities.
Using the Wootters definition (<ref>), we calculate the value of entanglement of these states. For all three states (<ref>) it takes the form
C=|sin((J_z-J)t/2)|.
The maximally entangled states are achieved at the moment t=(2n+1)π/(J_z-J).
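A quick numerical verification of this concurrence (a sketch with illustrative parameter values) builds |ξ_2⟩_ab from the expression above and compares 2|ad-bc| with |sin((J_z-J)t/2)|:

import numpy as np

def xi2(t, J, Jz, hp):
    """State |xi_2>_ab of the central spins in the basis (uu, ud, du, dd)."""
    return 0.5 * np.array([np.exp(-1j * (Jz / 4 + hp) * t),
                           np.exp(-1j * (J / 2 - Jz / 4) * t),
                           np.exp(-1j * (J / 2 - Jz / 4) * t),
                           np.exp(-1j * (Jz / 4 - hp) * t)])

def concurrence(psi):
    a, b, c, d = psi
    return 2 * abs(a * d - b * c)

J, Jz, hp = 1.0, 2.3, 0.4              # illustrative values
for t in np.linspace(0.0, 10.0, 7):
    assert np.isclose(concurrence(xi2(t, J, Jz, hp)),
                      abs(np.sin((Jz - J) * t / 2)))
print("C(t) = |sin((Jz - J) t / 2)| confirmed numerically")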
In addition to the period of evolution and the parameters included in the Hamiltonian, the direction in which the S_1 and S_2 spins are measured affects the form of achieved states. This fact is easy to show when the state of side spins is rewritten in the basis defined by some direction. Let us assume that this direction is defined by the spherical angles θ and ϕ. Then states of spin-1/2 projected in this direction have the form
| +⟩=cos(θ/2)|↑⟩+sin(θ/2)e^iϕ|↓⟩, | -⟩=-sin(θ/2)e^-iϕ|↑⟩+cos(θ/2)|↓⟩,
where | +⟩ and | -⟩ are the states which correspond to the positive and negative projections, respectively. The relations between the bases of the S_1, S_2 spins are presented in Appendix <ref>.
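Numerically, a measurement of the side spins along an arbitrary direction can be sketched as a projection of the 16-dimensional state onto the corresponding |±⟩ product state. The helper below is illustrative: the tensor-factor ordering S_1⊗S_2⊗S_a⊗S_b and the test state are assumptions made for the example.

import numpy as np

def plus_minus(theta, phi):
    """Single-spin states |+>, |-> along the direction (theta, phi)."""
    plus = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])
    minus = np.array([-np.sin(theta / 2) * np.exp(-1j * phi), np.cos(theta / 2)])
    return plus, minus

def project_side(psi, outcome, theta, phi):
    """Project a 4-spin state psi (ordered as S_1 x S_2 x S_a x S_b, dim 16)
    onto the side-spin outcome, e.g. ('+', '-'), measured along (theta, phi).
    Returns the outcome probability and the normalized state of S_a, S_b."""
    plus, minus = plus_minus(theta, phi)
    vecs = {'+': plus, '-': minus}
    side = np.kron(vecs[outcome[0]], vecs[outcome[1]])   # 4-dim side-spin vector
    amp = np.conj(side) @ psi.reshape(4, 4)              # contract side-spin indices
    prob = np.vdot(amp, amp).real
    return prob, amp / np.sqrt(prob)

# Example: measure the side spins of |uu>_12 |uu>_ab along theta = pi/2, phi = 0
psi = np.zeros(16, dtype=complex); psi[0] = 1.0
p, post = project_side(psi, ('+', '+'), np.pi / 2, 0.0)
print(p, np.round(post, 3))                              # p = 0.25, post = |uu>_ab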
Substituting expressions (<ref>) in state (<ref>), we rewrite it in the form
|ψ(t)⟩=1/4(1/A_1|ψ_1⟩_ab| ++⟩_12+ 1/A_2|ψ_2⟩_ab(| +-⟩_12+| -+⟩_12) + 1/A_3|ψ_3⟩_ab|–⟩_12).
Measuring the S_1, S_2 spins on the | +⟩, | -⟩ basis, the S_a, S_b spins are reduced to one of the following three states
|ψ_1⟩_ab=A_1[e^-i(J_z/4+h')t(cos(θ/2)e^-i(J_0+h)t/2+sin(θ/2)e^i((J_0+h)t/2-ϕ))^2|↑↑⟩_ab
+e^-i(J/2-J_z/4)t(cos(θ/2)e^-iht/2+sin(θ/2)e^i(ht/2-ϕ))^2(|↑↓⟩_ab+|↓↑⟩_ab)
+e^-i(J_z/4-h')t(cos(θ/2)e^i(J_0-h)t/2+sin(θ/2)e^-i((J_0-h)t/2+ϕ))^2|↓↓⟩_ab],
|ψ_2⟩_ab=A_2[e^-i(J_z/4+h')t(cosθ+isinθsin((h+J_0)t-ϕ))|↑↑⟩_ab
+e^-i(J/2-J_z/4)t(cosθ+isinθsin(ht-ϕ))(|↑↓⟩_ab+|↓↑⟩_ab)
+e^-i(J_z/4-h')t(cosθ+isinθsin((h-J_0)t-ϕ))|↓↓⟩_ab],
|ψ_3⟩_ab=A_3[e^-i(J_z/4+h')t(cos(θ/2)e^i(J_0+h)t/2-sin(θ/2)e^-i((J_0+h)t/2-ϕ))^2|↑↑⟩_ab
+e^-i(J/2-J_z/4)t(cos(θ/2)e^iht/2-sin(θ/2)e^-i(ht/2-ϕ))^2(|↑↓⟩_ab+|↓↑⟩_ab)
+e^-i(J_z/4-h')t(cos(θ/2)e^-i(J_0-h)t/2-sin(θ/2)e^i((J_0-h)t/2+ϕ))^2|↓↓⟩_ab].
The amplitudes of these states read
A_1=[(1+sinθcos(J_0t+ht-ϕ))^2+2(1+sinθcos(ht-ϕ))^2.
.+(1+sinθcos(J_0t-ht+ϕ))^2]^-1/2,
A_2=[4cos^2θ+sin^2θsin^2(J_0t+ht-ϕ)+2sin^2θsin^2(ht-ϕ).
.+sin^2θsin^2(J_0t-ht+ϕ)]^-1/2,
A_3=[(1-sinθcos(J_0t+ht-ϕ))^2+2(1-sinθcos(ht-ϕ))^2.
.+(1-sinθcos(J_0t-ht+ϕ))^2]^-1/2.
Using equation (<ref>), the fidelities of achieved states can be calculated. The results of measurements with corresponding fidelities are presented in table <ref>.
As we can see from expressions (<ref>), changing the angles θ, ϕ, the value of the magnetic field h, and the period of evolution allows us to control the fidelities of the states prepared on the S_a, S_b spins. As a demonstration, let us find the conditions for the preparation of the Bell states on the S_a and S_b spins.
§.§ Preparation of the Bell states on the S_a, S_b spins
Controlling the direction of measurement of the side S_1, S_2 spins, the value of the external magnetic field, and the period of evolution, we can achieve predefined states of the S_a, S_b spins. For example, let us find conditions that allow one to achieve the Bell states on the S_a, S_b spins. It is easy to see from state (<ref>) that the condition cosθ/2=-sinθ/2e^i(ht-ϕ) reduces this state to the subspace spanned by |↑↑⟩_ab, |↓↓⟩_ab vectors. Since the parameters θ and ϕ take values θ∈[0,π] and ϕ∈[0,2π], this equation has the following solutions
θ=π/2, ht-ϕ=π.
Then states (<ref>), (<ref>) and (<ref>) modulo a global phase take the form
|ψ_1⟩_ab=1/√(2)[|↑↑⟩_ab + e^2ih't|↓↓⟩_ab],
|ψ_2⟩_ab=1/√(2)[|↑↑⟩_ab+e^i(2h't+π)|↓↓⟩_ab],
|ψ_3⟩_ab=1/√(2)√(cos^4(J_0t/2)+1)[e^-i(J_z/4+h')tcos^2(J_0t/2)(|↑↑⟩_ab+e^2ih't|↓↓⟩_ab).
.+e^-i(J/2-J_z/4)t(|↑↓⟩_ab+|↓↑⟩_ab) ].
States (<ref>), (<ref>) are maximally entangled and become the Bell states |Φ^±⟩ for h't=πn/2, where n∈Z.
Entanglement of state (<ref>) depends on time and coupling constant between spins as follows
C(|ψ_3⟩_ab)=1/1+cos^4(J_0t/2)[1+cos^8(J_0t/2)-2cos^4(J_0t/2)cos(J_z-J)t]^1/2.
In the case of J_0t=π+2π n, modulo a global phase this state becomes the |Ψ^+⟩ Bell state. Based on the definitions of amplitudes (<ref>) and equations from table <ref>,
we obtain the fidelities of each of the states (<ref>), (<ref>), (<ref>) after the measurement of the side spins. Substituting parameters (<ref>) into these equations
we obtain
F(|ψ_1⟩_ab)=1/2sin^4(J_0t/2),
F(|ψ_2⟩_ab)=1/4sin^2(J_0t),
F(|ψ_3⟩_ab)=1/2(cos^4(J_0t/2)+1).
These dependencies change with the period 2π with respect to the parameter J_0t (see Fig. <ref>). As we can see, at the moment T=π/J_0 we obtain, with fidelity 0.5, both the |ψ_1⟩_ab (<ref>) and |Ψ^+⟩ states. It also follows from expression (<ref>) that the S_1, S_2 spins should then be measured in the direction defined by the spherical angles θ=π/2, ϕ=(h/J_0-1)π. In addition, in order to achieve the |Φ^+⟩ and |Φ^-⟩ Bell states from the state (<ref>), a magnetic field of value h'=0 or J_0/2, respectively, should be applied to the S_a, S_b spins. The |Φ^±⟩ Bell states can be obtained from the state (<ref>) with fidelity 0.25 if we measure the side spins at the moments T=π/(2J_0) and 3π/(2J_0). In this case, the side spins should be measured in the directions defined by the sets of parameters: 1. θ=π/2, ϕ=(h/(2J_0)-1)π for T=π/(2J_0) and 2. θ=π/2, ϕ=(3h/(2J_0)-1)π for T=3π/(2J_0). Then the magnetic field h' should be given the values: 1. h'=0 for the |Φ^-⟩ Bell state and 2. h'=J_0 for the |Φ^+⟩ Bell state. Finally, it is worth noting that a similar situation arises when we impose the conditions θ=π/2 and ht-ϕ=0 on the measurement direction. In this case, the state |ψ_1⟩_ab takes the form defined by expression (<ref>) and vice versa.
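The curves of Fig. <ref> follow directly from expressions (<ref>)-(<ref>); a short numerical sketch (illustrative only) also confirms that the three outcome fidelities sum to one and that F(|ψ_1⟩_ab)=F(|ψ_3⟩_ab)=0.5 at J_0t=π.

import numpy as np

x = np.linspace(0, 2 * np.pi, 1001)            # x = J_0 t
F1 = 0.5 * np.sin(x / 2) ** 4                  # fidelity of the |++> outcome (state psi_1)
F2 = 0.25 * np.sin(x) ** 2                     # fidelity of the |+-> / |-+> outcome (state psi_2)
F3 = 0.5 * (np.cos(x / 2) ** 4 + 1)            # fidelity of the |--> outcome (state psi_3)

# The listed fidelities sum to one over the whole period
assert np.allclose(F1 + F2 + F3, 1.0)
print(F1[500], F3[500])                        # at J_0 t = pi both equal 0.5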
Finally, we depict the dependence of concurrence (<ref>) of the state |ψ_3⟩ on time and on the ratio between the interaction parameters J_0 and J_z-J (Fig. <ref>).
It is easy to see that the stronger the anisotropy and the interaction between the S_a and S_b spins, the faster the state |ψ_3⟩ becomes maximally entangled.
§ PREPARATION OF ENTANGLED STATES ON THE S_1, S_2 SPINS
In this section, we examine the states prepared on the S_1, S_2 spins depending on the measurement direction of the S_a, S_b spins. For this purpose, we rewrite state (<ref>) in the following way
|ψ(t)⟩=1/2(|ϕ_1⟩_12|↑↑⟩_ab+ |ϕ_2⟩_12(|↑↓⟩_ab+|↓↑⟩_ab) + |ϕ_3⟩_12|↓↓⟩_ab),
where we introduce the following notations
|ϕ_1⟩_12=1/2[e^-i(J_z/4+J_0+h+h')t|↑↑⟩_12+e^-i(J_z/4+h')t(|↑↓⟩_12+|↓↑⟩_12)+e^-i(J_z/4-J_0-h+h')t|↓↓⟩_12],
|ϕ_2⟩_12=1/2[e^-i(J/2-J_z/4+h)t|↑↑⟩_12+e^-i(J/2-J_z/4)t(|↑↓⟩_12+|↓↑⟩_12)+e^-i(J/2-J_z/4-h)t|↓↓⟩_12],
|ϕ_3⟩_12=1/2[e^-i(J_z/4-J_0+h-h')t|↑↑⟩_12+e^-i(J_z/4-h')t(|↑↓⟩_12+|↓↑⟩_12)+e^-i(J_z/4+J_0-h-h')t|↓↓⟩_12].
Measuring the spins S_a, S_b in the basis |↑↑⟩_ab, |↑↓⟩_ab, |↓↑⟩_ab, |↓↓⟩_ab with fidelities F(|ϕ_1⟩_12)=1/4, F(|ϕ_2⟩_12)=1/2 and F(|ϕ_3⟩_12)=1/4, we obtain the states |ϕ_1⟩_12, |ϕ_2⟩_12 and |ϕ_3⟩_12 of the S_1, S_2 spins, respectively. The value of entanglement (<ref>) of each of these states is C=0. However, measuring the S_a, S_b spins in another basis allows one to achieve entangled states of the S_1, S_2 spins. Using the relations between the basis states |↑⟩, |↓⟩ and | +⟩, | -⟩ (<ref>) for the S_a, S_b spins, state (<ref>) takes the form (<ref>) (see Appendix <ref>). Measuring the S_a, S_b spins, we find that the achieved states of the S_1, S_2 spins in addition depend on the difference between the interaction couplings J-J_z. For example, in the case of isotropic interaction between the S_a and S_b spins (J=J_z), the achieved states take a form similar to states (<ref>), (<ref>) and (<ref>) with h replaced by h' and vice versa. Here, the Bell states are prepared in the same way as described in the previous section.
§ CONCLUSIONS
We have considered the preparation of entangled pure quantum states on the Ising-Heisenberg diamond spin cluster placed in a magnetic field. This cluster consists of two central spins described by the anisotropic Heisenberg interaction,
which interact with two side spins via the Ising model. It is worth noting that the ions in the copper-based compounds mentioned in the introduction
are arranged in a spin-1/2 diamond chain. The interaction between each spin in the chain is described by the Heisenberg model. However, to simplify calculations, we have considered a simpler model where the side spins interact
with central spins via the Ising model. Depending on the initial state, we have studied the evolution of this system. Namely, we have examined the preparation of entangled states on the central spins when the side spins
are measured, and vice versa. Due to the fact that the parts of the Hamiltonian which describe the central and side spins mutually commute, we can independently study the evolution of one subsystem without intertwining it with the other subsystem.
We have considered such evolution when side spins are in the stationary state. The influence of the side spins on the evolution of the central spins is similar to the presence of an effective magnetic field. We have obtained
the conditions to achieve the entangled states on the central spins.
In the case when the whole system evolves, we have shown that the direction, in which the spins of one subsystem are measured, affects the form and entanglement of the achieved states on the other subsystem.
Firstly, we have investigated the preparation of pure entangled states on the central spins depending on the measurement direction of the side spins. The fidelities of these states as a function of the period of evolution,
parameters of Hamiltonian and the measurement direction of the side spins have been calculated. For example, we have obtained conditions and fidelities for the preparation of the |Φ^±⟩ and |Ψ^+⟩
Bell states. We have also examined the preparation of the states on side spins depending on the measurement direction of the central spins. It has been shown that the entanglement of states achieved on side spins depends
on the measurement direction of the central spins. There is a measurement direction in which all achieved states are separated and another measurement direction that allows one to prepare the maximally entangled states.
§ ACKNOWLEDGEMENTS
This work was supported by Project 77/02.2020 from National Research Foundation of Ukraine.
§ EIGENSTATES AND EIGENVALUES OF THE DIAMOND SPIN CLUSTER
Due to the fact that Hamiltonians (<ref>), (<ref>) and (<ref>) mutually commute, we can easily obtain the eigenstates and corresponding eigenvalues of Hamiltonian (<ref>).
These eigenstates and eigenvalues have the following form
|ψ_1⟩ = |↑↑⟩_ 12|↑↑⟩_ab, E_1=h+J_z/4+h'+J_0,
|ψ_2⟩ = |↑↑⟩_121/√(2)(|↑↓⟩+|↓↑⟩)_ab, E_2=h+J/2-J_z/4,
|ψ_3⟩ = |↑↑⟩_121/√(2)(|↑↓⟩-|↓↑⟩)_ab, E_3=h-J/2-J_z/4,
|ψ_4⟩ = |↑↑⟩_ 12|↓↓⟩_ab, E_4=h+J_z/4-h'-J_0,
|ψ_5⟩ = |↑↓⟩_ 12|↑↑⟩_ab, E_5=J_z/4+h',
|ψ_6⟩ = |↑↓⟩_121/√(2)(|↑↓⟩+|↓↑⟩)_ab, E_6=J/2-J_z/4,
|ψ_7⟩ = |↑↓⟩_121/√(2)(|↑↓⟩-|↓↑⟩)_ab, E_7=-J/2-J_z/4,
|ψ_8⟩ = |↑↓⟩_ 12|↓↓⟩_ab, E_8=J_z/4-h',
|ψ_9⟩ = |↓↑⟩_ 12|↑↑⟩_ab, E_9=J_z/4+h',
|ψ_10⟩ = |↓↑⟩_121/√(2)(|↑↓⟩+|↓↑⟩)_ab, E_10=J/2-J_z/4,
|ψ_11⟩ = |↓↑⟩_121/√(2)(|↑↓⟩-|↓↑⟩)_ab, E_11=-J/2-J_z/4,
|ψ_12⟩ = |↓↑⟩_ 12|↓↓⟩_ab, E_12=J_z/4-h',
|ψ_13⟩ = |↓↓⟩_ 12|↑↑⟩_ab, E_13=-h+J_z/4+h'-J_0,
|ψ_14⟩ = |↓↓⟩_121/√(2)(|↑↓⟩+|↓↑⟩)_ab, E_14=-h+J/2-J_z/4,
|ψ_15⟩ = |↓↓⟩_121/√(2)(|↑↓⟩-|↓↑⟩)_ab, E_15=-h-J/2-J_z/4,
|ψ_16⟩ = |↓↓⟩_ 12|↓↓⟩_ab, E_16=-h+J_z/4-h'+J_0.
The states of subsystems are indicated by the subscripts. The states of S_1, S_2 and S_a, S_b spins are denoted by the subscripts 12 and ab, respectively.
§ RELATIONS BETWEEN |↑⟩, |↓⟩ AND | +⟩, | -⟩ BASIS STATES OF TWO SPINS
In this appendix, we present the relations between different basis states of two spins. Equations (<ref>) determine the states of spin-1/2 projected in the positive and negative direction of the axis defined by the spherical angles θ and ϕ. The inverse relations to states (<ref>) have the form
|↑⟩=cos(θ/2)| +⟩-sin(θ/2)e^iϕ| -⟩, |↓⟩=sin(θ/2)e^-iϕ| +⟩+cos(θ/2)| -⟩.
Using these relations the basis states of two S_1, S_2 spins can be rewritten as follows
|↑↑⟩_12=cos^2(θ/2)| ++⟩_12 -cos(θ/2)sin(θ/2)e^iϕ(|+-⟩_12+| -+⟩_12)
+sin^2(θ/2)e^2iϕ|–⟩_12,
|↑↓⟩_12=cos(θ/2)sin(θ/2)e^-iϕ| ++⟩_12 +cos^2(θ/2)|+-⟩_12
-sin^2(θ/2)| -+⟩_12-cos(θ/2)sin(θ/2)e^iϕ|–⟩_12,
|↓↑⟩_12=cos(θ/2)sin(θ/2)e^-iϕ| ++⟩_12 -sin^2(θ/2)|+-⟩_12
+cos^2(θ/2)| -+⟩_12-cos(θ/2)sin(θ/2)e^iϕ|–⟩_12,
|↓↓⟩_12=sin^2(θ/2)e^-2iϕ| ++⟩_12 +cos(θ/2)sin(θ/2)e^-iϕ(|+-⟩_12+| -+⟩_12)
+cos^2(θ/2)|–⟩_12.
§ STATE OF THE SYSTEM IN THE BASIS | +⟩, | -⟩ FOR S_A AND S_B SPINS
In this appendix, using relations between basis states |↑⟩, |↓⟩ and | +⟩, | -⟩ (<ref>) for S_a, S_b spins, we
rewrite state (<ref>) in the form
|ψ(t)⟩=1/4e^-i(J_z/4+h')t
×[ e^-iht(cos^2(θ/2)e^-iJ_0t + sin(θ)e^-i(ϕ-h't)e^-i(J/2-J_z/2)t + sin^2(θ/2)e^-2i(ϕ-h't)e^iJ_0t) |↑↑⟩_12.
.+(cos^2(θ/2) + sin(θ)e^-i(ϕ-h't)e^-i(J/2-J_z/2)t + sin^2(θ/2)e^-2i(ϕ-h't))( |↑↓⟩_12+|↓↑⟩_12).
.+e^iht(cos^2(θ/2)e^iJ_0t + sin(θ)e^-i(ϕ-h't)e^-i(J/2-J_z/2)t + sin^2(θ/2)e^-2i(ϕ-h't)e^-iJ_0t) |↓↓⟩_12]| ++⟩_ab
+1/4e^-iJ_z/4t[ e^-iht(cosθ e^-i(J/2-J_z/2)t + i sinθsin((h'+J_0)t-ϕ) ) |↑↑⟩_12.
.+(cosθ e^-i(J/2-J_z/2)t + i sinθsin(h't-ϕ) )( |↑↓⟩_12+|↓↑⟩_12) .
.+e^iht(cosθ e^-i(J/2-J_z/2)t + i sinθsin((h'-J_0)t-ϕ) ) |↓↓⟩_12](| +-⟩_ab+| -+⟩_ab)
+1/4e^-i(J_z/4-h')t
×[ e^-iht(sin^2(θ/2)e^2i(ϕ-h't)e^-iJ_0t - sin(θ)e^i(ϕ-h't)e^-i(J/2-J_z/2)t + cos^2(θ/2)e^iJ_0t) |↑↑⟩_12.
.+(sin^2(θ/2)e^2i(ϕ-h't) - sin(θ)e^i(ϕ-h't)e^-i(J/2-J_z/2)t + cos^2(θ/2) )( |↑↓⟩_12+|↓↑⟩_12).
.+e^iht(sin^2(θ/2)e^2i(ϕ-h't)e^iJ_0t - sin(θ)e^i(ϕ-h't)e^-i(J/2-J_z/2)t + cos^2(θ/2)e^-iJ_0t) |↓↓⟩_12]|–⟩_ab.
desurvire2009 E. Desurvire, Classical and Quantum Information Theory: An Introduction for the Telecom Scientist (Cambridge University Press, Cambridge, 2009).
Ekert1991 A. K. Ekert, Phys. Rev. Lett. 67, 661 (1991).
Bennett1992 Ch. H. Bennett, S. J. Wiesner, Phys. Rev. Lett. 69, 2881 (1992).
TELEPORT C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
Zeilinger1997 D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Nature 390, 575 (1997).
cerf1998 N. J. Cerf, C. Adami, P. G. Kwiat, Phys. Rev. A 57, R1477 (1998).
pittman2001 T. B. Pittman, B. C. Jacobs, J. D. Franson, Phys. Rev. A 64, 062311 (2001).
gasparoni2004 S. Gasparoni, J.-W. Pan, Ph. Walther, T. Rudolph, A. Zeilinger, Phys. Rev. Lett. 93, 020504 (2004).
englert2001 Berthold-Georg Englert, Ch. Kurtsiefer, H. Weinfurter, Phys. Rev. A 63, 032303 (2001).
Giovannetti20031 V. Giovannetti, S. Lloyd and L. Maccone, Europhys. Lett. 62, 615 (2003).
Giovannetti20032 V. Giovannetti, S. Lloyd and L. Maccone, Phys. Rev. A 67, 052109 (2003).
Batle2005 J. Batle, M. Casas, A. Plastino and A. R. Plastino, Phys. Rev. A 72, 032337 (2005).
Borras2006 A. Borras, M. Casas, A. R. Plastino and A. Plastino, Phys. Rev. A 74, 022326 (2006).
ASPECT A. Aspect, J. Dalibard, G. Roger, Phys. Rev. Lett. 49, 1804 (1982).
quantcomp David P. DiVincenzo, Fortschr. Phys. 48, 771 (2000).
qdots1 Daniel Loss and David P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
phosphorus3 B. E. Kane, Nature 393, 133 (1998).
phosphorus1 Jarryd J. Pla, Kuan Y. Tan, Juan P. Dehollain, Wee H. Lim, John J. L. Morton, Floris A. Zwanenburg, David N. Jamieson, Andrew S. Dzurak and Andrea Morello, Nature 496, 334 (2013).
kuzmak2020 A. R. Kuzmak, Phys. Scr. 95, 035403 (2020).
supcond1 L. F. Wei, Yu-xi Liu and Franco Nori, Phys. Rev. B 71, 134506 (2005).
supcond2 J. E. Mooij, T. P. Orlando, L. Levitov, Lin Tian, Caspar H. van der Wal and Seth Lloyd, Science 285, 1036 (1999).
supcond3 Yuriy Makhlin, Gerd Schön and Alexander Shnirman, Rev. Mod. Phys. 73, 357 (2001).
supcond4 J. Majer et al., Nature 449, 443 (2007).
SchrodCat1 K. Molmer, A. Sorensen, Phys. Rev. Lett. 82, 1835 (1999).
EQSSTI D. Porras, J. I. Cirac, Phys. Rev. Lett. 92, 207901 (2004).
SchrodCat2 D. Leibfried et al., Nature 438, 639 (2005).
ETDIITIQSHI J. W. Britton, B. C. Sawyer, A. C. Keith, C. C. Joseph Wang, J. K. Freericks, H. Uys, M. J. Biercuk, J. J. Bollinger,
Nature 484, 489 (2012).
QSDEGHTI J. G. Bohnet, B. C. Sawyer, J. W. Britton, M. L. Wall, A. M. Rey, M. Foss-Feig, J. J. Bollinger, Science 352, 1297 (2016).
opticallattice1 L.-M. Duan, E. Demler, M. D. Lukin, Phys. Rev. Lett. 91, 090402 (2003).
opticallattice18 A. B. Kuklov, B. V. Svistunov, Phys. Rev. Lett. 90, 100401 (2003).
opticallattice5 I. Bloch, Many-Body Physics with Ultracold Gases Edited by C. Salomon, G. Shlyapnikov, L. F. Cugliandolo (Oxford University Press, Oxford, UK, 2013), pp. 71-108.
wang2018s Yuanhao Wang, Ying Li, Zhang-qi Yin, Bei Zeng, npj Quant. Inf. 4, 46 (2018).
mooney2019 G. J. Mooney, Ch. D. Hill, L. C. L. Hollenberg, Sci. Rep. 9, 13465 (2019).
kuzmak20201 A. R. Kuzmak, V. M. Tkachuk, Phys. Lett. A 384, 126579 (2020).
arute2019 F. Arute et al., Nature 574, 505 (2019).
kuzmak20202 A. R. Kuzmak, V. M. Tkachuk, Condens. Matter Phys. 23, 43001 (2020).
kuzmak2021 A. R. Kuzmak, V. M. Tkachuk, Eur. Phys. J. Plus 136, 564 (2021).
gnatenko2021 Kh. P. Gnatenko, V. M. Tkachuk, Phys. Lett. A 396, 127248 (2021).
gnatenko20212 Kh. P. Gnatenko, N. A. Susulovska, EPL 136, 40003 (2021).
drillon1988 M. Drillon, E. Coronado, M. Belaiche, R. L. Carlin, J. Appl. Phys. 63, 3551 (1988).
drillon1993 M. Drillon, M. Belaiche, P. Legoll, J. Aride, A. Boukhari, A. Moqine, J. Magn. Magn. Mater. 128, 83 (1993).
sakurai2002 H. Sakurai, K. Yoshimura, K. Kosuge, N. Tsujii, H. Abe, H. Kitazawa, G. Kido, H. Michor, G. Hilscher, J. Phys. Soc. Japan 71, 1161 (2002).
kikuchi2005 H. Kikuchi, Y. Fujii, M. Chiba, S. Mitsudo, T. Klehara, T. Tonegawa, K. Okamoto, T. Sakai, T. Kuwai, H. Ohta, Phys. Rev. Lett. 94, 227201 (2005).
bose2005 I. Bose, A. Tribedi, Phys. Rev. A 72, 022314 (2005).
tribedi2006 A. Tribedi, S. Bose, Phys. Rev. A 74, 012314 (2006).
ananikian2006 N. S. Ananikian, L. N. Ananikyan, L. A. Chakhmakhchyan, O. Rojas, J. Phys.: Condens. Matter 24, 256001 (2012).
ananikian2012 N. Ananikian, H. Lazaryan, M. Nalbandyan, Eur. Phys. J. B 85, 223 (2012).
chakhmakhchyan2012 L. Chakhmakhchyan,, N. Ananikian, L. Ananikyan, C. Burdik, J. Phys.: Conf. Ser. 343, 012022 (2012).
rojas2012 O. Rojas, M. Rojas, N. S. Ananikian, S. M. de Souza, Phys. Rev. A 86, 042330 (2012).
rojas2014 J. Torrico, M. Rojas, S. M. de Souza, O. Rojas, N. S. Ananikian, EPL 108, 50007 (2014).
torrico2016 J. Torrico, M. Rojas, M. S. S. Pereira, J. Strecka, M. L. Lyra, Phys. Rev. B 93, 014428 (2016).
rojas2017 O. Rojas, M. Rojas, S. M. de Souza, J. Torrico, J. Strecka, M. L. Lyra, Physica A 486, 367 (2017).
Zheng2018 Y. Zheng, Z. Mao, B. Zhou, Chin. Phys. B 27, 090306 (2018).
Cavalho2019 I. M. Carvalho, O. Rojas, S. M. de Souza, M. Rojas, Quant. Inf. Process. 18, 134 (2019).
Ghannadan2022 A. Ghannadan, Katarína Karl'ova, J. Strecka, Magnetochemistry 8, 11 (2022).
Benabdallah2022 F. Benabdallah, S. Haddad, H. A. Zad, M. R. Pourkarimi, M. Daoud, N. Ananikian, Sci. Rep. 12:6406 (2022).
kuzmak2023 A. R. Kuzmak, J. Phys. A 56, 165302 (2023).
srtech C. P. Slichter, Principles of Magnetic Resonance (Springer-Verlag, Berlin, 1990).
Vandersypen2004 L. M. K. Vandersypen, I. L. Chuang, Rev. Mod. Phys. 76, 1037 (2004).
Nichol2017 J. M. Nichol, L. A. Orona, Sh. P. Harvey, S. Fallahi, G. C. Gardner, M. J. Manfra, A. Yacoby, npj Quantum Information 3, 3 (2017).
Harvey-Collard2018 P. Harvey-Collard et al, Phys. Rev. X 8, 021046 (2018).
Nagy2019 R. Nagy et al, Nature Communications 10, 1954 (2019).
Kuzmak2014 A .R. Kuzmak, V. M. Tkachuk, Phys. Lett. A 378, 1469 (2014).
Kuzmak2018 A .R. Kuzmak, Int. J. Quan. Inf. 16, 1850044 (2018).
Sahling2015 S. Sahling et al., Nature Physics 11, 255 (2015).
twosqg2 A. R. Kuzmak, V. M. Tkachuk, J. Phys. A 46, 155305 (2013).
twosqg3 N. Khaneja, S. J. Glaser and R. Brockett, Phys. Rev. A 65, 032301 (2002).
twosqg4 T. O. Reiss, N. Khaneja, and S. J. Glaser, J. Magn. Reson 165, 95 (2003).
twosqg5 H. Yuan, N. Khaneja, Phys. Rev. A 72, 040301(R) (2005).
twosqg6 R. Zeier, H. Yuan and N. Khaneja, Phys. Rev. A 77, 032332 (2008).
Zu2014 C. Zu, W.-B. Wang, L. He, w.-G. Zhang, C.-Y. Dai, F. Wang, L.-M. Duan, Nature 514, 72 (2014).
wootters1997 S. A. Hill, W. K. Wootters, Phys. Rev. Lett. 78, 5022 (1997).
wootters1998 W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998).
|
http://arxiv.org/abs/2307.00873v1
|
20230703091057
|
End-To-End Prediction of Knee Osteoarthritis Progression With Multi-Modal Transformers
|
[
"Egor Panfilov",
"Simo Saarakkala",
"Miika T. Nieminen",
"Aleksei Tiulpin"
] |
eess.IV
|
[
"eess.IV",
"cs.CV"
] |
§ INTRODUCTION
Knee osteoarthritis (KOA) is a chronic musculoskeletal disease affecting millions of people worldwide <cit.>. Progression of KOA results in degeneration of knee joint's bony and soft tissues, which is often accompanied by worsening in symptoms <cit.>. Personalized prediction of structural KOA trajectory is important for multiple reasons, including early interventions and development of disease-modifying drugs, however, it is challenging due to high disease heterogeneity and rather poor understanding of KOA phenotypes <cit.>.
Conventionally, the status of the suspected knees is assessed clinically from radiographic images. Weight-bearing X-ray images visualize alterations in bones' shape (e.g. osteophytes) and texture (e.g. subchondral sclerosis) with high contrast, as well as provide indirect measurements of cartilage and menisci degeneration via apparent joint space <cit.>. These are the primary joint changes, and they are highly consistent across subjects with KOA. To date, the most established KOA severity scoring system – Kellgren-Lawrence grading (KLG)<cit.> – is based on radiographic imaging.
Studies published during the past decade have shown that many soft tissue changes, e.g. in cartilage, menisci, ligaments, synovial and adipose tissues, are also associated with OA onset and progression <cit.>. They are not visible in radiographs but can be detected and tracked using Magnetic Resonance Imaging (MRI), which enables three-dimensional imaging of the joint. Knee MRI studies typically include several MR imaging protocols with complementary contrasts, and they target morphological factors in major joint tissues, such as the severity of osteophytes, cartilage thickness, and meniscal and ligament tears. The MRI protocols can be divided into structural – targeting tissue morphology – and compositional MRI – reflecting microstructure and biochemical content. The most apparent morphological changes in soft tissues have been incorporated into advanced grading schemes, such as MOAKS <cit.>, however, utilization of such schemes for studying KOA progression remains limited <cit.>. Quantitative MRI (qMRI) protocols, such as T_2-mapping, have been getting increased attention due to their sensitivity to compositional tissue changes (e.g. collagen anisotropy in cartilage and meniscus in early KOA <cit.>, fatty infiltration of muscles <cit.>) and considerable technology readiness level <cit.>. Overall, despite the rich information provided by multi-sequence MRI in addition to radiography and sensitivity to early tissue changes, the real prognostic utility of MRI and, specifically, qMRI in KOA remains understudied <cit.>.
The vast majority of prior art on MRI in KOA progression prediction operated with limited sample sizes and highly interpretable and localized imaging biomarkers, which are typically extracted via image segmentation and basic radiomics <cit.>. Such conventional biomarkers are designed using a "bottom-up" approach, primarily describing apparent changes that occur in major joint tissues, particularly, in cartilage. As a result, the role of less affected tissues remains unstudied, and it is gaining attention only recently <cit.>. Another limitation of many prior works is that they perform aggressive subject exclusion for the definition of groups, omitting the study participants with mixed and inconsistent findings. This process allows studying the sensitivity of developed biomarkers in the discrimination of small-scale groups, while severely compromising/underestimating their specificity (i.e. generalization) <cit.>. While this knowledge lays the foundation for clinical management of KOA subjects by fine-grained differentiation of disease progression, it does not necessarily answer the question of how the disease will progress in the future in a particular subject from a general population.
Modern computational methods, such as the ones based on Deep Learning (DL), have made possible the analysis of large-scale imaging studies and the development of new personalized prediction models <cit.>. With DL, the design of imaging biomarkers can be seen as "top-down" process. Here, the informative features that are discriminative w.r.t. the defined target are first automatically derived in a data-driven manner <cit.>. Subsequently, the learned features and their interaction are analyzed from the model by factorization of model activations into interpretable concepts defined by a human expert. While interpretability of DL models remains challenging <cit.>, such methods allow to understand the peak performance of certain data in the considered task, long before the clinically applicable biomarkers are designed <cit.>.
In the KOA domain, Tiulpin et al <cit.> have previously shown superior performance of DL applied to raw radiographic images in comparison to demographic variables and gold-standard KLG in the task of radiographic progression prediction. Studies on MRI data analysis in this scope, however, are very sparse. Wang et al <cit.> demonstrated high performance of DL with two MRI protocols in predicting whether the knee will undergo total knee replacement (TKR) within 9 years from the exam. In the same problem, but at a 5-year horizon, Tolpadi et al <cit.> contrasted radiographic and MR images showing a slight advantage of the latter modality. While TKR is regulatory-approved as a KOA endpoint, it is not inherent to the disease, and we argue that it is a noisy progression surrogate. To this end, the recent work of Panfilov et al <cit.> compared X-ray images and structural MRI in the prediction of radiographic KOA progression (increase of KLG as in the work of Tiulpin et al. <cit.>) within 8 years. All in all, KOA forecasting over the short term, which is more valuable for clinical trials, has not been thoroughly addressed. On top of that, the complementary value of clinically accessible imaging modalities, especially, compositional MRI, in identification of progressors remains an open question.
To date, the majority of DL-based multi-modal methods in medical image computing either perform aggressive data dimensionality reduction <cit.> or multi-stage late fusion <cit.>, where the modalities are first processed separately and then combined in a second-level shallow model. Both considerations are applied due to high memory demand in processing typically large medical images.
Accordingly, both of the aforementioned techniques limit the model's capabilities to derive rich and interrelated features. Lately, thanks to advances in computational platforms and DL methods, unified attention-based methods, such as Transformers <cit.>, were developed. Transformers have opened a possibility for holistic modeling in diverse multi-modal scenarios, with little to no modification of the original data <cit.>. In medical imaging, they were shown to often provide higher accuracy, particularly, when used with pre-training or in high volume data setting <cit.>.
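To make the fusion idea concrete, a minimal sketch of a token-based Transformer fusion head is given below (PyTorch). It is not the architecture used in this work: the per-modality encoders, embedding sizes, and the use of a learnable [CLS] token are assumptions made purely for illustration.

import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Token-level fusion of per-modality embeddings with a Transformer encoder.
    Each modality is assumed to be pre-encoded into a fixed-size feature vector
    (e.g. by a CNN for images); only the fusion stage and the classifier are shown."""
    def __init__(self, feat_dims, d_model=256, n_heads=8, n_layers=2, n_classes=2):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in feat_dims])
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):                    # feats: list of (B, feat_dims[i]) tensors
        tokens = [p(f).unsqueeze(1) for p, f in zip(self.proj, feats)]
        x = torch.cat([self.cls.expand(feats[0].size(0), -1, -1)] + tokens, dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])                # classify from the [CLS] token

# Toy usage: XR, structural MRI and T2-map features of different dimensionality
model = MultiModalFusion(feat_dims=[512, 1024, 1024])
logits = model([torch.randn(4, 512), torch.randn(4, 1024), torch.randn(4, 1024)])
print(logits.shape)                              # torch.Size([4, 2])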
In this study, we introduce a multi-modal DL-based method for predicting radiographic KOA progression (hereinafter referred to as "KOA progression") and investigate the value of various modalities in this task. The contributions of our work are three-fold:
* We propose a new end-to-end method to study KOA progression from multi-modal imaging data. We apply the method for prediction of rapid, middle-, and long-term radiographic progression, where we clarify the predictive value of imaging in the task and establish the new baseline models.
* We comprehensively analyze the complementary value of common imaging modalities (X-ray, structural, and compositional MRI) with respect to the considered outcomes. Our study is the first to use the quantitative T_2 maps of MRI in an end-to-end predictive model, and among the few to study compositional MRI in a large-scale setting.
* We analyze the efficacy of the best-performing models across different subject sub-groups and discuss the directions for further development of top-down methods for KOA progression prediction.
§ RESULTS
§.§ Training and testing datasets
Five observation intervals were considered (0-12/24/36/48/96 months) to derive 5 independent datasets from the Osteoarthritis Initiative (OAI) database. The complete sample selection procedure is presented in Figure <ref>. The most common reasons for exclusion were patient dropouts and missing clinical or imaging data. The progression target was defined based on the change in KLG within the considered interval. The knees with no recorded change in KLG were assigned to the "control" group and the ones with observed worsening of KLG - to the "progressor" group. Following the popular research practice, grades KLG0 and KLG1 were pooled together, as the corresponding change is often not considered clinically significant or reliable (KL1 is defined as “doubtful OA”) <cit.>. After grade pooling, a small number of subjects still showed an improvement in KLG, with or without accompanying worsening. To avoid ambiguity in the definition of disease progression, those subjects were excluded from the study. The final sample sizes were 3967, 3735, 3585, 3448, and 2421 for 12m, 24m, 36m, 48m, and 96m intervals, respectively. The ratio of progressors to the total number of subjects was notably higher with longer observation periods - 5.7, 8.4, 11.9, 14.5, and 27.7% for 12m, 24m, 36m, 48m, and 96m, respectively.
The resulting datasets were split into training, validation, and testing subsets. In the OAI, the subjects were observed at multiple data acquisition sites. All the subjects from the site "D" were assigned to the test set. While the acquisition protocols in the OAI are supposed to be standardized between the sites, a small domain shift between the images from different sites is still present. This subject allocation scheme allowed us to additionally model the potential discrepancy between training-time and testing-time images and, thus, make the evaluation more objective. The testing subsets' sample sizes were 1016, 933, 896, 867, and 626 for 12m, 24m, 36m, 48m, and 96m targets, respectively, which is 25-26% of the total sample. The remaining samples were split following a 5-fold stratified cross-validation scheme (≈80/20%) while balancing the ratio of controls and progressors in the training and the validation subsets for each split (no subject-wise overlap between the training and validation sets).
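A minimal sketch of such a split (site-based hold-out plus subject-grouped, target-stratified 5-fold cross-validation) is given below. The table columns and values are toy stand-ins for the OAI data, and scikit-learn >= 1.0 is assumed for StratifiedGroupKFold.

import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

# Toy stand-in for the knee-level table; column names are illustrative only
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(500), 2),          # two knees per subject
    "site": np.random.choice(list("ABCDE"), 1000),
    "progressor": np.random.binomial(1, 0.12, 1000),
})

# Hold out one acquisition site entirely for testing
test = df[df["site"] == "D"]
devel = df[df["site"] != "D"].reset_index(drop=True)

# 5-fold CV stratified by the target and grouped by subject, so that the same
# subject never appears in both the training and the validation part of a split
cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr_idx, va_idx) in enumerate(cv.split(devel, devel["progressor"],
                                                 groups=devel["subject_id"])):
    tr, va = devel.iloc[tr_idx], devel.iloc[va_idx]
    print(fold, len(tr), len(va), tr["progressor"].mean().round(3))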
§.§ Progression prediction from individual modalities
Clinical data and semi-quantitative X-ray assessments
To better understand the predictive power of common clinical risk factors, a set of baseline models was developed. The variables included subject age, sex, BMI, history of past surgeries and injuries, symptomatic score (WOMAC; Western Ontario and McMaster Universities Osteoarthritis Index) <cit.>, as well as routinely assessed radiographic KLG score. The models along with their performance are described in Table <ref>. For the 12-month prediction horizon, adding WOMAC score and history of knee alterations yielded a notable increase of 0.07 in both average ROC AUC (p=0.079) and AP (p=0.042). Inclusion of KLG further improved AP by 0.03 (p=0.030), suggesting an added value of imaging in predicting progression short-term. For 24m-48m horizons, similar findings were observed, however, the predictive power of knee history and WOMAC score decreased, and the additional value of KLG, given other risk factors, was marginal. Interestingly, for 96m horizon, the presence of knee alteration history, WOMAC (model C3), and also KLG (model C4) yielded a notable increase in ROC AUC (0.03 [p=0.031] and 0.05 [p=0.008], respectively) and AP (0.05 [p=0.019] and 0.05 [p=0.023], respectively). Towards longer horizons, the average performance of all models grew faster in AP than the prevalence rate, suggesting that the identification of long-term progressors compared to rapid ones is more feasible. Taking the observed performance benefits of KLG into account, a purely non-imaging model C3 was used as a baseline in subsequent analysis.
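For illustration, a clinical-variable baseline of this kind can be sketched as follows. The choice of logistic regression, the synthetic covariates, and the random labels are assumptions for demonstration only (with random labels the scores are near chance); the snippet is meant to show the evaluation protocol (ROC AUC and AP), not to reproduce the reported models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 3000
# Synthetic stand-ins for the clinical covariates listed above
X = np.column_stack([
    rng.normal(62, 9, n),            # age
    rng.integers(0, 2, n),           # sex
    rng.normal(29, 5, n),            # BMI
    rng.integers(0, 2, n),           # history of injury/surgery
    rng.gamma(2.0, 8.0, n),          # WOMAC total score
    rng.integers(0, 4, n),           # KLG (0/1 pooled)
])
y = rng.binomial(1, 0.10, n)         # toy progression label

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X[:2000], y[:2000])
p = clf.predict_proba(X[2000:])[:, 1]
print("ROC AUC:", roc_auc_score(y[2000:], p).round(3),
      "AP:", average_precision_score(y[2000:], p).round(3))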
Raw X-ray images
End-to-end models trained on raw radiographic images (XR) showed moderate performance at all horizons, as summarized in Table <ref>. Compared to the baseline, the models were inferior in both metrics at 12m and comparable at 24m. From 36m onwards, the models showed higher scores than the baseline, reaching statistically significant (p<0.021) improvements of 0.08 in AP for 48-96m targets.
MRI data
The performance of MRI-based models varied depending on whether structural (DESS/TSE) or compositional (T_2map) protocol was used (see Table <ref>). Structural modalities showed improved performance in ROC AUC - comparable to C3 and higher than X at 12m, and generally higher than both from 24m onward. Most notable increases in average AUC were observed for the 24m and 96m horizons. T_2map-based model M3 showed similar ROC AUCs as the XR one. In terms of AP, all models were similar to X, except for 48m (where the scores were marginally lower by 0.02-0.03) and 96m (where they improved the mean score by notable 0.05-0.07). Of all the observed improvements, the significant ones were found mostly for 96m prediction horizon. Here, all the MRI models were significantly better than the clinical baseline in both metrics (p<0.023). When compared against XR, the structural MRI protocols (M1 [DESS] and M2 [TSE]) also showed higher performance, both in ROC AUC (p=0.020 and p=0.007, respectively) and AP (p=0.138 and p=0.017, respectively). The model M1 was significantly better than the clinical baseline in ROC AUC also at 48m (p=0.030).
§.§ Multi-modal fusion
To clarify the complementary value of the considered imaging modalities, we performed an exhaustive experimental investigation. Here, three sets of models were developed based on the individual modalities studied earlier: fusion of XR with single MRI protocol (XR1MR1), two MRI protocols (MR2), and XR with two MRI protocols (XR1MR2). The best models selected within each setting are summarized in Table <ref> and the complete results including all models can be found in Table <ref>.
Fusion of MRI sequences
A combination of two MRI modalities resulted in only marginal improvement over individual structural MR sequences. Particularly, the fusion of DESS and TSE showed an increase in ROC AUC over individual modalities by 0.03 (p>0.221), but only at the 12m horizon. When either DESS or TSE was used in combination with the T_2map, no clear and consistent differences were observed compared to just the structural MR sequence. Against the individual T_2map modality, the models yielded an increase by 0.02-0.04 of ROC AUC, which was, however, significant (p=0.010) for model F5 at 36m target and insignificant (p>0.057) elsewhere. The same models were able to marginally improve the AP scores at the 12m horizon by 0.02-0.03 (p>0.375) over individual TSE and T_2maps, but not higher than the DESS model. Otherwise, no noticeable difference in AP was observed. Among the MR2 models, DESS with TSE was marginally better for 12-24m horizons in ROC AUC, while DESS with T_2map was more dominant at 36-48m in both metrics.
Fusion of multiple imaging modalities
A combination of radiographic and single-protocol MRI images generally resulted in a performance similar to the latter, yet a few notable improvements were observed in the ROC AUC space. Namely, the F1 model (XR, DESS) showed an increase of 0.11 (p=0.039) and 0.05 (p=0.106) in the score at the 12m horizon compared to the individual XR and MRI DESS modalities, respectively. With the model F3 (XR, T_2map), the gains of 0.03 (p=0.103) and 0.02 (p=0.177) in ROC AUC were observed over M3 at the 48m and 96m horizons. Several performance drops were observed for the model F3 at 12m (by 0.08) and all the models F1-F3 at 24m (by 0.01-0.04) horizons. In terms of AP, the F1 model showed a marginal gain of 0.02 (p>0.238) for 48m and 96m targets over the model M1. The models F2 (XR, TSE) and F3 yielded rather consistent performance regression of 0.01-0.04 at all targets compared to the corresponding models M2 and M3.
In the setting with 3 modalities (XR and two MR sequences), the scores were largely similar to the XR1MR1 models. However, both ROC AUCs and APs recovered to the level highest across the included individual modalities at 12m-36m horizons. Compared to the corresponding MR2 models, the metrics were also generally similar, with an exception being the 12m and 48-96m horizons. At the 12m target, the ROC AUCs further improved over MR2 by 0.01-0.04 (p>0.090), which resulted in the model F7 being significantly (p=0.021) better than the model X and the model F8 - over X (p=0.005) and M1 (p=0.026). At 48m and 96m targets, a marginal consistent gain of 0.01 over MR2 was observed in all models XR1MR2 in both metrics. Overall, the top performing model was F8 (XR, DESS, T_2map), yielding the highest number of statistically significant improvements over the individual clinical and imaging modalities.
Fusion of all imaging modalities and clinical data
Lastly, the modalities from the best performing model F8 were combined with the clinical variables in a holistic fusion model U. Here, the XR1MR2 architecture was extended with an additional shallow fully connected branch to embed the clinical variables (see Figure <ref>). The model demonstrated a performance similar to or marginally lower than that of the model without clinical variables, namely, 0.70-0.76 in ROC AUC across the targets and 0.10 (0.02), 0.15 (0.03), 0.23 (0.03), 0.26 (0.03), and 0.55 (0.03) in AP for 12m, 24m, 36m, 48m, and 96m horizons, respectively. Interestingly, the model U was not able to achieve the highest AP at the 12m target, demonstrated previously by the C3 model.
§.§ Performance with respect to patient sub-groups
The performance of models on the heterogeneous patient cohorts provides rather limited interpretation capabilities and, thus, limited actionable insights. To explore which patients may benefit from using certain imaging modalities and predictive models, we analyzed the performance metrics sub-group-wise. Here, we selected only those subjects for whom the labels were available at all the horizons. Next, all the subjects were assigned to one of the three groups - "no prior injury or surgery", "prior injury, but no surgery", or "prior surgery". The prevalence rates of progressors in the groups were 0.059, 0.106, and 0.067, respectively. Post-traumatic cases may show distinct imaging findings and are often considered separate phenotypes in scientific literature <cit.>, hence the separation. Within each of these groups, the subjects were further divided into sub-groups, based on the severity of radiographic KOA ("KLG 0-1", "KLG 2", "KLG 3") and presence of symptoms ("WOMAC 0-10", "WOMAC 10-100"). Within each sub-group, we calculated the performance metrics by averaging them over all the horizons. For AP, to account for different prevalences across the targets, the metric was calibrated before averaging to a fixed prevalence of 0.15 <cit.>. The models compared included the individual modalities – clinical, X-ray, and DESS MRI – as well as the top-ranked multi-modal fusion model. The latter was selected via a multi-objective ranking procedure over all horizons and both performance metrics (see the details in Methods).
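One common way to calibrate average precision to a fixed prevalence rewrites precision at each operating point in terms of TPR, FPR, and the target prevalence; whether this exact formulation matches the procedure cited above is an assumption, and the sketch below is illustrative only.

import numpy as np
from sklearn.metrics import roc_curve

def calibrated_average_precision(y_true, scores, prevalence=0.15):
    """Average precision recomputed as if the positive-class prevalence were fixed.
    Precision at each threshold is expressed via TPR/FPR and the target prevalence pi:
    prec = TPR*pi / (TPR*pi + FPR*(1 - pi)); AP is the usual step-wise sum over recall."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    pi = prevalence
    with np.errstate(divide="ignore", invalid="ignore"):
        prec = np.where(tpr + fpr > 0, tpr * pi / (tpr * pi + fpr * (1 - pi)), 1.0)
    # roc_curve orders thresholds from high to low, so tpr (= recall) is increasing
    return float(np.sum(np.diff(tpr) * prec[1:]))

# Toy example with perfectly separated scores: calibrated AP is 1.0
y = np.array([0, 0, 1, 0, 1, 1, 0, 0, 0, 1])
s = np.array([.1, .2, .8, .3, .7, .9, .2, .1, .4, .6])
print(calibrated_average_precision(y, s, prevalence=0.15))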
We first considered the "no prior injury or surgery" group. Here, the overall ROC AUCs were moderate with all the models. The highest performance (AUC=0.65-0.80) was observed in asymptomatic KLG0/1, as well as symptomatic KLG2 and KLG3 sub-groups. The X-ray model was more consistent across the sub-groups but was inferior to other models for symptomatic KLG2 subjects. All models performed poorly with the asymptomatic KLG3 sub-group (AUC<0.50), which was also the smallest one. In terms of AP, the performance was generally low (AP=0.20-0.55), showing the challenging nature of the OA progression prediction problem. MRI- (M1) and fusion-based (F8) models performed stronger with asymptomatic KLG0/1, all KLG2, and symptomatic KLG3 subjects.
In the "prior injury, but no surgery" group, the overall performance in ROC AUC was high-to-very-high, with M1 and F8 models showing an increase up to 0.10 over the rest (Figure <ref>). Here, the imaging models showed high AUC in all the sub-groups. The models using MRI were more accurate at KLG0-2, while the XR model was slightly more accurate at KLG3. In AP, M1 and F8 were dominant in the same sub-groups as previously (Figure <ref>). The model based on the clinical data showed the highest score in the symptomatic KLG0/1 sub-group and was comparable at KLG2, otherwise performing poorly. The X-ray-based model was more accurate towards severe OA stages, particularly, at KLG3. Both metrics were notably higher than in the "no prior injury or surgery" subject group, suggesting the clear added value of imaging, particularly, MRI in post-traumatic subjects.
In the "prior surgery" group analysis all the considered imaging models showed moderate-to-very-high ROC AUCs. Importantly, all the sub-groups here had very small sample sizes. The clinical model was notably inferior in performance, except for the small asymptomatic KLG2 sub-group. M1 and F8 showed performance similar to each other, with the former having much higher AP for the asymptomatic KLG0/1 sub-group. The X-ray model was more accurate in both metrics for the symptomatic KLG2 sub-group.
To summarize the findings, the performance of all the models in predicting KOA progression was consistently higher in post-traumatic and post-intervention knees. In the same groups, the imaging models showed more notable improvement over the clinical variable model, particularly, in positive predictive value. In the "no prior injury or surgery" group, the APs were poor with all models. However, the imaging with MRI provided additional value for normal and mild OA knees. Interestingly, the fusion model to a degree resembled the average performance of XR- and DESS MRI-based models.
§.§ Contribution of imaging modalities in multi-modal setting
To understand the relative contribution of imaging modalities to the final decision in the top-performing fusion models, a model interpretation technique called "feature ablation" was employed. Here, the entire inputs corresponding to the modalities were individually masked, and the drop in the model performance was recorded. The decrements were inverted and normalized across the modalities to derive Relative Utilization Rate (RUR).
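A minimal sketch of this ablation procedure is given below; masking a modality with zeros, the toy model, and the synthetic data are assumptions made for illustration, not details of the actual implementation.

import numpy as np
from sklearn.metrics import roc_auc_score

def relative_utilization(model, inputs, y_true, metric, mask_value=0.0):
    """Feature-ablation estimate of each modality's contribution: mask one modality
    at a time, record the drop in `metric`, then normalize the (non-negative)
    drops so that they sum to one."""
    base = metric(y_true, model(inputs))
    drops = {}
    for name in inputs:
        ablated = dict(inputs)
        ablated[name] = np.full_like(inputs[name], mask_value)
        drops[name] = max(base - metric(y_true, model(ablated)), 0.0)
    total = sum(drops.values()) or 1.0
    return {name: d / total for name, d in drops.items()}

# Toy stand-in for a trained two-modality model: a fixed linear read-out
def toy_model(inp):
    return 0.9 * inp["mri"].mean(axis=1) + 0.1 * inp["xr"].mean(axis=1)

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, 200).astype(float)
inputs = {"mri": y[:, None] + rng.normal(0.0, 0.5, (200, 8)),   # informative modality
          "xr":  y[:, None] + rng.normal(0.0, 2.0, (200, 4))}   # noisier modality
print(relative_utilization(toy_model, inputs, y, roc_auc_score))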
The RURs computed for the selected models are shown in Figure <ref>. In the case where radiographic and structural MRI (DESS) data were fused, the average contributions were 0.04-0.13 and 0.87-0.96, respectively, across the horizons (Figure <ref>). This suggests that the anatomical information provided by the volumetric MRI scan is dominantly more informative in the scope of radiographic KOA progression prediction.
When structural (DESS) and compositional (T_2map) MRI protocols were considered together (Figure <ref>), the average RURs were 0.72 and 0.28 at 12m horizon and they gradually changed to 0.81 and 0.19 at 96m horizon, respectively. The reduced RUR for DESS MRI may indicate the importance of tissue compositional changes provided with T_2map in the scope of KOA progression, but also that certain imaging biomarkers are more easily derived from high-contrast T_2maps. The observed trend from 12m towards 96m horizon may indicate lower overall importance of the visualized tissue composition (particularly, cartilage) on the progression long-term. The model fusing radiographic data with two MRI protocols (Figure <ref>) also showed that volumetric structural data dominates other imaging sources (0.85-0.92 [DESS] versus 0.08-0.14 [T_2map] and <0.02 [XR]). Interestingly, the model assigned very low RUR to the XR modality. When the clinical data were additionally incorporated into the model (Figure <ref>), it also barely showed any contribution at all the horizons (average RURs<0.01). Overall, these findings suggest that MRI-based modalities are highly informative and visualize symptomatic, post-surgical, and post-traumatic cues at the level or higher than the clinical variables and X-ray data that are relevant to radiographic KOA progression.
§ DISCUSSION
In this study, we presented a multi-modal method for prediction of radiographic KOA progression and applied it to perform an exhaustive study of commonly acquired modalities in the task. Our proposed approach enables leveraging unique large-scale longitudinal cohorts, such as OAI, for studying the disease progression in broad populations.
The primary finding of our work is that the fusion of multiple widely acquired modalities, particularly, imaging, does not seem to provide significant improvement in the prediction of knee osteoarthritis progression, defined as radiographic worsening, over single modalities, both in short- and long-term horizons. It is important to note, however, that the overall best-ranked model in our experiments was based on XR, structural (DESS), and compositional (T_2map) MRI, suggesting that some of the subjects may still benefit from the multi-modal examination.
We have shown that T_2maps seem to have marginal additional value in all prediction horizons. This may be partially explained by the potentially limited association between compositional tissue properties and KOA progression defined radiographically.
Furthermore, unresolved methodological challenges, such as considerable field orientation dependence of T_2, might have also contributed to this finding <cit.>. Importantly, we also acknowledge the fact that the studied MRI protocols, despite providing excellent contrast for major tissues, such as cartilage, bone, menisci, fat, and associated lesions, may still provide incomplete details on the knee status. The emerging imaging methods, particularly, Magnetic Resonance Fingerprinting <cit.>, have the potential to perform holistic parametric tissue mapping and, thus, deliver a more objective view on the value of KOA MR imaging, however, they are still in the process of getting wide adoption.
Generally, all the imaging models yielded larger gains on top of the clinical data models towards longer progression horizons. This finding suggests that the role of imaging biomarkers in shorter-term progression prediction is lower, and other factors, such as subject metabolic health, environmental factors, or physical activity, may be more informative than imaging. From the practical utility perspective, using structural MRI sequences led to consistent, yet non-significant improvements over the model trained on radiographic images. While, currently, MRI is a rather expensive imaging modality, recent developments in low-field MRI and fast multi-parametric techniques (e.g. the aforementioned MR Fingerprinting) hold great promise that MRI could eventually become an affordable tool for osteoarthritis screening. It is important to note that not all subjects may necessarily benefit from imaging. In our sub-group analysis, we observed that the performance of the predictive models was heterogeneous and, at least, depended on whether the knee was subject to trauma, intervention, or neither. This finding also suggests that post-traumatic and post-surgical subjects should be considered independently in future large-scale imaging studies <cit.>.
In our study, we defined the OA progression as an increase in KLG score. While KLG is the most established and widespread grading scheme for OA, it naturally lacks sensitivity to fine-grained joint changes that are not reflected directly or indirectly in radiographic images. Further works could explore more comprehensive grading schemes for the task, such as MRI-based MOAKS <cit.>. However, this comes with a challenge – how to define the common progression trajectory from multivariate scoring data <cit.>. Here, already existing considerations on OA phenotypes can be used <cit.>, however, they still require a thorough validation. Accordingly, the development of new OA surrogates in a data-driven manner could be an exciting area for future research.
In this work, we aimed to clarify the value of imaging modalities in the prediction of radiographic OA progression within multiple horizons. When targeted for downstream clinical use, DL could be used within other established domain-specific frameworks, such as time-to-event <cit.> or disease trajectory forecasting <cit.>.
Next, we used the data from a single observation point to produce the predictions. With the high-dimensional imaging data, it may be beneficial for the predictive model not only to rely on the joint anatomy but also on the rate of change derived from several successive exams of an individual. While this approach has been proven feasible for individual tissues <cit.>, processing multiple complete 3D knee scans could be an expensive computational problem, and the development of new methods is still needed.
Overall, computational and data efficiency is an important issue in multi-modal data fusion. Having larger sample sizes would likely be beneficial both for improving the performance and robustness of our models. Alternatively, modifications to the fusion model architecture can be done to reduce the number of parameters, e.g. via alternating or factorized attention in transformers <cit.>. Further works could also investigate emerging foundation models for medical imaging <cit.>, which aim to provide generic, medical-domain visual features and thus notably reduce the data demand.
Finally, as previously discussed, other modalities/factors could be studied in the problem, particularly, subject lifestyle, physical activity, and metabolic biomarkers.
We interpreted the relative contribution of imaging modalities within the fusion models and observed that the structural DESS MRI was dominant across all the horizons. Such a protocol certainly provides more information and a more comprehensive view of the knee joint status. A recent study <cit.> suggested that DL-based models are prone to greedy learning, at least in multi-view fusion scenarios, which practically leads to unequal optimization rates across the modality branches. While the effect of this finding on the performance shown by the authors was rather small, its magnitude with diverse modalities of different shapes needs further investigation. Furthermore, the fusion of modalities may be orchestrated in a more clinically meaningful way, where using highly accessible data (e.g. clinical variables or XR images) is prioritized during the model training. Given the scope of this study, we intentionally focused on high-level model interpretability. We acknowledge that finer methods for feature attribution exist and have been applied in the KOA studies <cit.>, yet their generalization and applicability to multi-modal imaging settings may not be straightforward <cit.>.
We hope that the findings from our study, along with the publicly released source code, will facilitate further advances in the data-driven development of knee OA progression surrogates, efficient OA progression prediction models, and clinical guidelines for OA screening.
§ METHODS
*Sample selection
The data from The Osteoarthritis Initiative (OAI, <https://nda.nih.gov/oai/>) – a multi-center longitudinal osteoarthritis study – was used in this work. We derived five datasets from the baseline visit of OAI, one per studied progression horizon – 12, 24, 36, 48, and 96 months (see Table <ref>). All the selected subjects had demographic and clinical variables recorded, and their studied knees were imaged with posteroanterior bilateral X-ray and underwent comprehensive MRI examination (3T Siemens MAGNETOM Trio, quadrature T/R knee coils). Obtained X-ray images were weight-bearing and imaged in fixed flexion using a SynaFlexer positioning frame (CCBR-SYNARC, San Francisco, CA). The MRI exam included, among others, 3 MRI sequences - sagittal 3D dual-echo steady state (DESS, voxel 0.37×0.37×0.7mm, matrix 384×384, 160 slices, FOV 140mm, TR 16.3ms, TE 4.7ms, flip angle 25^∘), coronal intermediate-weighted turbo spin-echo (TSE, voxel 0.37×0.37×3.0mm, matrix 384×384, 31 slices, FOV 140mm, TR 3.0ms, TE 29ms, flip angle 180^∘), and sagittal multi-slice multi-echo T_2 mapping (T_2map, voxel 0.31×0.31×3.0mm, matrix 384×384, 27 slices, FOV 120mm, TR 2.7s, TE 10-70ms). Since T_2maps were only acquired for right knees, only one knee per subject was included. The knees within each dataset were marked as "progressor" if an increase in KLG was recorded during the respective follow-up period, and as "non-progressors" if there was no change in KLG between the baseline and the end of the interval. A small number of knees that showed an improvement in KLG during the interval was excluded. The complete sample selection procedure is provided in detail in Figure <ref>.
*Clinical variables
Widely acquired demographic variables, history of past injuries and past surgeries, symptomatic and knee function score - Western Ontario and McMaster Universities Arthritis Index (WOMAC), and radiographic OA severity - Kellgren-Lawrence grade (KLG) were considered. The continuous variables – age, body mass index (BMI), and WOMAC total score – were standardized to zero mean and unit variance. The categorical variables – sex, KLG, history of past injuries, and history of past surgeries – were transformed using one-hot encoding.
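As an illustration, a minimal preprocessing pipeline for these variables could be assembled with scikit-learn as sketched below; the column names are hypothetical placeholders, not the ones used in the OAI tables or in our released code.

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

continuous = ["age", "bmi", "womac_total"]                       # assumed column names
categorical = ["sex", "klg", "injury_history", "surgery_history"]

clinical_prep = ColumnTransformer([
    ("scale", StandardScaler(), continuous),                     # zero mean, unit variance
    ("onehot", OneHotEncoder(handle_unknown="ignore"), categorical),
])
# X_clin = clinical_prep.fit_transform(df[continuous + categorical])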
*X-ray images
The ROIs were extracted from the bilateral posteroanterior X-ray images. For that, the DL-based tool KNEEL <cit.> was used, which was previously developed and validated on the OAI data. The tool localized a set of bone surface landmarks in the femorotibial joint area. The landmarks were aggregated to derive the location of the knee joint center. The ROIs of 140×140 mm were cropped around the knee centers. The obtained patches were resampled to an isotropic pixel spacing of 0.195×0.195 mm^2.
After extraction of the knee ROIs, they were further cropped to the central patches of 700×700 pixels. Before feeding the data into the model, the patches were first standardized in intensity to [0; 1] range, underwent data augmentation (for the training samples only), and finally standardized to zero mean and unit range. Data augmentation included cropping to a random 700×700 pixels patch instead of the center one, random rotation within [-15, 15] degree range, and random gamma correction with γ from the range [0.0; 2.0]. Lastly, the patches were downsampled using bilinear interpolation to 350×350 pixels (pixel spacing of 0.390×0.390 mm^2).
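A rough sketch of this radiographic preprocessing and augmentation chain is given below, assuming NumPy/SciPy; it simplifies the actual implementation (the random-crop step is omitted and the final standardization is reduced to mean-centering).

import numpy as np
from scipy import ndimage

def preprocess_xray(patch, rng=None, train=False):
    # patch: 700x700 pixel knee ROI already cropped from the radiograph
    patch = (patch - patch.min()) / (patch.max() - patch.min() + 1e-8)   # to [0; 1]
    if train and rng is not None:
        angle = rng.uniform(-15.0, 15.0)
        patch = ndimage.rotate(patch, angle, reshape=False, mode="nearest")
        patch = np.clip(patch, 0.0, 1.0) ** rng.uniform(0.0, 2.0)        # gamma correction
    patch = ndimage.zoom(patch, 0.5, order=1)                            # 700 -> 350 px, bilinear
    return patch - patch.mean()                                          # mean-centering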
*MR images
One of the aims of MR image preprocessing was to reduce the storage and memory demand while maintaining the ROI size and the visual quality of the samples. In DESS and TSE sequence data, the 3 least significant bits were truncated, resulting in 8 significant bits for DESS and 9 bits for TSE. Subsequently, the images were clipped in intensity to [0.0; 99.9] percentile range scan-wise.
For all the sequences, to exclude image registration artifacts, we cropped 16 voxels from the slice edges.
T_2maps were derived from the multi-slice multi-echo images via exponential fitting. On average, the OAI T_2 mapping acquisition protocol yielded 27 slices over 7 echo times. We used the T_2 relaxation monoexponential model (Equation <ref>) and optimized both I_0 and T_2 parameters voxel-wise using the available raw image intensities I_TE_i and the corresponding echo times TE_i. All the available echoes were used for fitting. The obtained T_2maps were clipped in intensity to [0; 100] ms range. Since the T_2 mapping protocol in the OAI is optimized for cartilage tissues, this helped to ensure that unreliable T2 values, which corresponded mainly to bone and fat pads, are excluded <cit.>. An example of the resulting T_2map is shown in Figure <ref>.
I_TE_i = I_0× exp(- TE_i/T_2)
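A minimal voxel-wise fitting routine following the monoexponential relaxation model above might look as follows; the array layout and the fallback value for failed fits are our assumptions, and in practice a vectorized or log-linear fit would be considerably faster.

import numpy as np
from scipy.optimize import curve_fit

def monoexp(te, i0, t2):
    # I(TE) = I0 * exp(-TE / T2)
    return i0 * np.exp(-te / t2)

def fit_t2_map(echoes, te_times, t2_clip=(0.0, 100.0)):
    # echoes: (n_echoes, *spatial) raw intensities; te_times: echo times in ms
    spatial = echoes.shape[1:]
    flat = echoes.reshape(len(te_times), -1)
    t2 = np.zeros(flat.shape[1])
    for v in range(flat.shape[1]):
        y = flat[:, v]
        try:
            popt, _ = curve_fit(monoexp, te_times, y, p0=(max(y[0], 1.0), 40.0), maxfev=200)
            t2[v] = popt[1]
        except RuntimeError:
            t2[v] = 0.0  # failed fit, treated as unreliable
    return np.clip(t2.reshape(spatial), *t2_clip)  # clip to [0; 100] ms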
In the next step, the images were cropped to the central area of [320, 320, 128] voxels for DESS, [320, 320, 32] for TSE, and [320, 320, 25] for T_2maps, where the first two dimensions correspond to the number of voxel rows and voxel columns in-slice, respectively, and the last dimension corresponds to the number of slices. Similarly to the radiographic data, the images were then transformed to [0; 1] intensity range, augmented, and standardized to zero mean and unit range. Data augmentation started with random cropping to the aforementioned dimensions, in-slice rotation (random degree from [-15, 15] range), and gamma correction (random γ from [0.0; 2.0] range). The gamma correction was not applied to the T_2maps. Finally, the images were downsampled using trilinear interpolation to [160, 160, 64] voxels for DESS, [160, 160, 32] for TSE, and [160, 160, 25] for T_2maps.
*Clinical data baselines
An independent logistic regression model was constructed for each target and each considered set of clinical variables (scikit-learn, version 0.24.2 <cit.>). In every setting, 5-fold cross-validation was used on the development data subset to find the best hyper-parameter – whether to use balanced class-weighting. Subsequently, 5 models were optimized using average precision scoring on the training data and evaluated on the testing subset. The ensemble predictions were derived by averaging softmax outputs across the folds.
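The following sketch illustrates this baseline procedure with scikit-learn; the function names and the handling of the development/testing split are simplified assumptions rather than the exact released code.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def fit_clinical_baseline(X_dev, y_dev, n_folds=5, seed=0):
    # Hyper-parameter search: balanced class weighting or not.
    best_cw, best_ap = None, -np.inf
    for cw in (None, "balanced"):
        model = LogisticRegression(max_iter=1000, class_weight=cw)
        ap = cross_val_score(model, X_dev, y_dev, cv=n_folds,
                             scoring="average_precision").mean()
        if ap > best_ap:
            best_cw, best_ap = cw, ap
    # One model per fold, kept as an ensemble.
    models = []
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, _ in skf.split(X_dev, y_dev):
        m = LogisticRegression(max_iter=1000, class_weight=best_cw)
        m.fit(X_dev[train_idx], y_dev[train_idx])
        models.append(m)
    return models

def ensemble_predict(models, X_test):
    # Fold-averaged predicted probability of progression.
    return np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)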
*Imaging model architectures
The imaging model architectures varied depending on the considered set of modalities while following the same design principles. A schematic description of the architectures is shown in Figure <ref>, with more details provided in Section <ref> and the accompanying source code (PyTorch, version 1.8.2 <cit.>). For radiographic data processing, we reimplemented the previously validated model <cit.> based on a pre-trained ResNeXt-50 (32x4d) CNN (see Figure <ref>). For individual MRI sequences, the models comprised a shared pre-trained ResNet-50 CNN to extract slice-wise image descriptors, followed by a Transformer module to aggregate the representations across slices. Such a design was previously shown to achieve higher performance compared to purely CNN-based models <cit.>, while also providing a pre-training capability that is challenging to obtain with pure Transformers and moderate sample size. For the fusion of two modalities (XR with an MRI sequence, or two MRI sequences), an overall similar design was used. Here, two independent CNNs were used, one for each of the modalities, and their outputs were concatenated before the Transformer to allow for cross-modal fusion (see Figure <ref>). Lastly, in the fusion of three-to-four modalities, the MRI-related branches of the model had their independent mid-level Transformers to embed the features into a common latent space before combining with other sources (Figure <ref>). The models with clinical data input had a shallow Fully-Connected network to transform the variables before fusion. A Transformer module was used on top of the concatenated multi-modal embeddings, as previously.
All the described models were trained in 5-fold cross-validation, where the splits were done maintaining the consistent distribution of target labels. The training was run until convergence with a computational budget of 60 epochs. Adam optimizer <cit.> was used with weight decay of 1e-4 and learning rate warmup (from 1e-5 to 1e-4) over 5 initial training epochs. To address the effects of severe class imbalance, Focal loss (γ=2.0) was used along with an oversampling of the minority class. The best model within each fold was chosen based on the highest average precision score at validation. The batch size was 16 for the models with at least two MRI modalities, and 32 otherwise. Hardware-wise, a computational node with 4 NVIDIA A100 GPU was used for model training, and a PC with 2 NVIDIA 2080 Ti was used for evaluation and subsequent analysis. The single model training time (i.e. one fold) for the highest sample size 0-12m target varied from 0.5 (XR) to 6.5 hours (fusion of 4 modalities).
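Two of the training ingredients mentioned above, the Focal loss and the learning-rate warmup, are small enough to be sketched directly; the code below is an illustrative PyTorch-style approximation, not an excerpt from the released training loop.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Focal loss on raw (batch, 2) logits; down-weights easy examples.
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                     # probability assigned to the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def warmup_lr(step, warmup_steps, lr_min=1e-5, lr_max=1e-4):
    # Linear warmup from lr_min to lr_max over the first warmup_steps updates.
    if step >= warmup_steps:
        return lr_max
    return lr_min + (lr_max - lr_min) * step / warmup_steps

The oversampling of the minority class can in practice be realized with, for example, a weighted random sampler over the training labels.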
*Evaluation and model comparison
For each prediction target, the corresponding models were scored with ROC AUC and AP on the hold-out data. The mean and the standard error of each metric were estimated using bootstrapping (iter=1000) stratified by the target label. The statistical significance of improvements was assessed in two scenarios – (1) single-modality imaging models against the best clinical model, (2) fusion models against the clinical, XR, or DESS MRI models. For this, one-sided paired permutation testing (iter=1000, SciPy, version 1.9.3 <cit.>) was used.
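In spirit, the two procedures can be summarized by the following sketch (a stratified bootstrap for the metric uncertainty and a one-sided paired permutation test for model comparison); the helper names are ours, and the exact resampling details of the study may differ.

import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_metric(y_true, y_score, metric=roc_auc_score, n_iter=1000, seed=0):
    # Stratified bootstrap: resample positives and negatives separately.
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y_true == 1)[0], np.where(y_true == 0)[0]
    vals = []
    for _ in range(n_iter):
        idx = np.concatenate([rng.choice(pos, len(pos), replace=True),
                              rng.choice(neg, len(neg), replace=True)])
        vals.append(metric(y_true[idx], y_score[idx]))
    vals = np.asarray(vals)
    return vals.mean(), vals.std()

def paired_permutation_pvalue(y_true, score_a, score_b, metric=roc_auc_score,
                              n_iter=1000, seed=0):
    # One-sided test of H1: model A scores higher than model B.
    rng = np.random.default_rng(seed)
    observed = metric(y_true, score_a) - metric(y_true, score_b)
    count = 0
    for _ in range(n_iter):
        swap = rng.random(len(y_true)) < 0.5          # swap predictions per sample
        a = np.where(swap, score_b, score_a)
        b = np.where(swap, score_a, score_b)
        if metric(y_true, a) - metric(y_true, b) >= observed:
            count += 1
    return (count + 1) / (n_iter + 1)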
For the subsequent analysis, the "best overall" multi-modal fusion setting s^* was selected using a multi-objective ranking procedure:
s^* = argmin_{s∈ S}(∑_f∈{ROC AUC,AP}∑_t∈{12, ..., 96} rank(f̅(s_t))), S={F1,...,F9,U}
Here, every fusion setting s was ranked from 1 to 10 (best to worst, respectively) for each target t and in each metric independently by the mean metric value f̅. Then, the ranks were summed, and the setting with the lowest total rank (i.e. the best overall) was chosen.
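A compact sketch of this multi-objective ranking is given below, assuming a nested dictionary of mean hold-out scores; the data structure is a placeholder for illustration only.

import numpy as np

def select_best_setting(mean_scores, metrics=("roc_auc", "ap")):
    # mean_scores[setting][horizon][metric] -> mean hold-out value of the metric
    settings = list(mean_scores)
    horizons = list(next(iter(mean_scores.values())))
    totals = {s: 0 for s in settings}
    for t in horizons:
        for f in metrics:
            vals = np.array([mean_scores[s][t][f] for s in settings])
            order = np.argsort(-vals)                  # descending: best setting first
            for rank, i in enumerate(order, start=1):  # rank 1 = best
                totals[settings[i]] += rank
    return min(totals, key=totals.get)                 # smallest total rank wins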
In subgroup analysis, average model performance across different targets was derived. Since the prevalence of progressors is different for different targets, which prohibits direct averaging, instead of standard AP we used its calibrated version <cit.>. Here, the scores within subgroups were calculated for target prevalence of 0.15, and only then averaged. ROC AUC scores were used unchanged. Symptomatic and non-symptomatic patient subgroups were defined based on the WOMAC total score. Clinical interpretation of WOMAC score is still rather non-standardized <cit.>. We used a threshold value of 10 on a total score 0-96 scale, which is an estimate of the minimal clinically important difference <cit.>.
The importance of individual modalities in the multi-modal fusion settings was estimated using the feature ablation method (Captum, version 0.5.0, Facebook Open Source <cit.>). Here, the unimodal inputs were replaced with the mean values one-by-one and degradation of the model performance was recorded for each sample. The values were normalized and averaged across the testing subset, which resulted in Relative Utilization Rates.
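Conceptually, the computation can be sketched as follows for a model that takes a dictionary of modality tensors and outputs two-class logits; these interface assumptions, the per-input mean used as the masking value, and the metric choice are illustrative simplifications of the Captum-based procedure.

import numpy as np
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def relative_utilization(model, batches, modal_keys, score_fn=roc_auc_score):
    def run(ablate=None):
        scores, labels = [], []
        for inputs, y in batches:
            x = dict(inputs)
            if ablate is not None:
                # mask the whole modality with its mean value
                x[ablate] = torch.full_like(x[ablate], float(x[ablate].mean()))
            scores.append(torch.softmax(model(x), dim=1)[:, 1])
            labels.append(y)
        return score_fn(torch.cat(labels).numpy(), torch.cat(scores).numpy())

    base = run()
    drops = np.array([max(base - run(k), 0.0) for k in modal_keys])
    rur = drops / (drops.sum() + 1e-12)               # normalize across modalities
    return dict(zip(modal_keys, rur))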
§ ACKNOWLEDGEMENTS
The authors acknowledge the following funding sources: strategic funding of Infotech Institute, University of Oulu; 6GESS Profiling Research Programme (Academy of Finland project 336449); Orion Research Foundation, Finland. CSC – IT Center for Science, Finland is kindly acknowledged for providing the generous computational resources, which made the study possible. Khanh Nguyen is acknowledged for preprocessing the radiographic images. We also thank Dr. Valentina Pedoia for an insightful discussion on the topic of the study.
The OAI is a public-private partnership comprised of five contracts (N01-AR-2-2258; N01-AR-2-2259; N01-AR-2-2260; N01-AR-2-2261; N01-AR-2-2262) funded by the National Institutes of Health, a branch of the Department of Health and Human Services, and conducted by the OAI Study Investigators. Private funding partners include Merck Research Laboratories; Novartis Pharmaceuticals Corporation, GlaxoSmithKline; and Pfizer, Inc. Private sector funding for the OAI is managed by the Foundation for the National Institutes of Health. This manuscript was prepared using an OAI public use data set and does not necessarily reflect the opinions or views of the OAI investigators, the NIH, or the private funding partners.
§ CREDIT AUTHOR STATEMENT
Egor Panfilov: Methodology; Software; Formal Analysis; Investigation; Data Curation; Writing-Original Draft; Visualization. Miika T. Nieminen: Project Administration; Funding Acquisition. Simo Saarakkala: Project Administration; Funding Acquisition. Aleksei Tiulpin: Methodology; Data Curation; Supervision; Funding Acquisition. All authors: Conceptualization; Writing-Review and Editing; Final Approval.
§ ADDITIONAL INFORMATION
§.§ Data and code availability statement
The data used in the study is derived from the publicly available Osteoarthritis Initiative database (https://nda.nih.gov/oai/https://nda.nih.gov/oai/). The source code of sample selection and subset allocation procedures, all the developed methods, and the performed analysis are made available at https://github.com/Oulu-IMEDS/OAProgressionMMFhttps://github.com/Oulu-IMEDS/OAProgressionMMF.
§.§ Competing interests
The authors declare no competing interests in relation to the present work.
§ SUPPLEMENTAL MATERIALS
§.§ Architectures of the models
The exact implementations of all the studied models can be found in the accompanying source code https://github.com/Oulu-IMEDS/OAProgressionMMF[link]. Here, we provide only a brief overview of the architectures (see Figure <ref>) along with the most important aspects. All the models were constructed by combining CNN and Transformer modules. The CNNs were ResNet-50 pre-trained on ImageNet, with the exception of the XR model, where the previously developed ResNeXt-50 (32x4d) was used. The latter model has previously shown stronger performance in a similar task <cit.>. The prepared images (i.e. slices) were transformed into descriptor vectors of 2048 elements by the CNNs. Next, a sequence of descriptors was passed through a Transformer (4 levels, 8 attention heads) to obtain an output of the same shape. If the Transformer was concluding the model (Figures <ref> and <ref>, green in Figure <ref>), a fully connected network with 1 hidden layer of 2048 neurons was used to map the Transformer's output to the binary target. Otherwise, the complete output state of the Transformer was propagated further. In the architecture with clinical variables, they were concatenated into a single vector and transformed into a common embedding vector of 2048 elements using a fully connected network with one layer. Dropout with the rate of 0.1 was extensively used throughout every architecture.
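To make the slice-aggregation idea concrete, a stripped-down single-sequence MRI model in PyTorch could look like the sketch below; the slice pooling before the classification head and all the default hyper-parameters are our simplifying assumptions, not the exact released architecture.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class SliceTransformerMRI(nn.Module):
    # ResNet-50 encodes each slice; a Transformer aggregates across slices.
    def __init__(self, n_classes=2, dim=2048, n_layers=4, n_heads=8):
        super().__init__()
        backbone = resnet50(pretrained=True)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # up to global pooling
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, dropout=0.1)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Dropout(0.1), nn.Linear(dim, n_classes))

    def forward(self, x):
        # x: (batch, n_slices, H, W) stack of single-channel MRI slices
        b, s, h, w = x.shape
        x = x.reshape(b * s, 1, h, w).repeat(1, 3, 1, 1)             # 3 channels for ResNet
        feats = self.cnn(x).flatten(1).reshape(b, s, -1)             # (b, s, 2048) descriptors
        feats = self.transformer(feats.transpose(0, 1))              # (s, b, 2048)
        return self.head(feats.mean(dim=0))                          # pool slices, classify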
In the experiments reported in the article, the CNN outputs were taken after the Global Average Pooling layer. We also experimented with using non-pooled representations but did not observe any consistent improvements. For the multi-sequence MRI fusion (Figure <ref>), we additionally investigated the setting with a cascade of Transformers (as in Figure <ref>), which resulted in similar scores yet higher computational demand. Lastly, for the holistic fusion model (Figure <ref>) we tried mixing in the modalities one at a time, starting with XR images. This led to generally lower performance than the one obtained with the reported architecture.
|
http://arxiv.org/abs/2307.00330v1
|
20230701130107
|
Scalar induced gravitational waves in symmetric teleparallel gravity with a parity-violating term
|
[
"Fengge Zhang",
"Jia-Xi Feng",
"Xian Gao"
] |
gr-qc
|
[
"gr-qc"
] |
[email protected]
School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519088, China
[email protected]
School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519088, China
[email protected] (corresponding author)
School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519088, China
Gravitational waves (GWs) are useful to test gravitational theories and to probe the physics in the early universe. In this paper, we investigate the scalar induced gravitational waves (SIGWs) in symmetric teleparallel gravity with a parity-violating term. The presence of the parity-violating term leads to the velocity birefringence effect of the SIGWs. However, after taking into account the observational constraints on the speed of GWs, the contribution from the parity-violating term to SIGWs is negligible. Nevertheless, the contribution to SIGWs from the perturbations of the connection can be significant, and results in a multi-peak structure in the energy density of SIGWs. This feature makes the symmetric teleparallel gravity distinguishable from the general relativity.
Scalar induced gravitational waves in symmetric teleparallel gravity with a parity-violating term
Xian Gao
=================================================================================================
§ INTRODUCTION
The detection of gravitational waves (GWs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Virgo Collaboration <cit.> opens a new window to probe the nature of gravity in the strong-field and nonlinear regime. Although observations of the cosmic microwave background (CMB) constrain the power spectrum of the primordial curvature perturbation to be 𝒜_ζ∼𝒪(10^-9) on large scales <cit.>, it can be as large as 𝒜_ζ∼𝒪(10^-2) on small scales <cit.>. Such a large scalar perturbation will induce gravitational waves, dubbed scalar induced gravitational waves (SIGWs), through the nonlinear interactions between the scalar and tensor perturbations <cit.>. The SIGWs can be large enough to be detected by the space-based GW observatories, such as the Laser Interferometer Space Antenna (LISA) <cit.>, TianQin <cit.> and Taiji <cit.>, as well as by the Pulsar Timing Array (PTA) <cit.> and the Square Kilometer Array (SKA) <cit.> in the future.
Discrete symmetries, such as parity, play an important role in modern physics.
While parity is known to be violated in weak interactions <cit.>, one may wonder whether this symmetry violation exists in gravitational interactions and/or in the early universe as well. Parity-violating (PV) gravitational theories are generally predicted in quantum gravity theories such as superstring theory and M-theory <cit.>. The recent hints of parity violation in our universe from the galaxy trispectrum and the CMB E/B cross-correlation have also attracted much attention <cit.>. The parity-violating scalar trispectrum was also studied in <cit.> recently.
The simplest PV term in the Riemannian geometry is the Chern-Simons (CS) term, which is quadratic in the Riemann tensor. The CS gravity was first proposed in <cit.> in four-dimensional spacetime, and later extensively studied in cosmology, GWs, and primordial non-Gaussianity <cit.>. Besides CS gravity, the PV gravity models with Lorentz breaking, such as Hořava gravity <cit.>, the PV higher derivative gravity <cit.> and the PV spatially covariant gravity <cit.> have also been proposed. In these Lorentz breaking PV gravity models, the chiral GWs have been studied extensively <cit.>, wherein interesting features of GWs were revealed, notably including phenomena such as the velocity and amplitude birefringence.
Recently, there has also been interest in gravity theories based on non-Riemannian geometry. In particular, symmetric teleparallel gravity, which is characterized by a non-metricity tensor Q_ρμν = ∇_ρ g_μν and a vanishing Riemann tensor, was proposed and has attracted much attention <cit.>.
Similar to the CS gravity, the simplest PV term built out of the non-metricity tensor is QQ ≡ε^μνρσQ_μναQ_ρσ^ α, which is quadratic in the non-metricity tensor. The simplest symmetric teleparallel gravity with PV was constructed by appending this term to the symmetric teleparallel equivalent Einstein-Hilbert action, and its linear cosmological perturbations have also been studied <cit.>.
It was shown that the PV term has no contribution to the background evolution or the linear scalar perturbations.
The SIGWs in CS gravity have been studied in <cit.> recently.
The purpose of this work is to perform a similar study of the SIGWs in the symmetric teleparallel gravity with PV terms.
However, when the nonlinear perturbations are taken into account, the above simplest PV symmetric teleparallel gravity may be inconsistent.
Intuitively, the theory contains extra scalar degrees of freedom due to the PV term, which however do not show themselves on the linear order around a homogeneous and isotropic background.
This is reminiscent of the so-called strong coupling problem in the study of the Hořava gravity <cit.>.
As we will demonstrate in this paper, the simplest PV symmetric teleparallel gravity model suffers from such a strong coupling problem. Specifically, the scalar perturbations from the connection do not have the linear equations of motion of their own, which arise in the equation of motion of the SIGWs. To avoid this problem, we modify the symmetric teleparallel equivalent Einstein-Hilbert action by considering a general linear combination of quadratic monomials of the non-metricity tensor.
We then obtain the equations of motion of perturbations from connection, as well as find the solution of the perturbations from connection during the radiation-dominated era. Based on these results, we will calculate the contribution from the PV term as well as from the scalar perturbations of connection to the energy density of SIGWs in our model, respectively.
This paper is organized as follows. In section <ref>, we introduce the symmetric teleparallel gravity with a simple PV term. In section <ref>, we give the equations of motion for both the background evolution and the linear scalar perturbations, which we then solve during the radiation-dominated era. In section <ref>, we derive the equation of motion of SIGWs. In section <ref>, we calculate the power spectra of the SIGWs. In order to analyze the feature of SIGWs, we compute the energy density of SIGWs with the monochromatic power spectrum of primordial curvature perturbation. Our results are summarized in section <ref>. The quadratic action of linear scalar perturbations and the analytic part of the integral kernel are included in appendices <ref> and <ref>, respectively.
§ THE SYMMETRIC TELEPARALLEL GRAVITY WITH A PARITY-VIOLATING TERM
In symmetric teleparallel gravity, the affine connection is assumed to be free of the curvature and torsion, i.e.,
R^μ_ νρσ=∂_ρΓ^μ_ νσ-∂_σΓ^μ_ νρ+Γ^μ_ αρΓ^α_ νσ-Γ^μ_ ασΓ^α_ νρ=0,
T^μ_ νρ=Γ^μ_ ρν-Γ^μ_ νρ=0.
The gravitational effects are encoded in the non-metricity tensor, which is defined as
Q_ρμν=∇_ρg_μν=∂_ρg_μν-Γ^σ_ρμg_σν-Γ^σ_ρνg_σμ,
where g_μν is the spacetime metric, ∇ represents the covariant derivative.
With the condition of vanishing curvature and torsion tensors, the coefficients of the connection take the following general form <cit.>
Γ^ρ_ μν=∂ x^ρ/∂ y^σ∂_μ∂_νy^σ,
where y^μ(x) are four general scalar fields.
If we choose y^μ(x)=x^μ, then Γ^ρ_ μν=0. This is the so-called “coincident gauge”, which has been extensively used in the study of symmetric teleparallel gravity in order to simplify the calculation. In this paper, we do not take the coincident gauge, because it may not be compatible with the commonly used conventional parametrization for metric when dealing with cosmological perturbations <cit.>.
Consider the following action
S_g=∫d^4x√(-g)(ℚ/2-g(φ) QQ)+∫d^4x√(-g)(1/2g^μν∂_μφ∂_νφ-V(φ)),
where
ℚ=P^α_ μνQ_α^ μν <cit.>, with
P^α_ μν=c_1Q^α_ μν+c_2Q_(μ ν)^ α+c_3Q^αg_μν+c_4δ^α_ (μQ̃_ν)+c_5/2(Q̃^αg_μν+δ^α_ (μQ_ν)),
where c_1,⋯,c_5 are constants, and
Q_μ=Q_μ α^ α, Q̃^μ=Q_α^ μα.
The PV term is represented as <cit.>
QQ=ε^μνρσQ_μναQ_ρσ^ α,
where ε^μνρσ=ϵ^μνρσ/√(-g) is the Levi-Civita tensor, with ϵ^μνρσ the antisymmetric symbol.
In Eq. (<ref>) the scalar field effectively describes matter content in the universe.
By choosing the values of the parameters in the action (<ref>) to be
c_1=-1/4, c_2=1/2, c_3=1/4, c_4=0, c_5=-1/2,
the expression for ℚ becomes
ℚ=1/4Q_ρμνQ^ρμν-1/2Q_ρμνQ^μνρ-1/4Q^αQ_α+1/2Q^αQ̃_α=-R-∇_α(Q^α-Q̃^α),
which corresponds to the teleparallel equivalent Einstein-Hilbert Lagrangian.
Here, R is constructed with the metric g_μν, and ∇ is metric-compatible covariant derivative.
In this case, the linear cosmological perturbations were studied in <cit.>.
However, in the next section, we will show that this model suffers from the strong coupling problem at nonlinear orders, which can be avoided by choosing a suitable parameter set instead of (<ref>).
§ THE COSMOLOGICAL PERTURBATIONS
In this section, we study the evolution of background and the linear scalar cosmological perturbations.
Consider the spatially flat Friedmann-Robertson-Walker (FRW) background with small perturbations around it, the metric under the Newtonian gauge is
ds^2=a^2{(1+2ϕ+2ϕ^2)dτ^2-[(1-2ψ+2ψ^2)δ_ij+h_ij+1/2h_ikh^k_ j]dx^idx^j},
up to the second order in perturbations ϕ, ψ and h_ij,
and the components of the inverse metric are
g^00=1/a^2(1-2ϕ+2ϕ^2), g^0i=0,
g^ij=-1/a^2[(1+2ψ+2ψ^2)δ^ij-h^ij-4ψ h^ij+1/2h^i_ lh^lj+8ψ^2 h^ij].
Note that for our purpose to evaluate the SIGWs, only quadratic action for the scalar and tensor perturbations as well as cubic action involving two scalar and one tensor perturbation modes are needed.
Therefore, in the above expression for g^ij we have kept only the cubic term ψ^2h^ij for notational simplicity.
We also have
√(-g)=a^4(1+ϕ-3ψ+1/2ϕ^2-3ϕψ+9/2ψ^2).
At the background level, we can take y^μ(x)=x^μ. In the perturbed universe, we introduce ξ^μ to represent small deviations from the background functions y^μ and thus y^μ=x^μ+ξ^μ. We further decompose ξ^μ as ξ^μ={C,∂^iD}, where C and D are scalar perturbations. With these settings, the components of the connection can be expressed as
Γ^ρ_ μν=∂_μ∂_νξ^ρ-∂_σξ^ρ∂_μ∂_νξ^σ,
up to the second order.
We split the scalar field φ to be φ̅ + δφ, where φ̅(t) is the background value and δφ is the perturbation.
§.§ The EOMs of background
By expanding the action (<ref>) to the linear order in perturbations, we obtain the following action for the perturbations
S^(1)= ∫d^3x dτ a^2[(2𝒞_1ℋ^2-1/2(φ')^2-a^2V)ϕ-2𝒞_2ℋϕ'+6𝒞_3ℋψ'.
.+3(2𝒞_1ℋ^2-1/2(φ')^2+a^2V)ψ-a^2V_φδφ+φ'δφ'+2𝒞_2ℋC”],
where a prime denotes derivative with respect to the conformal time τ, and
𝒞_1=4c_1+c_2+16c_3+c_4+4c_5,
𝒞_2=2c_1+2c_2+8c_3+2c_4+5c_5,
𝒞_3=2c_1+8c_3+c_5.
Here and in what follows, we denote φ the background value for the scalar field for simplicity.
Varying the above action (<ref>) with respect to the perturbations ϕ, ψ, δφ, and C, we obtain the equations of motion (EOMs) for the background
2(𝒞_1+2𝒞_2)ℋ^2+2𝒞_2ℋ'=1/2(φ')^2+a^2V,
2(𝒞_1-2𝒞_3)ℋ^2-2𝒞_3ℋ'=1/2(φ')^2-a^2V,
φ”+2ℋφ'+a^2V_φ=0,
(a^2ℋ)”=0 (𝒞_2≠ 0).
From the above EOMs (<ref>)-(<ref>), we observe that the evolution of the background is unaffected by the PV term, as expected.
As a consistency check, once we choose the parameter sets as (<ref>), namely, 𝒞_1=3/2,𝒞_2=0, and 𝒞_3=1, the above equations of motion are the same as those in GR.
It is interesting to note that in the case of 𝒞_2≠ 0, (<ref>) acts as an extra constraint equation for the background.
§.§ The EOMs of the linear scalar perturbations
In order to get the equations of motion for the linear perturbations, we expand the action (<ref>) to the quadratic order in perturbations, which is tedious and can be found in Appendix <ref>. [Note that the form of the quadratic action (<ref>) will change by performing integrations by parts.]
Varying the quadratic action (<ref>) with respect to the scalar perturbations ϕ, ψ, δφ, C and D, we can obtain the EOMs for the corresponding scalar perturbations. Notably, the EOMs contain terms that are higher order in time derivatives, which implies that with an arbitrary choice of values of c_1,⋯,c_5, the action (<ref>) may propagate additional degrees of freedom, some of which may suffer from the Ostrogradsky instability.
Therefore we need to find conditions on the parameters c_1,⋯,c_5 such that no terms with higher time derivatives are present.
Given the complexity of EOMs and the fact that the exact expressions are not crucial to our discussion below, in the following we only present terms that involve higher-order time derivatives.
* In the EOM of ϕ:
EOM(ϕ) ⊃ -4(c_1+c_2+c_3+c_4+c_5)C”',
* In the EOM of ψ:
EOM(ψ) ⊃ 6(2c_3+c_5)C”',
* In the EOM of C:
EOM(C)⊃ -(2c_1+3c_2+4c_3+3c_4+4c_5)∂^i∂_i D”'
-16(c_1+c_2+c_3+c_4+c_5)ℋC”'+4(c_1+c_2+c_3+c_4+c_5)ϕ”'
-6(2c_3+c_5)ψ”'-4(c_1+c_2+c_3+c_4+c_5)C””,
* In the EOM of D:
EOM(D) ⊃ (2c_1+3c_2+4c_3+3c_4+4c_5)∂^i∂_i C”'-4(2c_1+c_2+c_4)ℋ∂^i∂_iD”'
-(2c_1+c_2+c_4)∂^i∂_i D””.
There are no higher-order time derivative terms in the EOM of perturbation δφ.
To avoid the possible Ostrogradsky instability, the coefficients of the higher-order time derivative terms should vanish.
Therefore, the parameters c_1,⋯,c_5 must satisfy the following constraints
2c_3+c_5=0,
2c_1+c_2+c_4=0,
c_1+c_2+c_3+c_4+c_5=0,
2c_1+3c_2+4c_3+3c_4+4c_5=0.
Note that the above four equations (<ref>)-(<ref>) are not independent. Solving these equations yields the following solutions for the parameters:
c_1=1/2c_5, c_2=-c_4-c_5, c_3=-1/2c_5.
In Ref. <cit.>, the authors obtain the same results by demanding that the second derivatives of the non-metricity tensor vanish in the action. Note that the parameter set (<ref>), which reduces ℚ to the symmetric teleparallel equivalent of the GR Lagrangian, is a special case of (<ref>). Furthermore, by substituting the solutions (<ref>) into Eqs. (<ref>)-(<ref>), we find that 𝒞_1=-3c_5, 𝒞_2=0 and 𝒞_3=-2c_5, which implies that there is no extra constraint imposed on the background equations (<ref>).
In the rest of this work, we will perform the calculation with the solutions (<ref>), which can avoid the Ostrogradsky instability for the linear perturbations.
With Eq. (<ref>), the quadratic action for the scalar perturbations (<ref>) reduces to be
S^(2)_SS= ∫d^3x dτ a^2[-1/2a^2V_φφδφ^2-a^2V_φδφϕ+3a^2V_φδφψ-1/2∂^iδφ∂_iδφ+1/2(δφ')^2.
.
-(ϕ+3ψ)δφ'φ'+4c_5∂^iψ∂_iϕ-2c_5∂^iψ∂_iψ+6c_5(2ℋϕψ'+6ℋψψ'+(ψ')^2).
.+(9ψ^2+ϕ^2)(6c_5ℋ^2+1/2(φ')^2)-4c_4ℋ(∂_i∂^iD-C'+ϕ+ψ)∂_i∂^i(D'-C)],
where we have used the background equations (<ref>)-(<ref>).
By varying the action (<ref>) with respect to the scalar perturbations, we obtain the following EOMs for the linear scalar perturbations
4c_4ℋ∂_i∂^i(C-D')+4c_5(3ℋψ'+3ℋ^2ϕ-∂_i∂^iψ)=-(φ')^2ϕ+δφ'φ'+a^2V_φδφ,
4c_4ℋ∂_i∂^i(C-D')+4c_5∂_i∂^i(ψ-ϕ)-12c_5ℋ'(ϕ+3ψ)-12c_5ℋ(ϕ'+2ψ')
-12c_5ψ”+12c_5ℋ^2(3ψ-2ϕ)+9(φ')^2ψ-3δφ'φ'+3a^2V_φδφ=0,
c_5(ψ-ϕ)=c_4ℋ(D'-C),
δφ”+2ℋδφ'-∂_i∂^iδφ+a^2V_φφδφ+2a^2V_φϕ-(ϕ'+3ψ')φ'=0,
ℋ∂_i∂^i(ϕ+ψ-D”+∂_i∂^iD)+(2ℋ^2+ℋ')∂_i∂^i(C-D')=0 (c_4≠ 0),
(2ℋ^2+ℋ')∂_i∂^i(ϕ+ψ-C'+∂_i∂^i D)+ℋ∂_i∂^i(ϕ'+ψ'-C”+∂_i∂^i C)=0 (c_4≠ 0).
In Eq. (<ref>), the perturbations C and D disappear when c_4=0, which implies that they do not acquire the linear equations of motion of their own. However, in the next section, we will observe that C and D do exist at nonlinear orders and will contribute to the SIGWs, even in the case of c_4 = 0. This is reminiscent of the so-called strong coupling problem in the study of the Hořava gravity <cit.>, in which some perturbation modes do not show up at the linear order around a homogeneous and isotropic background, but do exist either at nonlinear orders or at linear order around an inhomogeneous background.
§.§ The evolution of the background and the linear perturbation
In order to calculate the SIGWs, in this subsection, we first discuss the evolution of the background and the linear scalar perturbations.
Although we are left with only two independent parameters c_4 and c_5 under the condition (<ref>) in order to remove higher-order time derivatives, the EOMs of background, (<ref>)-(<ref>), and linear scalar perturbations, (<ref>)-(<ref>), are still difficult to solve in general.
Fortunately, we observe that Eqs. (<ref>)-(<ref>) simplify dramatically if C=D', which by itself does not invalidate Eqs. (<ref>)-(<ref>).
In other words, C=D' is a special solution for the Eqs. (<ref>)-(<ref>). If we further choose the parameter c_5=-1/2, the scalar perturbations from the metric and the fluctuation of the scalar field are the same as those in GR.
For our purpose to investigate the contributions of the non-metricity tensor and the PV term to the SIGWs, we expect that our model deviates from GR minimally, namely the evolution of background, the perturbations from metric, and the fluctuation of scalar field are the same as those in GR. As a result, we require c_5=-1/2, and ϕ=ψ without the presence of anisotropic stress.
From Eq. (<ref>), this is also consistent with the special solution C=D'.
We may view this choice of parameters and C=D' as the minimal modification of GR in the framework of symmetric teleparallel gravity, which meanwhile evades the strong coupling problem. In the rest of this paper, we will evaluate the SIGWs with this minimal modification, while leaving c_4 as a free parameter.
During the radiation-dominated era, we have P̅/ρ̅=1/3, where
ρ̅=1/2a^2(φ')^2+V, P̅=1/2a^2(φ')^2-V.
By making use of (<ref>) together with the choice c_5=-1/2, the EOMs of the background, (<ref>)-(<ref>), are the same as those in GR.
Thus we can obtain the evolution of the background during the radiation-dominated era <cit.>,
ρ̅=ρ_0 a^-4, a=√(1/3ρ_0)τ=a_0τ, φ'=±2τ^-1.
With the solution C=D' and c_5=-1/2, the EOMs of linear scalar perturbations from the metric and the fluctuation of the scalar field are also the same as those in GR.
Additionally, with C=D' the Eq. (<ref>) implies ϕ=ψ.
As a result, the EOM of perturbation D reduces to be
D”-∂_i∂^iD=2ϕ.
For late convenience of calculating the SIGWs, we split the perturbations into the primordial perturbation and the transfer functions as follows,
ϕ( k,τ)=2/3ζ( k)T_ϕ(x),
D( k,τ)=2/3ζ( k)1/k^2T_D(x),
where ζ is the primordial curvature perturbation and x=kτ. The transfer function T_ϕ is solved to be
T_ϕ(x)=9/x^2(sin(x/√(3))/x/√(3)-cos(x/√(3))).
Substituting the transfer function into Eq. (<ref>), we can obtain the transfer function of D,
T_D(x)= C_1cos(x)+C_2sin(x)+9√(3)/xsin(x/√(3))-3√(3)cos(x)(Ci(x+x/√(3))-Ci(x-x/√(3)))
-3√(3)sin(x)(Si(x+x/√(3))-Si(x-x/√(3))),
where
Si(x)=∫_0^x d ysin y/y, Ci(x)=-∫_x^∞d y cos y/y
are sine integral and cosine integral, respectively.
In Eq. (<ref>), C_1 and C_2 are integration constants. We expect the perturbation to decay and tend to 0 during the radiation-dominated era; therefore we choose C_1=C_2=0.
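For readers who wish to evaluate these transfer functions numerically (for instance, when computing the kernels of the following sections), the expressions above can be transcribed directly with SciPy; the sketch below is only a convenience, with function names of our own choosing.

import numpy as np
from scipy.special import sici

SQ3 = np.sqrt(3.0)

def transfer_phi(x):
    # T_phi(x) = 9/x^2 * ( sin(x/sqrt3)/(x/sqrt3) - cos(x/sqrt3) ), valid for x > 0
    y = x / SQ3
    return 9.0 / x**2 * (np.sin(y) / y - np.cos(y))

def transfer_D(x):
    # T_D(x) with the decaying choice C_1 = C_2 = 0
    si_p, ci_p = sici(x + x / SQ3)
    si_m, ci_m = sici(x - x / SQ3)
    return (9.0 * SQ3 / x * np.sin(x / SQ3)
            - 3.0 * SQ3 * np.cos(x) * (ci_p - ci_m)
            - 3.0 * SQ3 * np.sin(x) * (si_p - si_m))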
§ THE SCALAR INDUCED GRAVITATIONAL WAVES
In this section, we derive the EOM for the SIGWs.
To this end, we expand the action (<ref>) up to the third order and focus on terms that are quadratic in the scalar perturbation and linear in the tensor perturbations, which correspond to the source term in the EOM for the SIGWs.
The action that is relevant to the SIGWs is given by
S_GW=S^(2)_TT+S^(3)_SST,
where
S^(2)_TT=∫d^3x dτ a^2[1/8(h^'_ijh^'ij-∂_k h_ij∂^k h^ij)+1/2ℳϵ^ijk∂_j h_klh^ l_i],
is the quadratic action for the tensor perturbations with
ℳ=2(2ℋg(φ)+g'(φ)).
The cubic action involving two scalar modes and one the tensor modes is
S^(3)_SST=∫d^3x dτ a^2(ℒ^PC_ij+ℒ^PV_ij)h^ij,
where
ℒ^PC_ij= 1/2∂_iδφ∂_jδφ-2c_5∂_iϕ∂_jψ-c_4(2ℋϕ∂_i∂_j D'+ϕ'∂_i∂_j D'+ϕ∂_i∂_j D”.
.+2∂^kϕ∂_k∂_j∂_i D+2ϕ∂^k∂_k∂_i∂_j D+∂_j C'∂_iϕ-ϕ'∂_i∂_j C+2ℋϕ∂_i∂_jC-3ψ'∂_i∂_jC.
.+2ℋψ∂_i∂_j C-3ψ∂_i∂_j C'+2ℋψ∂_i∂_j D'+3ψ'∂_i∂_j D'+3ψ∂_i∂_j D”-2∂^kψ∂_k∂_i∂_j D.
.-2ψ∂^k∂_k∂_i∂_j D-∂_i∂_j C∂^k∂_k C+∂_k∂_i C∂^k∂_j C+C'∂_i∂_j C'+C”∂_i∂_j C-2ℋC'∂_i∂_j C.
.+2ℋ∂_i C'∂_j D'+∂_i D'∂_jC”+∂_i C'∂_jD”-2ℋ∂^k C∂_k∂_i∂_j D+∂_i C'∂^k∂_k∂_j D.
.-2∂_i∂_k D'∂^k∂_j C-∂_i D'∂^k∂_k∂_j C+2ℋ∂^k∂_j D'∂_k∂_i D+2∂^k∂_jD”∂_k∂_i D.
.+∂_k∂_j D'∂^k∂_i D'+∂_iD”∂^k∂_k∂_j D+2∂^k∂_i∂_j D∂^l∂_l∂_k D-2∂_l∂_k∂_j D∂^l∂^k∂_i D)
correspond to terms that preserve the parity symmetry,
and
ℒ^PV_ij= ℳϵ_jkl(∂^k∂^m D∂^l∂_m∂_i D-∂^k∂_i C∂^l D')+2g_φϵ_jkl∂^lδφ(∂^k∂_i C+∂^k∂_i D')
+2gϵ_jkl(2∂^k∂_i C∂^lϕ-2∂^k∂_i D'∂^lϕ+4∂^kψ∂^l∂_i D'
-∂^k∂_i C∂^l C'-∂^k C'∂^l∂_i D'-∂^k∂_i C∂^l D”-2∂^k∂_i∂_m D∂^l∂^m C
+2∂^k∂^m D∂^l∂_m∂_i D'-∂^k D”∂^l∂_i D')
correspond to terms that are parity-violating, respectively.
By varying the action (<ref>) with respect to the tensor perturbations h^ij, we obtain the EOM for the SIGWs,
-1/4(h^”_ij+2ℋh^'_ij-∇^2h_ij)+1/2ℳ(ϵ_ilk∂_l h_kj+ϵ_jlk∂_l h_ki)=𝒯^lm_ ijs_lm,
where 𝒯^lm_ ij is the projection tensor, and the source reads
s_ij=-1/2(ℒ^PC_ij+ℒ^PC_ji+ℒ^PV_ij+ℒ^PV_ji).
In the above, we have symmetrized the source with respect to i↔ j.
According to Eq. (<ref>), if c_4=0, the perturbations C and D drop out in the quadratic action (<ref>).
However, they do appear in the cubic action of the SIGWs (<ref>) even in the case of c_4 = 0, which results in the strong coupling problem.
In order to solve the EOM of SIGWs (<ref>), we decompose h_ij into circularly polarized modes as
h_ij(x,τ)=∑_A=R,L∫d^3k/(2π)^3/2e^ik·xp^A_ijh^A_k(τ),
where the circular polarization tensors are defined as
p^R_ij=1/√(2)(𝐞^+_ij+i𝐞^×_ij), p^L_ij=1/√(2)(𝐞^+_ij-i𝐞^×_ij).
The plus and cross polarization tensors can be expressed as
𝐞^+_ij= 1/√(2)(𝐞_i 𝐞_j-𝐞̅_i 𝐞̅_j),
𝐞_ij^×= 1/√(2)(𝐞_i𝐞̅_j+𝐞̅_i 𝐞_j),
where 𝐞_i(k) and 𝐞̅_i(k) are two basis vectors which are orthogonal to each other and perpendicular to the wave vector k, i.e., satisfying k·𝐞= k·𝐞̅=𝐞·𝐞̅=0 and |𝐞|=|𝐞̅|=1.
In Eq. (<ref>), the projection tensor extracts the transverse and trace-free part of the source, of which the definition is
𝒯^lm_ ijs_lm(x,τ)=∑_A=R,L∫d^3 k/(2π)^3/2e^i k· xp_ij^A p^Alms̃_lm( k,τ),
where s̃_ij is the Fourier transformation of the source s_ij.
With the above settings, we can now rewrite the EOM of SIGWs in Fourier space as
u^A”_ k+(ω^2_A-a”/a)u^A_ k=-4aS^A_k,
where u^A=ah^A,
ω^2_A=k^2-4ℳλ^Ak, (λ^R=1, λ^L=-1),
and
S^A_k=p^Aijs̃_ij(k,τ).
The source S^A_ k can be divided into two parts: the parity-conserved part and the parity-violating part, given by
S^A_ k=S^A(PC)_ k+S^A(PV)_ k,
where
S^A(PC)_ k= ∫d^3 k'/(2π)^3/2p^Aijk^'_i k^'_jζ( k')ζ( k- k') f_PC(u,v,x),
S^A(PV)_ k= ∫d^3 k'/(2π)^3/2p^Aijk^'_i k^'_jζ( k')ζ( k- k') f_PV(k,u,v,x),
and u=k'/k, v=| k-k'|/k.
Note
p^Aijk^'_i k^'_j=1/2k^'2sin^2(θ)e^2iλ^Aℓ,
where θ is the angle between k' and k while ℓ is the azimuthal angle of k'. The function f_PC(u,v,x) and f^A_PV(u,v,x) are defined as
f_PC(u,v,x)= -2/9[1/2(ukT^*_ψ(ux)+ℋT_ϕ(ux))(vkT^*_ψ(vx)+ℋT_ϕ(vx))/ℋ^2-ℋ'+T_ϕ(ux)T_ϕ(vx).
.-c_4(-8ℋ/vkT_ϕ(ux)T^*_D(vx)+4ℋ/vkT^**_D(ux)T^*_D(vx)..
..-2ℋ1-u^2-v^2/uv^2kT^*_D(ux)T_D(vx)-1-u^2+v^2/v^2T^**_D(ux)T_D(vx)..
..-(1-u^2-v^2)(1-u^2+v^2)/2u^2v^2T_D(ux)T_D(vx))+(u↔ v)],
and
f^A_PV(u,v,x)= -2/9λ^A[ℳ(1-u^2-v^2/2uv^2kT_D(ux)T_D(vx)-1/vkT^*_D(ux)T^*_D(vx)).
.+4g_φφ'u/vukT^*_ϕ(ux)+ℋT_ϕ(ux)/(ℋ^2-ℋ')T^*_D(vx).
.+2g(-4u/vT_ψ(ux)T^*_D(vx)+ 2u-v/vT^**_D(ux)T^*_D(vx)..
..+21-u^2-v^2/uvT_D(ux)T^*_D(vx))+u↔ v],
respectively. The ∗ represents derivatives with respect to the arguments. In deriving Eqs. (<ref>) and (<ref>), we have used the relations C=D', ϕ=ψ, and
δφ=ψ'+ℋϕ/ℋ^2-ℋ'φ'.
Eq. (<ref>) can be solved by the method of Green's function,
h^A_ k(τ)=-4/a(τ)∫^τdτ̅ G^A_k(τ,τ̅)
a(τ̅)S^A_ k(τ̅),
where the Green's function G^A_k(τ,τ̅) satisfies the equation
G^A”_k(τ,τ̅)+(ω_A^2-a”/a)G^A_k(τ,τ̅)=δ(τ-τ̅).
As for the Green's function, the deviation from the standard GR is characterized by the parameter ℳ.
Generally, since ω_A given in Eq. (<ref>) is an involved function of both the wave number k and the conformal time τ (see Eq. (<ref>)), it is difficult to solve Eq. (<ref>) and get the expression for the Green's function analytically. Nevertheless, for our purpose of studying the contributions of the scalar perturbations to the SIGWs, we assume that the change of the Green's function from that in GR is also minimal. More precisely, since ω_A is related to the propagation speed of the GWs, we assume that during the generation of the SIGWs, ω_A is approximately time independent and depends only on the wave number.
In fact, an exponential form of the coupling function
g(φ)=g_0e^αφ,
renders ω_A independent of time and allows us to obtain an analytical solution of Eq. (<ref>).
Using the background Eqs. (<ref>), the solution of the scalar field is found to be
φ=2βln(τ/τ_0)+φ_0,
where φ_0 is the value of φ at τ_0 and β=±1, which corresponds to φ'=± 2/τ, respectively.
Substituting Eqs. (<ref>) and (<ref>) into the definition of ℳ, we have
ℳ=4(1+αβ)g_0e^αφ_0τ^2αβ-1/τ^2αβ_0.
From Eq. (<ref>), it is clear that if we set 2αβ-1=0, ℳ becomes constant.
As a result,
ω^2_A=k^2(1-4λ^Aℳ_0/k),
with
ℳ_0=6g_0e^αφ_0/τ_0,
which is independent of time.
With these assumptions, we can solve Eq. (<ref>) analytically to get the expression of Green's function,
G^A_k(τ,τ̅)=sin[ω_A(τ-τ̅)]/ω_AΘ(τ-τ̅),
where Θ is the Heaviside step function.
The constant ℳ_0 defined in Eq. (<ref>) has the dimension of energy, which can be viewed as the characteristic energy scale of parity violation in our model.
It is therefore interesting to have an estimation of ℳ_0 based on the current observation.
The recent observations from GW170817 <cit.> and GRB170817A <cit.> constrain the speed of GWs to be
-3× 10^-15≤ c_gw-1≤ 7× 10^-16.
Recalling the definition of ω_A in Eq. (<ref>),
c_gw=ω_A/k=(1-4ℳ_0λ^A/k)^1/2≃ 1-2ℳ_0λ^A/k,
which means
|ℳ_0|/k<3.5× 10^-16.
Therefore, the typical energy scale of parity violation is much smaller than the wave numbers of interest.
In Ref. <cit.>, the authors constrain ℳ_0 with the GW events of binary black hole mergers (BBH) in the LIGO-Virgo catalogs GWTC-1 and GWTC-2; the result is ℳ_0<1.6 × 10^-42 GeV∼𝒪(10^-3) Mpc^-1. Since the SIGWs are generated on small scales, k≫1 Mpc^-1, we have ℳ_0/k≪ 1. From the EOM of SIGWs (<ref>) and the source term (<ref>), the PV term is also suppressed by |ℳ_0|/k, namely, f^A_PV∝ℳ_0/k, which means the effect of the PV term on SIGWs is negligible.
§ THE POWER SPECTRA OF THE SIGWS
The solutions of the circularly polarized modes can be written in a compact form
h^A_ k(τ)=4 ∫d^3 k'/(2π)^3/2 p^Aijk^'_i k^'_jζ( k')ζ( k- k')1/k^2I^A(k,u,v,x),
where
I^A(k,u,v,x) =-∫_0^xdx̅a(τ̅)/a(τ)k G^A_k(τ,τ̅)(f_PC(u,v,x̅)+f^A_PV(u,v,x̅))
= I^A_PC(k,u,v,x)+I^A_PV(k,u,v,x),
with
I^A_PC(k,u,v,x)=-∫_0^xdx̅a(τ̅)/a(τ)kG^A_k(τ,τ̅)f_PC(u,v,x̅),
and
I^A_PV(k,u,v,x)=-∫_0^xdx̅a(τ̅)/a(τ)k G^A_k(τ,τ̅)f^A_PV(u,v,x̅).
According to whether the perturbations C and D contribute or not, we can split I^A_PC into two parts, which we denote I^A_PC1 and I^A_PC2, respectively.
Specifically, I^A_PC1 does not include contributions from C and D, which correspond to the first two terms of f_PC. The analytic expression for I^A_PC1 can be found in Appendix <ref>. The other parts, I^A_PC2 and I^A_PV cannot be calculated analytically, so we will compute them numerically.
The power spectra of the SIGWs 𝒫_h^A are defined by
⟨ h^A_k h^C_k'⟩ =2π^2/k^3δ^3( k+ k')δ^AC𝒫^A_h(k).
With the above definition of 𝒫_h^A and the solution of SIGWs, we can obtain the power spectra of the SIGWs [Here, we assume ζ is Gaussian, please refer to Refs. <cit.> and references therein for the non-Gaussian effects.]
𝒫^A_h(k,x)=4∫_0^∞du∫_|1-u|^1+udv
𝒥(u,v)I^A(u,v,x)^2𝒫_ζ(uk)𝒫_ζ(vk),
where
𝒥(u,v)=[4u^2-(1+u^2-v^2)^2/4uv]^2,
and 𝒫_ζ is the power spectrum of primordial curvature perturbation.
The fractional energy density of the SIGWs is [Note that in the literature <cit.>, there is an additional 1/2 in front of h_ij defined in metric, and the prefactor of Ω_GW is 1/48. Of course, the results are independent of the definition of h_ij <cit.>.]
Ω_GW(k,x) =1/12(k/ℋ)^2∑_A=R,L𝒫^A_h(k,x)=x^2/12∑_A=R,L𝒫^A_h(k,x)
=1/3∫_0^∞du∫_|1-u|^1+udv
𝒥(u,v)∑_A=R,LĨ^A(k,u,v,x)^2𝒫_ζ(uk)𝒫_ζ(vk),
where the overline represents the time average, and Ĩ^A(k,u,v,x)^2=I^A(k,u,v,x)^2x^2.
The GWs behave as free radiation, thus the fractional energy density of the SIGWs at the present time Ω_GW,0 can be expressed as
<cit.>
Ω_GW,0(k)=Ω_GW(k,η→∞)Ω_r,0,
where Ω_r,0 is the current fractional energy density of the radiation and approximately 9× 10^-5 <cit.>.
In order to analyze the features of the SIGWs in our model, we use a concrete power spectrum of the primordial curvature perturbation to compute the energy density of the SIGWs. Consider the energy density of SIGWs induced by the monochromatic power spectrum,
𝒫_ζ(k)=𝒜_ζδ(ln(k/k_p)),
then we obtain the energy density of SIGWs at the present time
Ω_GW,0(k)=1/3Ω_r,0𝒜_ζ^2k̃^-2𝒥(k̃^-1,k̃^-1)∑_A=R,LĨ^A(k,k̃^-1,k̃^-1,x→∞)^2Θ(2-k̃),
where k̃=k/k_p.
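For completeness, assembling the formula above into a number is straightforward once the oscillation-averaged kernel is available; the sketch below only wires the prefactors together, and the kernel itself (the polarization-summed square of Ĩ^A, combining the analytic piece derived in the appendix with the numerically computed parts) has to be supplied by the user.

import numpy as np

def omega_gw0_monochromatic(k_tilde, kernel_sq, A_zeta=1.0, omega_r0=9e-5):
    # Present-day SIGW energy density for the delta-function P_zeta.
    # kernel_sq(k_tilde): averaged sum over A of |I~^A(k, 1/k~, 1/k~, x -> inf)|^2
    if k_tilde >= 2.0:
        return 0.0                                # Theta(2 - k~): momentum triangle must close
    u = v = 1.0 / k_tilde
    J = (4.0 * u**2 - (1.0 + u**2 - v**2)**2)**2 / (4.0 * u * v)**2
    return omega_r0 * A_zeta**2 / 3.0 * k_tilde**(-2) * J * kernel_sq(k_tilde)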
We numerically calculate the energy density of SIGWs and show the results in Figs. <ref> and <ref>. In order to compare the energy density of SIGWs in our model with that in GR, we also present the results of GR.
According to Fig. <ref>, the energy density of SIGWs from the left-hand polarized mode is almost the same as that from the right-hand polarized mode, which means that the effect from the PV term on the SIGWs is negligible. However, the contributions from the perturbations C and D can have a significant impact on SIGWs, particularly at peak scales,
which can be seen in both Figs. <ref> and <ref>.
The SIGWs in our model also exhibit some other interesting features. In GR, the scalar perturbation oscillates as sin(x/√(3)) and cos(x/√(3)), so there is a divergence at k̃=2/√(3) due to the resonant amplification <cit.>. In our model, the perturbations C and D behave differently, oscillating instead as sin(x) and cos(x). This results in resonant amplification at other scales. From Fig. <ref>, we can see that another peak appears at k̃=1+1/√(3) in the case of the monochromatic power spectrum. This multi-peak feature can be used to distinguish our model from GR.
§ CONCLUSION
In this paper, we calculated the SIGWs in symmetric teleparallel gravity with a simple PV term QQ. The action of our model is given in Eq. (<ref>).
In order to evade the strong coupling problem that has shown up in Ref. <cit.>, we replace the teleparallel equivalent Einstein-Hilbert action by a general non-metricity scalar ℚ, which is a linear combination of scalar monomials that are quadratic in the non-metricity tensor.
Under the requirement of no higher-order time derivative terms in the EOMs of linear scalar perturbations such that the possible Ostrogradsky instability is evaded, the constant parameters in ℚ must satisfy the constraint Eq. (<ref>) and only two parameters c_4 and c_5 are independent. The strong coupling problem can be avoided only when c_4≠ 0.
We solved the EOMs of linear scalar perturbations and obtained their transfer functions during the radiation-dominated era. We have chosen the coupling function of the PV term to be the exponential form Eq. (<ref>), which ensures that the speed of SIGWs is independent of time.
We further derived the analytical expression for Green's function of the tensor perturbations Eq. (<ref>).
We then calculated the power spectra and the energy density of SIGWs. In order to analyze the features of SIGWs in our model, we evaluated numerically the energy density of SIGWs with a monochromatic power spectrum for the primordial curvature perturbation.
Under the observation constraints on the propagating speeds of the GWs, we found that the effect of the PV term to the SIGWs is negligible.
However, the contribution to SIGWs from the perturbations of connection can be significant, and results in a multi-peak structure in the energy density of SIGWs.
This feature makes our model, and in fact more general symmetric teleparallel gravity theories, distinguishable from GR.
Fengge Zhang thanks Zheng Chen and Yang Yu for their helpful discussion. This work was supported by the National Natural Science Foundation of China (NSFC) under the grant No. 11975020.
§ THE QUADRATIC ACTION OF LINEAR SCALAR PERTURBATIONS
The quadratic action of scalar perturbation is S^(2)_SS=∫d^3x dτ a^2 ℒ, where
ℒ= 1/2(δφ')^2-1/2∂_i δφ∂^i δφ-1/2 a^2 V_φφδφ^2-a^2 V_φδφϕ+3 a^2 V_φδφψ
-(ϕ+3ψ)δφ' φ'-2(c_1+c_2+c_3+c_4+c_5)(ϕ')^2-6(c_1+3c_3)(ψ')^2
-2(c_1+c_2+c_3+c_4+c_5)(C”)^2
-((4c_1+c_2+16c_3+c_4+4c_5)ℋ^2-1/4(φ')^2+1/2 a^2 V) ϕ^2
-6((4c_1+c_2+16c_3+c_4+4c_5)ℋ^2-1/4(φ')^2-1/2a^2 V) ϕψ
-9((4c_1+c_2+16c_3+c_4+4c_5)ℋ^2-1/4(φ')^2+1/2 a^2 V) ψ^2
+2(2c_1+2c_2+8c_3+2c_4+5c_5)ℋϕϕ'+6(2c_1+2c_2+8 c_3+2c_4+5c_5)ℋψϕ'
-6(2c_1+8c_3+c_5)ℋϕψ'-18(2c_1+8c_3+c_5)ℋψψ'+6(2c_3+c_5)ϕ'ψ'
-2(2c_1+2c_2+8c_3+2c_4+5c_5)ℋϕ C”-6(2c_1+2c_2+8c_3+2c_4+5c_5)ℋψ C”
-2(2c_1+2c_2+8c_3+2c_4+5c_5)ℋ C' C”+4(c_1+c_2+c_3+c_4+c_5)ϕ' C”
-6(2c_3+c_5)ψ' C”+2(3c_1+c_2+9c_3+c_4+3 c_5)∂_i ψ∂^i ψ+2(c_1+c_3)∂_i ϕ∂^i ϕ
-2(6c_3+c_5)∂_i ψ∂^i ϕ-2(2c_1+c_2+12c_3+3c_4+6c_5)ℋϕ∂_i ∂^i D'
-2(c_2+2c_3+c_4+2c_5)ϕ∂^i ∂_i D”
-6(2c_1+c_2+8c_3+c_4+3c_5)ℋψ∂_i ∂^i D'-(4c_1+2c_2+12c_3+3c_5)ψ'∂_i ∂^i D'
-2(2c_3+c_5)∂^i ϕ∂_j ∂^j ∂_i D+4(c_1+c_2+3c_3+c_4+2c_5)∂^i ψ∂_j ∂^j ∂_i D
+(12c_3+2c_4+5c_5)∂_i C' ∂^i ψ-(2c_4+3c_5)∂_i D”∂^i ψ
-(4c_1+2c_2+4c_3+c_5)∂_i C' ∂^i ϕ-2(c_2+c_4+2c_5)ℋϕ∂_i ∂^i C
-(2c_4+c_5)ϕ' ∂_i ∂^i C+2(c_2+c_4+2c_5)ℋψ∂_i ∂^i C+(2c_2+3c_5)ψ' ∂_i ∂^i C
+2(c_2+c_4+2c_5)ℋ C' ∂_i ∂^i C+1/2(6c_1+5c_2+4c_3+c_4+2c_5)∂_i C' ∂^i C'
-2(2c_1+c_2+8c_3+c_4+3c_5)ℋ∂_i D'∂^i C'
-1/2(6c_1+5c_2+4c_3+c_4+2c_5)∂_i∂^i D' ∂_j∂^j D'-(4c_3+2c_4+3c_5)∂_i ∂^i D' C”
-(2c_1+3c_2+c_4+c_5)∂^i C' ∂_i D”+1/2(2c_1+c_2+c_4)∂_i D”∂^i D”
+(2c_1+3c_2+c_4+c_5)∂_j ∂^j D' ∂_i ∂^i C+(2c_4+c_5) C”∂_i ∂^i C
-2(2c_1+2c_2+8c_3+2c_4+5c_5)ℋ∂_i D”∂^i C-1/2(2c_1+c_2+c_4)∂_i ∂^i C ∂_j ∂^j C
+(4c_3+2c_4+3c_5)∂^i C' ∂_j ∂^j ∂_i D-(2c_4+c_5)∂^i D”∂_j ∂^j ∂_i D
+2(c_2+c_4+2c_5)ℋ∂^i C ∂_j ∂^j ∂_i D-2(2c_1+c_2+8c_3+c_4+3c_5)ℋ∂_j ∂_i D ∂^j ∂^i D'+
+2(c_1+c_2+c_3+c_4+c_5)∂_j ∂^j ∂^i D ∂_k ∂^k ∂_i D.
§ THE INTEGRAL KERNEL
In this appendix, we give the integral kernel I^A_PC1. With the Green's function (<ref>), I^A_PC1 can be expressed as
I^A_PC1(k,u,v,x)=sin(wx)/wxI^A_PC1s(k,u,v,x)+cos(wx)/wxI^A_PC1c(k,u,v,x),
where the subscript “s” and “c” stand for contributions involving the sine and cosine functions, respectively, and w=ω_A/k. We also write
I^A_PC1s(k,u,v,x)=ℐ^A_pc1s(k,u,v,x)- ℐ^A_pc1s(k,u,v,0),
I^A_PC1c(k,u,v,x)=ℐ^A_pc1c(k,u,v,x)- ℐ^A_pc1c(k,u,v,0),
where ℐ^A_pc1s and ℐ^A_pc1c are defined by
ℐ^A_pc1s(k,u,v,y)=-∫d y cos y f_PC(u,v,y) y,
ℐ^A_pc1c(k,u,v,y)=∫d y sin y f_PC(u,v,y) y.
After lengthy calculations, we obtain
ℐ^A_pc1s(k,u,v,y)= 3/(2u^3v^3y^4)(-18uvy^2cos(uy/√(3))cos(vy/√(3))cos(wy)+6uvwy^3cos(uy/√(3))cos(vy/√(3))sin(wy)
-6√(3)vwy^2cos(vy/√(3))sin(uy/√(3))sin(wy)-6√(3)uwy^2cos(uy/√(3))sin(vy/√(3))sin(wy)
+√(3)vy(18-u^2y^2+v^2y^2-3w^2y^2)cos(vy/√(3))cos(wy)sin(uy/√(3))
+√(3)uy(18-v^2y^2+u^2y^2-3w^2y^2)cos(uy/√(3))cos(wy)sin(vy/√(3))
+3(18-u^2y^2-v^2y^2-3w^2y^2)cos(wy)sin(uy/√(3))sin(vy/√(3))
+3wy(6+u^2y^2+v^2y^2-3w^2y^2)sin(uy/√(3))sin(vy/√(3))sin(wy))
-3(u^2+v^2-3w^2)^2/(8u^3v^3)(Ci[(w+(u+v)/√(3))y]+Ci[|w-(u+v)/√(3)|y]
-Ci[(w+(u-v)/√(3))y]-Ci[(w-(u-v)/√(3))y]),
and
ℐ^A_pc1c(k,u,v,y)= 3/(2u^3v^3y^4)(6uvwy^3cos(uy/√(3))cos(vy/√(3))cos(wy)-6√(3)vwy^2cos(vy/√(3))cos(wy)sin(uy/√(3))
-6√(3)uwy^2cos(uy/√(3))cos(wy)sin(vy/√(3))+18uvy^2cos(uy/√(3))cos(vy/√(3))sin(wy)
-√(3)vy(18-u^2y^2+v^2y^2-3w^2y^2)cos(vy/√(3))sin(uy/√(3))sin(wy)
-√(3)uy(18-v^2y^2+u^2y^2-3w^2y^2)cos(uy/√(3))sin(vy/√(3))sin(wy)
+3(18-u^2y^2-v^2y^2-3w^2y^2)sin(uy/√(3))sin(vy/√(3))sin(wy)
+3wy(6+u^2y^2+v^2y^2-3w^2y^2)cos(wy)sin(uy/√(3))sin(vy/√(3)))
+3(u^2+v^2-3w^2)^2/(8u^3v^3)(Si[(w+(u+v)/√(3))y]+Si[(w-(u+v)/√(3))y]
-Si[(w+(u-v)/√(3))y]-Si[(w-(u-v)/√(3))y]).
We also have the following limits for ℐ^A_pc1s
ℐ^A_pc1s(u,v,y→ 0)=3(u^2+v^2-3w^2)/(8u^3v^3)(4uv-(u^2+v^2-3w^2)log|(3w^2-(u+v)^2)/(3w^2-(u-v)^2)|),
and
ℐ^A_pc1s(u,v,y→∞)=0,
thus
I^A_PC1s(u,v,x →∞)=-3(u^2+v^2-3w^2)/(8u^3v^3)(4uv-(u^2+v^2-3w^2)log|(3w^2-(u+v)^2)/(3w^2-(u-v)^2)|).
As for ℐ^A_pc1c, we have
ℐ^A_pc1c(u,v,y→ 0)=0,
and
ℐ^A_pc1c(u,v,y→∞)=-3(u^2+v^2-3w^2)^2π/(8u^3v^3)Θ(u+v-√(3)w),
thus
I^A_PC1c(u,v,x →∞)=-3(u^2+v^2-3w^2)^2π/(8u^3v^3)Θ(u+v-√(3)w).
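For a quick numerical check of these late-time limits, the following minimal Python sketch (function names are ours and purely illustrative) evaluates I^A_PC1s(u,v,x→∞) and I^A_PC1c(u,v,x→∞) for given u, v and w=ω_A/k, and assembles the corresponding late-time kernel via the decomposition I^A_PC1=sin(wx)/(wx) I^A_PC1s+cos(wx)/(wx) I^A_PC1c given above.

import numpy as np

def I_PC1s_inf(u, v, w):
    # Late-time limit I^A_PC1s(u, v, x -> infinity) quoted above.
    q = u**2 + v**2 - 3.0 * w**2
    log_term = np.log(np.abs((3.0 * w**2 - (u + v)**2) / (3.0 * w**2 - (u - v)**2)))
    return -3.0 * q / (8.0 * u**3 * v**3) * (4.0 * u * v - q * log_term)

def I_PC1c_inf(u, v, w):
    # Late-time limit I^A_PC1c(u, v, x -> infinity); Theta is the Heaviside step function.
    q = u**2 + v**2 - 3.0 * w**2
    theta = 1.0 if (u + v) > np.sqrt(3.0) * w else 0.0
    return -3.0 * np.pi * q**2 / (8.0 * u**3 * v**3) * theta

def I_PC1_late(u, v, w, x):
    # Late-time kernel assembled from the sine/cosine decomposition of I^A_PC1.
    return (np.sin(w * x) * I_PC1s_inf(u, v, w)
            + np.cos(w * x) * I_PC1c_inf(u, v, w)) / (w * x)

# Example values; for w = 1 these limits reduce to the standard GR kernels
# (up to the factor 1/2 discussed in the text below).
print(I_PC1s_inf(1.2, 0.8, 1.0), I_PC1c_inf(1.2, 0.8, 1.0), I_PC1_late(1.2, 0.8, 1.0, 50.0))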
From the above expressions (<ref>) and (<ref>), when w=1 these two expressions reduce to those in GR <cit.>, apart from an extra factor of 1/2 that originates from our definition of the tensor perturbations.
[Abbott et al.(2016a)Abbott et al.]Abbott:2016nmj
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW151226: Observation of Gravitational Waves from a
22-Solar-Mass Binary Black Hole Coalescence, https://doi.org/10.1103/PhysRevLett.116.241103 journal
journal Phys. Rev. Lett. volume 116, pages 241103 (year 2016a), https://arxiv.org/abs/1606.04855 arXiv:1606.04855 NoStop
[Abbott et al.(2016b)Abbott et al.]Abbott:2016blz
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title Observation of Gravitational Waves from a Binary Black
Hole Merger, https://doi.org/10.1103/PhysRevLett.116.061102
journal journal Phys. Rev. Lett. volume 116, pages 061102 (year
2016b), https://arxiv.org/abs/1602.03837
arXiv:1602.03837 NoStop
[Abbott et al.(2017a)Abbott et al.]Abbott:2017gyy
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW170608: Observation of a 19-solar-mass Binary Black
Hole Coalescence, https://doi.org/10.3847/2041-8213/aa9f0c
journal journal Astrophys. J. Lett. volume 851, pages L35 (year
2017a), https://arxiv.org/abs/1711.05578
arXiv:1711.05578 NoStop
[Abbott et al.(2017b)Abbott et al.]TheLIGOScientific:2017qsa
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW170817: Observation of Gravitational Waves from a
Binary Neutron Star Inspiral, https://doi.org/10.1103/PhysRevLett.119.161101 journal
journal Phys. Rev. Lett. volume 119, pages 161101 (year 2017b), https://arxiv.org/abs/1710.05832 arXiv:1710.05832 NoStop
[Abbott et al.(2017c)Abbott et al.]Abbott:2017oio
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW170814: A Three-Detector Observation of Gravitational
Waves from a Binary Black Hole Coalescence, https://doi.org/10.1103/PhysRevLett.119.141101 journal
journal Phys. Rev. Lett. volume 119, pages 141101 (year 2017c), https://arxiv.org/abs/1709.09660 arXiv:1709.09660 NoStop
[Abbott et al.(2017d)Abbott et al.]Abbott:2017vtc
author author B. P. Abbott et al. (collaboration LIGO Scientific, VIRGO), title GW170104: Observation of a 50-Solar-Mass Binary Black
Hole Coalescence at Redshift 0.2, https://doi.org/10.1103/PhysRevLett.118.221101 journal
journal Phys. Rev. Lett. volume 118, pages 221101 (year 2017d), note [Erratum: Phys.Rev.Lett. 121, 129901 (2018)], https://arxiv.org/abs/1706.01812 arXiv:1706.01812 NoStop
[Abbott et al.(2019)Abbott
et al.]LIGOScientific:2018mvr
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GWTC-1: A Gravitational-Wave Transient Catalog of Compact
Binary Mergers Observed by LIGO and Virgo during the First and Second
Observing Runs, https://doi.org/10.1103/PhysRevX.9.031040
journal journal Phys. Rev. X volume 9, pages 031040 (year 2019), https://arxiv.org/abs/1811.12907 arXiv:1811.12907 NoStop
[Abbott et al.(2020a)Abbott et al.]Abbott:2020khf
author author R. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW190814: Gravitational Waves from the Coalescence of a
23 Solar Mass Black Hole with a 2.6 Solar Mass Compact Object, https://doi.org/10.3847/2041-8213/ab960f journal journal Astrophys. J. Lett. volume 896, pages L44 (year 2020a), https://arxiv.org/abs/2006.12611 arXiv:2006.12611 NoStop
[Abbott et al.(2020b)Abbott et al.]Abbott:2020uma
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW190425: Observation of a Compact Binary Coalescence
with Total Mass ∼ 3.4 M_⊙, https://doi.org/10.3847/2041-8213/ab75f5 journal journal Astrophys. J. Lett. volume 892, pages L3 (year 2020b), https://arxiv.org/abs/2001.01761 arXiv:2001.01761 NoStop
[Abbott et al.(2020c)Abbott et al.]LIGOScientific:2020stg
author author R. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW190412: Observation of a Binary-Black-Hole Coalescence
with Asymmetric Masses, https://doi.org/10.1103/PhysRevD.102.043015 journal journal Phys. Rev. D volume 102, pages 043015 (year 2020c), https://arxiv.org/abs/2004.08342 arXiv:2004.08342 NoStop
[Akrami et al.(2020)Akrami
et al.]Akrami:2018odb
author author Y. Akrami et al. (collaboration Planck), title Planck 2018 results. X. Constraints on inflation, https://doi.org/10.1051/0004-6361/201833887 journal journal Astron. Astrophys. volume 641, pages A10 (year 2020), https://arxiv.org/abs/1807.06211 arXiv:1807.06211 NoStop
[Sato-Polito et al.(2019)Sato-Polito, Kovetz, and Kamionkowski]Sato-Polito:2019hws
author author G. Sato-Polito, author E. D. Kovetz, and author M. Kamionkowski, title Constraints on the primordial curvature
power spectrum from primordial black holes, https://doi.org/10.1103/PhysRevD.100.063521 journal journal Phys. Rev. D volume 100, pages 063521 (year 2019), https://arxiv.org/abs/1904.10971 arXiv:1904.10971 NoStop
[Lu et al.(2019)Lu,
Gong, Yi, and Zhang]Lu:2019sti
author author Y. Lu, author Y. Gong, author Z. Yi, and author
F. Zhang, title Constraints
on primordial curvature perturbations from primordial black hole dark matter
and secondary gravitational waves, https://doi.org/10.1088/1475-7516/2019/12/031 J. Cosmol. Astropart. Phys. volume 12year year (2019) pages pages 031, https://arxiv.org/abs/1907.11896 arXiv:1907.11896 NoStop
[Ananda et al.(2007)Ananda,
Clarkson, and Wands]Ananda:2006af
author author K. N. Ananda, author C. Clarkson, and author D. Wands, title The Cosmological gravitational wave background from primordial
density perturbations, https://doi.org/10.1103/PhysRevD.75.123518
journal journal Phys. Rev. D volume 75, pages 123518 (year 2007), https://arxiv.org/abs/gr-qc/0612013 arXiv:gr-qc/0612013
NoStop
[Saito and Yokoyama(2009)]Saito:2008jc
author author R. Saito and author J. Yokoyama, title Gravitational wave background as a probe of
the primordial black hole abundance, https://doi.org/10.1103/PhysRevLett.102.161101 journal
journal Phys. Rev. Lett. volume 102, pages 161101 (year 2009), note
[Erratum: Phys.Rev.Lett. 107, 069901 (2011)], https://arxiv.org/abs/0812.4339 arXiv:0812.4339 NoStop
[Orlofsky et al.(2017)Orlofsky, Pierce, and Wells]Orlofsky:2016vbd
author author N. Orlofsky, author A. Pierce, and author J. D. Wells, title Inflationary theory and pulsar timing investigations of
primordial black holes and gravitational waves, https://doi.org/10.1103/PhysRevD.95.063518 journal journal Phys. Rev. D volume 95, pages 063518 (year 2017), https://arxiv.org/abs/1612.05279 arXiv:1612.05279 NoStop
[Nakama et al.(2017)Nakama,
Silk, and Kamionkowski]Nakama:2016gzw
author author T. Nakama, author J. Silk, and author M. Kamionkowski, title Stochastic gravitational waves associated with the
formation of primordial black holes, https://doi.org/10.1103/PhysRevD.95.043511 journal journal Phys. Rev. D volume 95, pages 043511 (year 2017), https://arxiv.org/abs/1612.06264 arXiv:1612.06264 NoStop
[Wang et al.(2018)Wang,
Wang, Huang, and Li]Wang:2016ana
author author S. Wang, author Y.-F. Wang,
author Q.-G. Huang, and author T. G. F. Li, title Constraints on the Primordial Black Hole Abundance from the First
Advanced LIGO Observation Run Using the Stochastic Gravitational-Wave
Background, https://doi.org/10.1103/PhysRevLett.120.191102
journal journal Phys. Rev. Lett. volume 120, pages 191102 (year
2018), https://arxiv.org/abs/1610.08725 arXiv:1610.08725
NoStop
[Kohri and Terada(2018)]Kohri:2018awv
author author K. Kohri and author T. Terada, title Semianalytic calculation of gravitational wave spectrum
nonlinearly induced from primordial curvature perturbations, https://doi.org/10.1103/PhysRevD.97.123532 journal journal Phys. Rev. D volume 97, pages 123532 (year 2018), https://arxiv.org/abs/1804.08577 arXiv:1804.08577 NoStop
[Espinosa et al.(2018)Espinosa, Racco, and Riotto]Espinosa:2018eve
author author J. R. Espinosa, author D. Racco, and author A. Riotto, title A Cosmological Signature of the SM Higgs Instability: Gravitational
Waves, https://doi.org/10.1088/1475-7516/2018/09/012 J. Cosmol.
Astropart. Phys. volume 09year year
(2018) pages pages 012, https://arxiv.org/abs/1804.07732 arXiv:1804.07732 NoStop
[Kuroyanagi et al.(2018)Kuroyanagi, Chiba, and Takahashi]Kuroyanagi:2018csn
author author S. Kuroyanagi, author T. Chiba, and author T. Takahashi, title Probing the Universe through the Stochastic Gravitational
Wave Background, https://doi.org/10.1088/1475-7516/2018/11/038 J.
Cosmol. Astropart. Phys. volume 11year year (2018) pages pages 038, https://arxiv.org/abs/1807.00786 arXiv:1807.00786 NoStop
[Domènech(2020)]Domenech:2019quo
author author G. Domènech, title Induced gravitational waves in a general
cosmological background, https://doi.org/10.1142/S0218271820500285
journal journal Int. J. Mod. Phys. D volume 29, pages 2050028 (year
2020), https://arxiv.org/abs/1912.05583 arXiv:1912.05583
NoStop
[Fumagalli et al.(2021)Fumagalli, Renaux-Petel, and Witkowski]Fumagalli:2020nvq
author author J. Fumagalli, author S. Renaux-Petel, and author L. T. Witkowski, title Oscillations in the stochastic
gravitational wave background from sharp features and particle production
during inflation, https://doi.org/10.1088/1475-7516/2021/08/030
J. Cosmol. Astropart. Phys. volume 08year
year (2021) pages pages
030, https://arxiv.org/abs/2012.02761 arXiv:2012.02761
NoStop
[Lin et al.(2020)Lin,
Gao, Gong, Lu, Zhang, and Zhang]Lin:2020goi
author author J. Lin, author Q. Gao, author Y. Gong, author
Y. Lu, author C. Zhang, and author F. Zhang, title Primordial black holes and
secondary gravitational waves from k and G inflation, https://doi.org/10.1103/PhysRevD.101.103515 journal journal Phys. Rev. D volume 101, pages 103515 (year 2020), https://arxiv.org/abs/2001.05909 arXiv:2001.05909 NoStop
[Domènech et al.(2020)Domènech, Pi, and Sasaki]Domenech:2020kqm
author author G. Domènech, author S. Pi, and author M. Sasaki, title Induced gravitational waves as a probe of thermal history of the
universe, https://doi.org/10.1088/1475-7516/2020/08/017 J.
Cosmol. Astropart. Phys. volume 08year year (2020) pages pages 017, https://arxiv.org/abs/2005.12314 arXiv:2005.12314 NoStop
[Lu et al.(2020)Lu,
Ali, Gong, Lin, and Zhang]Lu:2020diy
author author Y. Lu, author A. Ali, author Y. Gong, author
J. Lin, and author
F. Zhang, title Gauge
transformation of scalar induced gravitational waves, https://doi.org/10.1103/PhysRevD.102.083503(2020) journal
journal Phys. Rev. D volume 102, pages 083503 (year 2020), https://arxiv.org/abs/2006.03450 arXiv:2006.03450 NoStop
[Domènech(2021)]Domenech:2021ztg
author author G. Domènech, title Scalar Induced Gravitational Waves
Review, https://doi.org/10.3390/universe7110398 journal journal Universe volume 7, pages 398 (year 2021), https://arxiv.org/abs/2109.01398 arXiv:2109.01398 NoStop
[Zhang et al.(2021)Zhang,
Lin, and Lu]Zhang:2021vak
author author F. Zhang, author J. Lin, and author Y. Lu, title
Double-peaked inflation model: Scalar induced gravitational waves and
primordial-black-hole suppression from primordial non-Gaussianity, https://doi.org/10.1103/PhysRevD.104.063515 journal journal Phys. Rev. D volume 104, pages 063515 (year 2021), note [Erratum:
Phys.Rev.D 104, 129902 (2021)], https://arxiv.org/abs/2106.10792
arXiv:2106.10792 NoStop
[Wang et al.(2022)Wang,
Vardanyan, and Kohri]Wang:2021djr
author author S. Wang, author V. Vardanyan, and author K. Kohri, title Probing primordial black holes with anisotropies in stochastic
gravitational-wave background, https://doi.org/10.1103/PhysRevD.106.123511 journal journal Phys. Rev. D volume 106, pages 123511 (year 2022), https://arxiv.org/abs/2107.01935 arXiv:2107.01935 NoStop
[Zhang(2022)]Zhang:2021rqs
author author F. Zhang, title Primordial black holes and scalar induced
gravitational waves from the E model with a Gauss-Bonnet term, https://doi.org/10.1103/PhysRevD.105.063539 journal journal Phys. Rev. D volume 105, pages 063539 (year 2022), https://arxiv.org/abs/2112.10516 arXiv:2112.10516 NoStop
[Yi and Fei(2023)]Yi:2022ymw
author author Z. Yi and author Q. Fei, title Constraints on primordial curvature spectrum from
primordial black holes and scalar-induced gravitational waves, https://doi.org/10.1140/epjc/s10052-023-11233-3 journal
journal Eur. Phys. J. C volume 83, pages 82 (year 2023), https://arxiv.org/abs/2210.03641 arXiv:2210.03641 NoStop
[Danzmann(1997)]Danzmann:1997hm
author author K. Danzmann, title LISA: An ESA cornerstone mission for a
gravitational wave observatory, https://doi.org/10.1088/0264-9381/14/6/002 journal journal Class. Quant. Grav. volume 14, pages 1399 (year 1997)NoStop
[Amaro-Seoane et al.(2017)Amaro-Seoane et al.]LISA:2017pwj
author author P. Amaro-Seoane et al. (collaboration LISA), title Laser Interferometer Space Antenna, @noop (year 2017), https://arxiv.org/abs/1702.00786
arXiv:1702.00786 NoStop
[Luo et al.(2016)Luo et al.]Luo:2015ght
author author J. Luo et al. (collaboration TianQin), title
TianQin: a space-borne gravitational wave detector, https://doi.org/10.1088/0264-9381/33/3/035010 journal
journal Class. Quant. Grav. volume
33, pages 035010 (year 2016), https://arxiv.org/abs/1512.02076 arXiv:1512.02076 NoStop
[Hu and Wu(2017)]Hu:2017mde
author author W.-R. Hu and author Y.-L. Wu, title The Taiji Program in Space for gravitational wave physics
and the nature of gravity, https://doi.org/10.1093/nsr/nwx116
journal journal Natl. Sci. Rev. volume 4, pages 685 (year
2017)NoStop
[Kramer and Champion(2013)]Kramer:2013kea
author author M. Kramer and author D. J. Champion, title The European Pulsar Timing Array and the
Large European Array for Pulsars, https://doi.org/10.1088/0264-9381/30/22/224009 journal
journal Class. Quant. Grav. volume
30, pages 224009 (year 2013)NoStop
[Hobbs et al.(2010)Hobbs et al.]Hobbs:2009yy
author author G. Hobbs et al., title The international pulsar timing
array project: using pulsars as a gravitational wave detector, https://doi.org/10.1088/0264-9381/27/8/084013 journal
journal Class. Quant. Grav. volume
27, pages 084013 (year 2010), https://arxiv.org/abs/0911.5206 arXiv:0911.5206 NoStop
[McLaughlin(2013)]McLaughlin:2013ira
author author M. A. McLaughlin, title The North American Nanohertz Observatory
for Gravitational Waves, https://doi.org/10.1088/0264-9381/30/22/224008 journal
journal Class. Quant. Grav. volume
30, pages 224008 (year 2013), https://arxiv.org/abs/1310.0758 arXiv:1310.0758 NoStop
[Hobbs(2013)]Hobbs:2013aka
author author G. Hobbs, title The Parkes Pulsar Timing Array, https://doi.org/10.1088/0264-9381/30/22/224007 journal
journal Class. Quant. Grav. volume
30, pages 224007 (year 2013), https://arxiv.org/abs/1307.2629 arXiv:1307.2629 NoStop
[Moore et al.(2015)Moore,
Cole, and Berry]Moore:2014lga
author author C. J. Moore, author R. H. Cole, and author C. P. L. Berry, title Gravitational-wave sensitivity curves, https://doi.org/10.1088/0264-9381/32/1/015014 journal
journal Class. Quant. Grav. volume
32, pages 015014 (year 2015), https://arxiv.org/abs/1408.0740 arXiv:1408.0740 NoStop
[Lee and Yang(1956)]Lee:1956qn
author author T. D. Lee and author C.-N. Yang, title Question of Parity Conservation in Weak Interactions, https://doi.org/10.1103/PhysRev.104.254 journal
journal Phys. Rev. volume 104, pages 254 (year 1956)NoStop
[Wu et al.(1957)Wu,
Ambler, Hayward, Hoppes, and Hudson]Wu:1957my
author author C. S. Wu, author E. Ambler, author R. W. Hayward, author
D. D. Hoppes, and author
R. P. Hudson, title
Experimental Test of Parity Conservation in β Decay, https://doi.org/10.1103/PhysRev.105.1413 journal journal Phys. Rev. volume 105, pages
1413 (year 1957)NoStop
[Green and Schwarz(1984)]Green:1984sg
author author M. B. Green and author J. H. Schwarz, title Anomaly Cancellation in Supersymmetric D=10
Gauge Theory and Superstring Theory, https://doi.org/10.1016/0370-2693(84)91565-X journal
journal Phys. Lett. B volume 149, pages 117 (year 1984)NoStop
[Witten(1984)]Witten:1984dg
author author E. Witten, title Some Properties of O(32) Superstrings, https://doi.org/10.1016/0370-2693(84)90422-2 journal
journal Phys. Lett. B volume 149, pages 351 (year 1984)NoStop
[Philcox(2022)]Philcox:2022hkh
author author O. H. E. Philcox, title Probing parity violation with
the four-point correlation function of BOSS galaxies, https://doi.org/10.1103/PhysRevD.106.063501 journal journal Phys. Rev. D volume 106, pages 063501 (year 2022), https://arxiv.org/abs/2206.04227 arXiv:2206.04227 NoStop
[Hou et al.(2022)Hou,
Slepian, and Cahn]Hou:2022wfj
author author J. Hou, author Z. Slepian, and author R. N. Cahn, title Measurement of Parity-Odd Modes in the Large-Scale 4-Point
Correlation Function of SDSS BOSS DR12 CMASS and LOWZ Galaxies, @noop
(year 2022), https://arxiv.org/abs/2206.03625
arXiv:2206.03625 NoStop
[Minami and Komatsu(2020)]Minami:2020odp
author author Y. Minami and author E. Komatsu, title New Extraction of the Cosmic Birefringence
from the Planck 2018 Polarization Data, https://doi.org/10.1103/PhysRevLett.125.221301 journal
journal Phys. Rev. Lett. volume 125, pages 221301 (year 2020), https://arxiv.org/abs/2011.11254 arXiv:2011.11254 NoStop
[Eskilt and Komatsu(2022)]Eskilt:2022cff
author author J. R. Eskilt and author E. Komatsu, title Improved constraints on cosmic birefringence
from the WMAP and Planck cosmic microwave background polarization data, https://doi.org/10.1103/PhysRevD.106.063503 journal
journal Phys. Rev. D volume 106, pages 063503 (year 2022), https://arxiv.org/abs/2205.13962 arXiv:2205.13962 NoStop
[Cabass et al.(2023)Cabass,
Jazayeri, Pajer, and Stefanyszyn]Cabass:2022rhr
author author G. Cabass, author S. Jazayeri,
author E. Pajer, and author D. Stefanyszyn, title
Parity violation in the scalar trispectrum: no-go theorems and yes-go
examples, https://doi.org/10.1007/JHEP02(2023)021 J. High Energ.
Phys. volume 02year year (2023) pages pages 021, https://arxiv.org/abs/2210.02907 arXiv:2210.02907 NoStop
[Creque-Sarbinowski et al.(2023)Creque-Sarbinowski, Alexander, Kamionkowski, and Philcox]Creque-Sarbinowski:2023wmb
author author C. Creque-Sarbinowski, author S. Alexander, author M. Kamionkowski, and author O. Philcox, title Parity-Violating Trispectrum from
Chern-Simons Gravity, @noop (year 2023), https://arxiv.org/abs/2303.04815 arXiv:2303.04815 NoStop
[Jackiw and Pi(2003)]Jackiw:2003pm
author author R. Jackiw and author S. Y. Pi, title Chern-Simons modification of general
relativity, https://doi.org/10.1103/PhysRevD.68.104012 journal journal Phys. Rev. D volume
68, pages 104012 (year 2003), https://arxiv.org/abs/gr-qc/0308071 arXiv:gr-qc/0308071 NoStop
[Lue et al.(1999)Lue,
Wang, and Kamionkowski]Lue:1998mq
author author A. Lue, author L.-M. Wang, and author M. Kamionkowski, title Cosmological signature of new parity violating
interactions, https://doi.org/10.1103/PhysRevLett.83.1506
journal journal Phys. Rev. Lett. volume 83, pages 1506 (year
1999), https://arxiv.org/abs/astro-ph/9812088
arXiv:astro-ph/9812088 NoStop
[Satoh et al.(2008)Satoh,
Kanno, and Soda]Satoh:2007gn
author author M. Satoh, author S. Kanno, and author J. Soda, title Circular Polarization of Primordial Gravitational Waves in
String-inspired Inflationary Cosmology, https://doi.org/10.1103/PhysRevD.77.023526 journal journal Phys. Rev. D volume 77, pages 023526 (year 2008), https://arxiv.org/abs/0706.3585 arXiv:0706.3585 NoStop
[Saito et al.(2007)Saito,
Ichiki, and Taruya]Saito:2007kt
author author S. Saito, author K. Ichiki, and author A. Taruya, title Probing polarization states of primordial gravitational waves with
CMB anisotropies, https://doi.org/10.1088/1475-7516/2007/09/002
J. Cosmol. Astropart. Phys. volume 09year
year (2007) pages pages
002, https://arxiv.org/abs/0705.3701 arXiv:0705.3701
NoStop
[Alexander and Yunes(2009)]Alexander:2009tp
author author S. Alexander and author N. Yunes, title Chern-Simons Modified General Relativity, https://doi.org/10.1016/j.physrep.2009.07.002 journal
journal Phys. Rept. volume 480, pages 1 (year 2009), https://arxiv.org/abs/0907.2562 arXiv:0907.2562 NoStop
[Yunes et al.(2010)Yunes,
O'Shaughnessy, Owen, and Alexander]Yunes:2010yf
author author N. Yunes, author R. O'Shaughnessy, author B. J. Owen, and author S. Alexander, title Testing gravitational parity violation
with coincident gravitational waves and short gamma-ray bursts, https://doi.org/10.1103/PhysRevD.82.064017 journal journal Phys. Rev. D volume 82, pages 064017 (year 2010), https://arxiv.org/abs/1005.3310 arXiv:1005.3310 NoStop
[Gluscevic and Kamionkowski(2010)]Gluscevic:2010vv
author author V. Gluscevic and author M. Kamionkowski, title Testing Parity-Violating Mechanisms
with Cosmic Microwave Background Experiments, https://doi.org/10.1103/PhysRevD.81.123529 journal journal Phys. Rev. D volume 81, pages 123529 (year 2010), https://arxiv.org/abs/1002.1308 arXiv:1002.1308 NoStop
[Myung and Moon(2014)]Myung:2014jha
author author Y. S. Myung and author T. Moon, title Primordial massive gravitational waves from
Einstein-Chern-Simons-Weyl gravity, https://doi.org/10.1088/1475-7516/2014/08/061 J. Cosmol. Astropart. Phys. volume 08year year (2014) pages pages 061, https://arxiv.org/abs/1406.4367 arXiv:1406.4367 NoStop
[Kawai and Kim(2019)]Kawai:2017kqt
author author S. Kawai and author J. Kim, title Gauss–Bonnet Chern–Simons
gravitational wave leptogenesis, https://doi.org/10.1016/j.physletb.2018.12.019 journal
journal Phys. Lett. B volume 789, pages 145 (year 2019), https://arxiv.org/abs/1702.07689 arXiv:1702.07689 NoStop
[Nair et al.(2019)Nair,
Perkins, Silva, and Yunes]Nair:2019iur
author author R. Nair, author S. Perkins,
author H. O. Silva, and author N. Yunes, title Fundamental Physics Implications for Higher-Curvature Theories from
Binary Black Hole Signals in the LIGO-Virgo Catalog GWTC-1, https://doi.org/10.1103/PhysRevLett.123.191101 journal
journal Phys. Rev. Lett. volume 123, pages 191101 (year 2019), https://arxiv.org/abs/1905.00870 arXiv:1905.00870 NoStop
[Nishizawa and Kobayashi(2018)]Nishizawa:2018srh
author author A. Nishizawa and author T. Kobayashi, title Parity-violating gravity and GW170817, https://doi.org/10.1103/PhysRevD.98.124018 journal
journal Phys. Rev. D volume 98, pages 124018 (year 2018), https://arxiv.org/abs/1809.00815 arXiv:1809.00815 NoStop
[Odintsov and Oikonomou(2022)]Odintsov:2022hxu
author author S. D. Odintsov and author V. K. Oikonomou, title Chirality of gravitational waves in
Chern-Simons f(R) gravity cosmology, https://doi.org/10.1103/PhysRevD.105.104054 journal journal Phys. Rev. D volume 105, pages 104054 (year 2022), https://arxiv.org/abs/2205.07304 arXiv:2205.07304 NoStop
[Bartolo and Orlando(2017)]Bartolo:2017szm
author author N. Bartolo and author G. Orlando, title Parity breaking signatures from a
Chern-Simons coupling during inflation: the case of non-Gaussian
gravitational waves, https://doi.org/10.1088/1475-7516/2017/07/034
J. Cosmol. Astropart. Phys. volume 07year
year (2017) pages pages
034, https://arxiv.org/abs/1706.04627 arXiv:1706.04627
NoStop
[Bartolo et al.(2019)Bartolo,
Orlando, and Shiraishi]Bartolo:2018elp
author author N. Bartolo, author G. Orlando, and author M. Shiraishi, title Measuring chiral gravitational waves in Chern-Simons
gravity with CMB bispectra, https://doi.org/10.1088/1475-7516/2019/01/050 J. Cosmol. Astropart. Phys. volume 01year year (2019) pages pages 050, https://arxiv.org/abs/1809.11170 arXiv:1809.11170 NoStop
[Horava(2009)]Horava:2009uw
author author P. Horava, title Quantum Gravity at a Lifshitz Point, https://doi.org/10.1103/PhysRevD.79.084008 journal journal Phys. Rev. D volume 79, pages 084008 (year 2009), https://arxiv.org/abs/0901.3775 arXiv:0901.3775 NoStop
[Crisostomi et al.(2018)Crisostomi, Noui, Charmousis, and Langlois]Crisostomi:2017ugk
author author M. Crisostomi, author K. Noui,
author C. Charmousis, and author D. Langlois, title Beyond Lovelock gravity: Higher derivative metric theories, https://doi.org/10.1103/PhysRevD.97.044034 journal
journal Phys. Rev. D volume 97, pages 044034 (year 2018), https://arxiv.org/abs/1710.04531 arXiv:1710.04531 NoStop
[Gao and Hong(2020)]Gao:2019liu
author author X. Gao and author X.-Y. Hong, title Propagation of gravitational waves in a cosmological
background, https://doi.org/10.1103/PhysRevD.101.064057 journal journal Phys. Rev. D volume
101, pages 064057 (year 2020), https://arxiv.org/abs/1906.07131 arXiv:1906.07131 NoStop
[Hu and Gao(2022)]Hu:2021bbo
author author Y.-M. Hu and author X. Gao, title Covariant 3+1 correspondence of the spatially covariant
gravity and the degeneracy conditions, https://doi.org/10.1103/PhysRevD.105.044023 journal journal Phys. Rev. D volume 105, pages 044023 (year 2022), https://arxiv.org/abs/2111.08652 arXiv:2111.08652 NoStop
[Hu and Gao(2021)]Hu:2021yaq
author author Y.-M. Hu and author X. Gao, title Spatially covariant gravity with 2 degrees of freedom:
Perturbative analysis, https://doi.org/10.1103/PhysRevD.104.104007
journal journal Phys. Rev. D volume 104, pages 104007 (year 2021), https://arxiv.org/abs/2104.07615 arXiv:2104.07615 NoStop
[Takahashi and Soda(2009)]Takahashi:2009wc
author author T. Takahashi and author J. Soda, title Chiral Primordial Gravitational Waves from a
Lifshitz Point, https://doi.org/10.1103/PhysRevLett.102.231301
journal journal Phys. Rev. Lett. volume 102, pages 231301 (year
2009), https://arxiv.org/abs/0904.0554 arXiv:0904.0554
NoStop
[Myung(2010)]Myung:2009ug
author author Y. S. Myung, title Chiral gravitational waves from z=2
Hořava-Lifshitz gravity, https://doi.org/10.1016/j.physletb.2009.12.059 journal
journal Phys. Lett. B volume 684, pages 1 (year 2010), https://arxiv.org/abs/0911.0724 arXiv:0911.0724 NoStop
[Wang et al.(2013)Wang,
Wu, Zhao, and Zhu]Wang:2012fi
author author A. Wang, author Q. Wu, author W. Zhao, and author
T. Zhu, title Polarizing
primordial gravitational waves by parity violation, https://doi.org/10.1103/PhysRevD.87.103512 journal journal Phys. Rev. D volume 87, pages 103512 (year 2013), https://arxiv.org/abs/1208.5490 arXiv:1208.5490 NoStop
[Zhu et al.(2013)Zhu,
Zhao, Huang, Wang, and Wu]Zhu:2013fja
author author T. Zhu, author W. Zhao, author Y. Huang, author
A. Wang, and author
Q. Wu, title Effects of
parity violation on non-gaussianity of primordial gravitational waves in
Hořava-Lifshitz gravity, https://doi.org/10.1103/PhysRevD.88.063508 journal journal Phys. Rev. D volume 88, pages 063508 (year 2013), https://arxiv.org/abs/1305.0600 arXiv:1305.0600 NoStop
[Cannone et al.(2015)Cannone,
Gong, and Tasinato]Cannone:2015rra
author author D. Cannone, author J.-O. Gong, and author G. Tasinato, title Breaking discrete symmetries in the effective field
theory of inflation, https://doi.org/10.1088/1475-7516/2015/08/003
J. Cosmol. Astropart. Phys. volume 08year
year (2015) pages pages
003, https://arxiv.org/abs/1505.05773 arXiv:1505.05773
NoStop
[Zhao et al.(2020a)Zhao, Liu, Wen, Zhu,
Wang, Hu, and Zhou]Zhao:2019szi
author author W. Zhao, author T. Liu, author L. Wen, author
T. Zhu, author A. Wang, author Q. Hu, and author C. Zhou, title Model-independent test of the parity symmetry of gravity
with gravitational waves, https://doi.org/10.1140/epjc/s10052-020-8211-4 journal
journal Eur. Phys. J. C volume 80, pages 630 (year 2020a), https://arxiv.org/abs/1909.13007 arXiv:1909.13007 NoStop
[Zhao et al.(2020b)Zhao, Zhu, Qiao, and Wang]Zhao:2019xmm
author author W. Zhao, author T. Zhu, author J. Qiao, and author
A. Wang, title Waveform of
gravitational waves in the general parity-violating gravities, https://doi.org/10.1103/PhysRevD.101.024002 journal journal Phys. Rev. D volume 101, pages 024002 (year 2020b), https://arxiv.org/abs/1909.10887 arXiv:1909.10887 NoStop
[Qiao et al.(2020)Qiao,
Zhu, Zhao, and Wang]Qiao:2019hkz
author author J. Qiao, author T. Zhu, author W. Zhao, and author
A. Wang, title Polarized
primordial gravitational waves in the ghost-free parity-violating gravity, https://doi.org/10.1103/PhysRevD.101.043528 journal
journal Phys. Rev. D volume 101, pages 043528 (year 2020), https://arxiv.org/abs/1911.01580 arXiv:1911.01580 NoStop
[Qiao et al.(2019)Qiao,
Zhu, Zhao, and Wang]Qiao:2019wsh
author author J. Qiao, author T. Zhu, author W. Zhao, and author
A. Wang, title Waveform of
gravitational waves in the ghost-free parity-violating gravities, https://doi.org/10.1103/PhysRevD.100.124058 journal journal Phys. Rev. D volume 100, pages 124058 (year 2019), https://arxiv.org/abs/1909.03815 arXiv:1909.03815 NoStop
[Qiao et al.(2022)Qiao,
Zhu, Li, and Zhao]Qiao:2021fwi
author author J. Qiao, author T. Zhu, author G. Li, and author
W. Zhao, title Post-Newtonian
parameters of ghost-free parity-violating gravities, https://doi.org/10.1088/1475-7516/2022/04/054 J. Cosmol. Astropart. Phys. volume 04year year (2022) pages pages 054, https://arxiv.org/abs/2110.09033 arXiv:2110.09033 NoStop
[Gong et al.(2022)Gong,
Zhu, Niu, Wu, Cui, Zhang, Zhao, and Wang]Gong:2021jgg
author author C. Gong, author T. Zhu, author R. Niu, author
Q. Wu, author J.-L. Cui, author X. Zhang, author W. Zhao, and author A. Wang, title Gravitational wave
constraints on Lorentz and parity violations in gravity: High-order spatial
derivative cases, https://doi.org/10.1103/PhysRevD.105.044034
journal journal Phys. Rev. D volume 105, pages 044034 (year 2022), https://arxiv.org/abs/2112.06446 arXiv:2112.06446 NoStop
[Nieh and Yan(1982)]Nieh:1981ww
author author H. T. Nieh and author M. L. Yan, title An Identity in Riemann-cartan Geometry, https://doi.org/10.1063/1.525379 journal journal
J. Math. Phys. volume 23, pages 373
(year 1982)NoStop
[Chatzistavrakidis et al.(2020)Chatzistavrakidis, Karagiannis, and Schupp]Chatzistavrakidis:2020wum
author author A. Chatzistavrakidis, author G. Karagiannis, and author P. Schupp, title Torsion-induced gravitational θ term
and gravitoelectromagnetism, https://doi.org/10.1140/epjc/s10052-020-08600-9 journal
journal Eur. Phys. J. C volume 80, pages 1034 (year 2020), https://arxiv.org/abs/2007.06632 arXiv:2007.06632 NoStop
[Cai et al.(2022)Cai,
Fu, and Yu]Cai:2021uup
author author R.-G. Cai, author C. Fu, and author W.-W. Yu, title Parity violation in stochastic gravitational wave background from
inflation in Nieh-Yan modified teleparallel gravity, https://doi.org/10.1103/PhysRevD.105.103520 journal journal Phys. Rev. D volume 105, pages 103520 (year 2022), https://arxiv.org/abs/2112.04794 arXiv:2112.04794 NoStop
[Wu et al.(2022)Wu,
Zhu, Niu, Zhao, and Wang]Wu:2021ndf
author author Q. Wu, author T. Zhu, author R. Niu, author
W. Zhao, and author
A. Wang, title Constraints on
the Nieh-Yan modified teleparallel gravity with gravitational waves, https://doi.org/10.1103/PhysRevD.105.024035 journal journal Phys. Rev. D volume 105, pages 024035 (year 2022), https://arxiv.org/abs/2110.13870 arXiv:2110.13870 NoStop
[Långvik et al.(2021)Långvik, Ojanperä, Raatikainen, and Rasanen]Langvik:2020nrs
author author M. Långvik, author J.-M. Ojanperä, author S. Raatikainen, and author S. Rasanen, title Higgs inflation with the Holst and the
Nieh–Yan term, https://doi.org/10.1103/PhysRevD.103.083514 journal journal Phys. Rev. D volume 103, pages 083514 (year 2021), https://arxiv.org/abs/2007.12595 arXiv:2007.12595 NoStop
[Li et al.(2020)Li,
Rao, and Zhao]Li:2020xjt
author author M. Li, author H. Rao, and author D. Zhao, title A simple parity violating gravity model without ghost
instability, https://doi.org/10.1088/1475-7516/2020/11/023 J.
Cosmol. Astropart. Phys. volume 11year year (2020) pages pages 023, https://arxiv.org/abs/2007.08038 arXiv:2007.08038 NoStop
[Li et al.(2021)Li,
Rao, and Tong]Li:2021wij
author author M. Li, author H. Rao, and author Y. Tong, title Revisiting a parity violating gravity model without ghost
instability: Local Lorentz covariance, https://doi.org/10.1103/PhysRevD.104.084077 journal journal Phys. Rev. D volume 104, pages 084077 (year 2021), https://arxiv.org/abs/2104.05917 arXiv:2104.05917 NoStop
[Rao(2021)]Rao:2021azn
author author H. Rao, title Parametrized post-Newtonian limit of the
Nieh-Yan modified teleparallel gravity, https://doi.org/10.1103/PhysRevD.104.124084 journal journal Phys. Rev. D volume 104, pages 124084 (year 2021), https://arxiv.org/abs/2107.08597 arXiv:2107.08597 NoStop
[Li and Zhao(2022)]Li:2021mdp
author author M. Li and author D. Zhao, title A simple parity violating model in the symmetric
teleparallel gravity and its cosmological perturbations, https://doi.org/10.1016/j.physletb.2022.136968 journal
journal Phys. Lett. B volume 827, pages 136968 (year 2022), https://arxiv.org/abs/2108.01337 arXiv:2108.01337 NoStop
[Li et al.(2022a)Li, Li, and Rao]Li:2022mti
author author M. Li, author Z. Li, and author H. Rao, title
Ghost instability in the teleparallel gravity model with parity
violations, https://doi.org/10.1016/j.physletb.2022.137395
journal journal Phys. Lett. B volume 834, pages 137395 (year
2022a), https://arxiv.org/abs/2201.02357
arXiv:2201.02357 NoStop
[Li et al.(2022b)Li, Tong, and Zhao]Li:2022vtn
author author M. Li, author Y. Tong, and author D. Zhao, title Possible consistent model of parity violations in the symmetric
teleparallel gravity, https://doi.org/10.1103/PhysRevD.105.104002
journal journal Phys. Rev. D volume 105, pages 104002 (year
2022b), https://arxiv.org/abs/2203.06912
arXiv:2203.06912 NoStop
[Hohmann and Pfeifer(2021)]Hohmann:2020dgy
author author M. Hohmann and author C. Pfeifer, title Teleparallel axions and cosmology, https://doi.org/10.1140/epjc/s10052-021-09165-x journal
journal Eur. Phys. J. C volume 81, pages 376 (year 2021), https://arxiv.org/abs/2012.14423 arXiv:2012.14423 NoStop
[Bombacigno et al.(2021)Bombacigno, Boudet, Olmo, and Montani]Bombacigno:2021bpk
author author F. Bombacigno, author S. Boudet,
author G. J. Olmo, and author G. Montani, title Big bounce and future time singularity resolution in Bianchi I
cosmologies: The projective invariant Nieh-Yan case, https://doi.org/10.1103/PhysRevD.103.124031 journal journal Phys. Rev. D volume 103, pages 124031 (year 2021), https://arxiv.org/abs/2105.06870 arXiv:2105.06870 NoStop
[Iosifidis and Ravera(2021)]Iosifidis:2020dck
author author D. Iosifidis and author L. Ravera, title Parity Violating Metric-Affine Gravity
Theories, https://doi.org/10.1088/1361-6382/abde1a journal journal Class. Quant. Grav. volume 38, pages 115003 (year 2021), https://arxiv.org/abs/2009.03328 arXiv:2009.03328 NoStop
[Hohmann and Pfeifer(2022)]Hohmann:2022wrk
author author M. Hohmann and author C. Pfeifer, title Gravitational wave birefringence in
spatially curved teleparallel cosmology, https://doi.org/10.1016/j.physletb.2022.137437 journal
journal Phys. Lett. B volume 834, pages 137437 (year 2022), https://arxiv.org/abs/2203.01856 arXiv:2203.01856 NoStop
[Conroy and Koivisto(2019)]Conroy:2019ibo
author author A. Conroy and author T. Koivisto, title Parity-Violating Gravity and GW170817 in
Non-Riemannian Cosmology, https://doi.org/10.1088/1475-7516/2019/12/016 J. Cosmol. Astropart. Phys. volume 12year year (2019) pages pages 016, https://arxiv.org/abs/1908.04313 arXiv:1908.04313 NoStop
[Iosifidis(2022)]Iosifidis:2021bad
author author D. Iosifidis, title The full quadratic metric-affine gravity
(including parity odd terms): exact solutions for the affine-connection, https://doi.org/10.1088/1361-6382/ac6058 journal
journal Class. Quant. Grav. volume
39, pages 095002 (year 2022), https://arxiv.org/abs/2112.09154 arXiv:2112.09154 NoStop
[Pagani and Percacci(2015)]Pagani:2015ema
author author C. Pagani and author R. Percacci, title Quantum gravity with torsion and
non-metricity, https://doi.org/10.1088/0264-9381/32/19/195019
journal journal Class. Quant. Grav. volume 32, pages 195019 (year
2015), https://arxiv.org/abs/1506.02882 arXiv:1506.02882
NoStop
[Chen et al.(2022)Chen,
Yu, and Gao]Chen:2022wtz
author author Z. Chen, author Y. Yu, and author X. Gao, title
Polarized gravitational waves in the parity violating scalar-nonmetricity
theory, @noop (year 2022), https://arxiv.org/abs/2212.14362 arXiv:2212.14362 NoStop
[Zhang et al.(2022)Zhang,
Feng, and Gao]Zhang:2022xmm
author author F. Zhang, author J.-X. Feng, and author X. Gao, title Circularly polarized scalar induced gravitational waves from the
Chern-Simons modified gravity, https://doi.org/10.1088/1475-7516/2022/10/054 J. Cosmol. Astropart. Phys. volume 10year year (2022) pages pages 054, https://arxiv.org/abs/2205.12045 arXiv:2205.12045 NoStop
[Feng et al.(2023)Feng,
Zhang, and Gao]Feng:2023veu
author author J.-X. Feng, author F. Zhang, and author X. Gao, title
Scalar induced gravitational waves from Chern-Simons gravity during
inflation era, @noop (year 2023), https://arxiv.org/abs/2302.00950 arXiv:2302.00950 NoStop
[Blas et al.(2009)Blas,
Pujolas, and Sibiryakov]Blas:2009yd
author author D. Blas, author O. Pujolas, and author S. Sibiryakov, title On the Extra Mode and Inconsistency of Horava Gravity, https://doi.org/10.1088/1126-6708/2009/10/029 J. High Energ. Phys. volume 10year year (2009) pages pages 029, https://arxiv.org/abs/0906.3046 arXiv:0906.3046 NoStop
[Charmousis et al.(2009)Charmousis, Niz, Padilla, and Saffin]Charmousis:2009tc
author author C. Charmousis, author G. Niz,
author A. Padilla, and author P. M. Saffin, title
Strong coupling in Horava gravity, https://doi.org/10.1088/1126-6708/2009/08/070 J. High Energ. Phys. volume 08year year (2009) pages pages 070, https://arxiv.org/abs/0905.2579 arXiv:0905.2579 NoStop
[Blas et al.(2010a)Blas, Pujolas, and Sibiryakov]Blas:2009qj
author author D. Blas, author O. Pujolas, and author S. Sibiryakov, title Consistent Extension of Horava Gravity, https://doi.org/10.1103/PhysRevLett.104.181302 journal
journal Phys. Rev. Lett. volume 104, pages 181302 (year 2010a), https://arxiv.org/abs/0909.3525 arXiv:0909.3525 NoStop
[Papazoglou and Sotiriou(2010)]Papazoglou:2009fj
author author A. Papazoglou and author T. P. Sotiriou, title Strong coupling in extended Horava-Lifshitz
gravity, https://doi.org/10.1016/j.physletb.2010.01.054 journal journal Phys. Lett. B volume
685, pages 197 (year 2010), https://arxiv.org/abs/0911.1299 arXiv:0911.1299 NoStop
[Blas et al.(2010b)Blas, Pujolas, and Sibiryakov]Blas:2009ck
author author D. Blas, author O. Pujolas, and author S. Sibiryakov, title Comment on `Strong coupling in extended Horava-Lifshitz
gravity', https://doi.org/10.1016/j.physletb.2010.03.073
journal journal Phys. Lett. B volume 688, pages 350 (year
2010b), https://arxiv.org/abs/0912.0550
arXiv:0912.0550 NoStop
[Beltrán Jiménez et al.(2018)Beltrán Jiménez, Heisenberg, and Koivisto]BeltranJimenez:2017tkd
author author J. Beltrán Jiménez, author L. Heisenberg, and author T. Koivisto, title Coincident General Relativity, https://doi.org/10.1103/PhysRevD.98.044048 journal journal Phys. Rev. D volume 98, pages 044048 (year 2018), https://arxiv.org/abs/1710.03116 arXiv:1710.03116 NoStop
[D'Ambrosio et al.(2020)D'Ambrosio, Garg, Heisenberg, and Zentarra]DAmbrosio:2020nqu
author author F. D'Ambrosio, author M. Garg,
author L. Heisenberg, and author S. Zentarra, title ADM formulation and Hamiltonian analysis of Coincident General
Relativity, @noop (year 2020), https://arxiv.org/abs/2007.03261 arXiv:2007.03261 NoStop
[Zhao(2022)]Zhao:2021zab
author author D. Zhao, title Covariant formulation of f(Q) theory, https://doi.org/10.1140/epjc/s10052-022-10266-4 journal
journal Eur. Phys. J. C volume 82, pages 303 (year 2022), https://arxiv.org/abs/2104.02483 arXiv:2104.02483 NoStop
[Rünkla and Vilson(2018)]Runkla:2018xrv
author author M. Rünkla and author O. Vilson, title Family of scalar-nonmetricity theories of
gravity, https://doi.org/10.1103/PhysRevD.98.084034 journal journal Phys. Rev. D volume
98, pages 084034 (year 2018), https://arxiv.org/abs/1805.12197 arXiv:1805.12197 NoStop
[Abbott et al.(2017e)Abbott et al.]LIGOScientific:2017vwq
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), title GW170817: Observation of Gravitational Waves from a
Binary Neutron Star Inspiral, https://doi.org/10.1103/PhysRevLett.119.161101 journal
journal Phys. Rev. Lett. volume 119, pages 161101 (year 2017e), https://arxiv.org/abs/1710.05832 arXiv:1710.05832 NoStop
[Abbott et al.(2017f)Abbott et al.]LIGOScientific:2017zic
author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo,
Fermi-GBM, INTEGRAL), title Gravitational Waves and
Gamma-rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A, https://doi.org/10.3847/2041-8213/aa920c journal
journal Astrophys. J. Lett. volume
848, pages L13 (year 2017f), https://arxiv.org/abs/1710.05834 arXiv:1710.05834 NoStop
[Cai et al.(2019)Cai,
Pi, and Sasaki]Cai:2018dig
author author R.-g. Cai, author S. Pi, and author M. Sasaki, title Gravitational Waves Induced by non-Gaussian Scalar
Perturbations, https://doi.org/10.1103/PhysRevLett.122.201101
journal journal Phys. Rev. Lett. volume 122, pages 201101 (year
2019), https://arxiv.org/abs/1810.11000 arXiv:1810.11000
NoStop
[Unal(2019)]Unal:2018yaa
author author C. Unal, title Imprints of Primordial Non-Gaussianity on
Gravitational Wave Spectrum, https://doi.org/10.1103/PhysRevD.99.041301 journal journal Phys. Rev. D volume 99, pages 041301 (year 2019), https://arxiv.org/abs/1811.09151 arXiv:1811.09151 NoStop
[Adshead et al.(2021)Adshead,
Lozanov, and Weiner]Adshead:2021hnm
author author P. Adshead, author K. D. Lozanov, and author Z. J. Weiner, title Non-Gaussianity and the induced gravitational
wave background, https://doi.org/10.1088/1475-7516/2021/10/080 J.
Cosmol. Astropart. Phys. volume 10year year (2021) pages pages 080, https://arxiv.org/abs/2105.01659 arXiv:2105.01659 NoStop
[Garcia-Saenz et al.(2023a)Garcia-Saenz, Pinol, Renaux-Petel, and Werth]Garcia-Saenz:2022tzu
author author S. Garcia-Saenz, author L. Pinol,
author S. Renaux-Petel, and author D. Werth, title No-go theorem for scalar-trispectrum-induced gravitational
waves, https://doi.org/10.1088/1475-7516/2023/03/057 J. Cosmol.
Astropart. Phys. volume 03year year
(2023) pages pages 057, https://arxiv.org/abs/2207.14267 arXiv:2207.14267 NoStop
[Garcia-Saenz et al.(2023b)Garcia-Saenz, Lu, and Shuai]Garcia-Saenz:2023zue
author author S. Garcia-Saenz, author Y. Lu, and author Z. Shuai, title Scalar-Induced Gravitational Waves from Ghost Inflation, @noop (year 2023b), https://arxiv.org/abs/2306.09052 arXiv:2306.09052 NoStop
|
http://arxiv.org/abs/2307.00491v1
|
20230702063731
|
Generalized NOMP for Line Spectrum Estimation and Detection from Coarsely Quantized Samples
|
[
"Jiang Zhu",
"Hansheng Zhang",
"Ning Zhang",
"Zhiwei Xu",
"Jun Fang"
] |
eess.SP
|
[
"eess.SP"
] |
Generalized NOMP for Line Spectrum Estimation and Detection from Coarsely Quantized Samples
Jiang Zhu, Hansheng Zhang, Ning Zhang, Zhiwei Xu and Jun Fang
Jiang Zhu, Hansheng Zhang and Zhiwei Xu are with the Ocean College, Zhejiang University, and are also with the engineering research center of oceanic sensing technology and equipment, Ministry of Education, No.1 Zheda Road, Zhoushan, 316021, China (email: {jiangzhu16, 22234019, pxuzw}@zju.edu.cn). Ning Zhang is with the Nanjing Marine Radar Institute, Nanjing, China (email: [email protected]). Jun Fang is with the National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu 611731, China (email: [email protected]).The corresponding author is Jun Fang (email: [email protected]).
As radar systems are equipped with large numbers of antennas and scale up in bandwidth, the cost and power consumption of high-precision (e.g., 10-12 bit) analog-to-digital converters (ADCs) become the limiting factor. As a remedy, line spectrum estimation and detection (LSE&D) from low-resolution (e.g., 1-4 bit) quantization has gradually drawn attention in recent years. Since low-resolution quantization reduces the dynamic range (DR) of the receiver, the theoretical detection probabilities for multiple targets (especially for the weakest target) are analyzed, which reveals the effects of low resolution on weak-signal detection and provides guidelines for system design. The computational complexity of existing methods that solve line spectrum estimation from coarsely quantized samples is often high. In this paper, we propose a fast generalized Newtonized orthogonal matching pursuit (GNOMP) which has superior estimation accuracy and maintains a constant false alarm rate (CFAR) behaviour. Moreover, the approach is easily extended to handle other measurement scenarios such as sign measurements with time-varying thresholds, the compressive setting, the multisnapshot setting, the multidimensional setting, and unknown noise variance. Substantial numerical simulations are conducted to demonstrate the effectiveness of GNOMP in terms of estimation accuracy, detection probability and running time. In addition, real data are provided to demonstrate the effectiveness of GNOMP.
Generalized Newtonized orthogonal matching pursuit (GNOMP), low resolution quantization, constant false alarm rate (CFAR), gridless compressed sensing, line spectral estimation and detection.
§ INTRODUCTION
Line spectrum estimation and detection (LSE&D), which aims to estimate and detect a superposition of several complex exponential signals from noisy measurements, is a fundamental problem in signal processing due to its wide applications in communication, sonar, speech and music analysis <cit.>. Solutions to the LSE&D problem include the discrete Fourier transform (DFT), subspace-based methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) <cit.>, and compressed sensing (CS) based methods such as atomic norm minimization (ANM), Newtonized orthogonal matching pursuit (NOMP) <cit.>, variational LSE (VALSE) <cit.>, iterative reweighted LSE <cit.>, superfast LSE <cit.>, and so on.
A fully digital multiple-input multiple-output (MIMO) radar with high-speed analog-to-digital converters (ADCs) enables high performance and design flexibility due to the wide bandwidth, high dimensionality, and fully digital baseband processing <cit.>. However, the use of high-speed, high-resolution ADCs results in huge power consumption and high hardware complexity <cit.>. Consequently, low-precision (e.g., 1-3 bit) ADCs are employed to relieve this ADC bottleneck. On one hand, compared to a conventional radar, a radar employing low-resolution ADCs has two advantages: first, low-resolution ADCs can be implemented inexpensively and energy efficiently; second, the data rate generated by the antenna array can be largely reduced. However, because a low-resolution ADC is a highly nonlinear device, conventional signal processing methods, e.g., matched filtering (MF) and CS-based methods that ignore the quantization effects, suffer significant information loss and generate harmonic false alarms. On the other hand, a low-resolution ADC reduces the dynamic range (DR), since the DR scales roughly as 6b dB with b being the bit-depth of the ADC, and its effect on weak-signal detection is still unknown. Therefore, how to perform off-grid or gridless frequency estimation and constant false alarm rate (CFAR) based target detection from coarsely quantized samples, and how to reveal the effects of low-resolution quantization on weak-signal detection in the multiple-target scenario, deserve in-depth study.
§.§ Related Work
LSE&D from coarsely quantized samples can be classified into two categories: signal-reconstruction-based and parameter-estimation-and-detection-based methods <cit.>. From the signal reconstruction point of view, several criteria of the ADC, such as the DR and the spurious-free dynamic range (SFDR), have been investigated, the spectrum of the low-resolution quantized (especially 1-bit) signal has been analyzed, and the effects of linear processing on target estimation and detection have been revealed. As for the parameter-estimation-and-detection-based methods, the goal is to perform target detection and estimation directly with the parameterized model via nonlinear processing, such as CS-based algorithms. We now detail these works.
The criteria of ADCs with complex exponential inputs have been analyzed in depth. It has been shown that the DR of a general ADC is 6b+1.72 dB, with b being the bit-depth. It has also been shown that the ADC generates harmonics, which limits its SFDR; such effects also limit the instantaneous SFDR. For radar-related applications, it has been shown that the binary data contain plentiful self-generated <cit.> and cross-generated harmonics <cit.>. In the low signal-to-noise ratio (SNR) scenario, the strengths of the harmonics decay quickly and the conventional fast Fourier transform (FFT) performs well, while in the high SNR scenario the FFT will overestimate the model order. In <cit.>, the fundamental and harmonic beams are formed separately within subbands, and the fine angular-resolution harmonic beams are utilized rather than suppressed.
As for parameter estimation, the Cramér-Rao bound (CRB) has been adopted to analyze the performance bounds of the maximum likelihood estimator (MLE). In <cit.>, scalar parameter estimation with an additive noise control input, a threshold control input and a feedback control input has been analyzed in depth. It is revealed that suitable noise benefits the estimator's performance, well known as the stochastic resonance phenomenon. It is also shown that the threshold input has a significant impact on the performance of the MLE: when the threshold is close to the true value relative to the noise, the performance degradation compared to the unquantized system is small; otherwise, the degradation is significant. For single-tone frequency estimation <cit.>, it is found that 1-bit quantization gives a dramatic increase of the variance at certain frequencies, and a slightly worse performance at other frequencies. In <cit.>, a single stochastic Gaussian point source model is assumed to study DOA estimation, which is a spatial analogue of temporal line spectrum estimation (LSE) with multiple measurement vectors (MMVs), and the CRB for a two-sensor array is derived. It is shown that the estimation error depends weakly on the SNR, and there exist two singular DOA angles, 0^∘ and 30^∘, for which higher SNR results in better estimation performance. In <cit.>, LSE with MMVs from coarsely quantized samples is studied. For single-tone frequency estimation with multiple snapshots and deterministic parameter modelling, the performance bound and its asymptotics under 1-bit quantization are provided. The asymptotics reveal that the CRB is inversely proportional to the number of snapshots and to the cube of the number of samples per snapshot. In the low SNR regime, the CRB is inversely proportional to the SNR, while in the high SNR regime the CRB is inversely proportional to the square root of the SNR.
On-grid methods, which discretize the frequency onto a finite number of grid points, have been adopted to solve the LSE <cit.>. However, they suffer from the grid mismatch issue. For one-bit LFMCW radar, the dimension-reduced generalized approximate message passing (DR-GAMP) approach has been proposed to jointly perform range, Doppler and DOA estimation. To overcome the model mismatch <cit.> incurred by on-grid assumptions <cit.>, a low-complexity MVALSE-EP has been proposed, which performs Newton steps to refine the frequencies. The expectation maximization (EM) algorithm is incorporated to estimate the noise variance automatically for bit-depths greater than 1. In <cit.>, based on signed measurements of an LFMCW radar, range estimation and range-Doppler imaging are studied through the maximum likelihood approach. To reduce the computational complexity, a relaxation-based approach, referred to as the One-bit RELAX algorithm, is proposed, which uses grid refinement to avoid the grid mismatch issue; however, the computational complexity of the One-bit RELAX algorithm is still high. Moreover, the Bayesian information criterion is used to determine the number of scatterers. In <cit.>, the maximum a posteriori (MAP) approach is used to suppress ghosts caused by high-order harmonics in one-bit SAR imaging. In summary, the above works do not study the effects of low resolution on the dynamic range of the ADC for the recovery of coexisting signals. Meanwhile, a fast algorithm that maintains CFAR behaviour, has low computational complexity, achieves super-resolution capability and has high estimation accuracy is still lacking, which motivates our work.
§.§ Main Contributions
In this work, LSE&D from coarsely quantized samples is studied in depth. The main contributions can be summarized as follows. Firstly, the DR of a receiver employing low-resolution ADCs is revealed, especially for 1-bit ADCs. It is shown that in the multiple-frequency scenario, the SNR loss of the weakest signal is due to the quantization and the synthesized signal; a stochastic resonance phenomenon still arises, and the detection of the weak signal may benefit from the other signals. The Rao test is provided, and its false alarm probability and detection probability in terms of the threshold are given. Secondly, the fast GNOMP is proposed, which achieves super-resolution capability, avoids the grid mismatch issue and maintains the CFAR behaviour. Thirdly, the GNOMP is also extended to handle other settings such as the compressive measurement scenario, the multisnapshot measurement scenario, multidimensional LSE, sign measurements with time-varying thresholds, the unknown noise variance scenario, etc. Finally, substantial numerical simulations and real experiments are conducted to demonstrate the efficiency and excellent performance of GNOMP compared to state-of-the-art methods.
The rest of this article is organized as follows. In Section <ref>, the signal model is introduced. The effects of low-resolution quantization in a single-signal scenario with nonidentical thresholds are studied, and their relationship with the multiple-target scenario is discussed in Section <ref>. Then, the proposed GNOMP approach is presented in Section <ref>. In Section <ref>, substantial numerical experiments are conducted to illustrate the frequency estimation and detection performance of GNOMP. The performance of GNOMP is also demonstrated via real data in Section <ref>. Finally, Section <ref> concludes this article.
For a complex vector 𝐱∈ℂ^M, let ℜ{𝐱} and ℑ{𝐱} denote the real and imaginary part of 𝐱, respectively, and let |𝐱| and ∠𝐱 denote the componentwise amplitude and phase of 𝐱, respectively. For a square matrix 𝐀, let diag(𝐀) return a vector with elements being the diagonal of 𝐀, while for a vector 𝐚, let diag(𝐚) return a diagonal matrix with diagonal 𝐚, so that diag( diag(𝐀)) returns a diagonal matrix. Let j denote the imaginary unit. For a matrix 𝐀∈ℂ^N× N, let 𝐀^*, 𝐀^ T and 𝐀^ H be the conjugate, transpose and Hermitian transpose of 𝐀, respectively. For a matrix 𝐀, let |𝐀| denote the elementwise absolute value of 𝐀. Let 𝐈_L denote the identity matrix of dimension L. Let 𝒞𝒩(𝐱;μ,Σ) denote the complex normal (CN) distribution of 𝐱 with mean μ and covariance Σ. For a random vector 𝐱 with probability density function (PDF) p(𝐱), let Proj[p(𝐱)] denote the projection of p(𝐱) onto a Gaussian PDF with diagonal covariance matrix, where the means and variances are matched with those of p(𝐱). Let ϕ(x)=exp(-x^2/2)/√(2π) and Φ(x)=∫_-∞^xϕ(t) dt denote the standard normal PDF and cumulative distribution function (CDF), respectively.
§ PROBLEM SETUP
Consider the line spectrum estimation problem from low resolution quantized samples formulated as[The Generalized NOMP (GNOMP) can be easily extended to the sign measurements from time-varying thresholds, multisnapshot case and compressive case. For simplicity, we only present the standard case but the code we have made available does provide the flexibility.]
𝐲=𝒬({∑_k=1^K𝐚(ω_k)x_k+ϵ})+ j𝒬({∑_k=1^K𝐚(ω_k)x_k+ϵ}),
where
𝐚(ω)=[1, e^ jω,⋯, e^ j(N-1)ω]^ T
is the atom or array manifold vector, 𝐲∈ℂ^N are the measurements, ϵ∈ℂ^N is the additive white Gaussian noise (AWGN) satisfying ϵ∼𝒞𝒩(0,σ^2𝐈_N), σ^2 is the variance of the noise, N is the number of measurements, 𝒬(·) is a uniform/nonuniform quantizer which maps the continuous-valued observations into a finite number of bits. Here we use a uniform quantizer with bit-depth B given as
𝒬(x)=
-γ + (2γ/b)(d+1/2),    if x - d(2γ/b) + γ ∈ [0, 2γ/b), d = 0,1,⋯,b-1,
sign(x)(γ - γ/b),    if |x| > γ,
where b=2^B is the cardinality of the output of the quantizer, γ is the maximum full-scale range. Assume that the quantization intervals for the quantizer 𝒬(·) are {(τ_d,τ_d+1)}_d=0^b-1, where τ_0=-∞, τ_d=d2γ/b-γ, d=1,2,⋯,b-1, τ_b=∞. For example, one-bit quantization refers to B=1, b=2, τ_0=-∞, τ_1=0 and τ_2=∞, 𝒬(·) reduces to the signum function, i.e., 𝒬(·)= sign(·)γ/2. To better describe the quantizer, we define two functions l(·) and u(·) which return the componentwise lower thresholds l(𝐲) and upper thresholds u(𝐲) of the measurements 𝐲. For example, u(-γ+2γ/b(d+1/2))=(d+1)2γ/b-γ and l(-γ+2γ/b(d+1/2))=d2γ/b-γ.
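To make the quantizer concrete, the following is a minimal numerical sketch (Python/NumPy) of 𝒬(·) and of the companion threshold maps l(·) and u(·); the function names and the toy values below are ours and are not part of any released implementation.

```python
import numpy as np

def uniform_quantizer(x, B, gamma):
    """Uniform quantizer with bit-depth B and full-scale range gamma.

    Inputs with |x| > gamma are clipped to the outermost reconstruction
    levels sign(x) * (gamma - gamma / b), where b = 2**B.
    """
    b = 2 ** B
    step = 2.0 * gamma / b
    d = np.clip(np.floor((x + gamma) / step), 0, b - 1)   # cell index 0..b-1
    return -gamma + step * (d + 0.5)                       # reconstruction level

def cell_bounds(y, B, gamma):
    """Lower/upper thresholds (l(y), u(y)) of the cell containing output y."""
    b = 2 ** B
    step = 2.0 * gamma / b
    d = np.round((y + gamma) / step - 0.5)                 # recover the cell index
    lower = np.where(d == 0, -np.inf, d * step - gamma)     # tau_0 = -inf
    upper = np.where(d == b - 1, np.inf, (d + 1) * step - gamma)  # tau_b = +inf
    return lower, upper

# 1-bit special case: Q(x) = sign(x) * gamma / 2
x = np.array([-3.0, -0.2, 0.4, 2.5])
print(uniform_quantizer(x, B=1, gamma=2.0))   # -> [-1., -1., 1., 1.]
```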
The goal of this work is to perform line spectrum estimation and detection, i.e., to infer the unknown parameters {ω_k}_k=1^K, {x_k}_k=1^K, and K, while maintaining the CFAR property.
§ A SINGLE SIGNAL ESTIMATION AND DETECTION WITH NONZERO THRESHOLDS
Directly analyzing the theoretical estimation and detection performance limits in a multiple-target scenario is a very difficult problem due to the inter-sinusoidal interference. As we show later in Section <ref>, the proposed GNOMP is a greedy approach which estimates the sinusoids sequentially. In an ideal setting, the GNOMP first detects the strongest signal by ignoring the other signals. Then, the GNOMP takes the first strong signal into consideration and detects whether the second signal is present or not. Provided that GNOMP has high estimation accuracy, detecting the presence of the current signal can be treated as a binary hypothesis testing problem defined later in (<ref>), where the thresholds can be viewed as the synthesis of the already detected signals. Consequently, it is meaningful to investigate single-signal estimation and detection with nonzero thresholds.
This section introduces the mathematical model for a single signal estimation and detection with nonzero thresholds and known frequency case. Then, the results are extended to address the unknown frequency case.
§.§ Complex Amplitude Unknown
Consider the following binary hypothesis testing problem
ℋ_0:𝐲=𝒬((ζ+ϵ))+ j·𝒬((ζ+ϵ)),
ℋ_1:𝐲=𝒬((ζ+𝐚(ω)x+ϵ))+ j·𝒬((ζ+𝐚(ω)x+ϵ)),
where ζ∈ℂ^N are the nonzero thresholds. For the known frequency ω case, let θ=[ℜ{x},ℑ{x}]^ T denote the unknown deterministic parameters and θ_0=0_2× 1. One could then derive the GLRT, the Wald test and the Rao test. Since the asymptotic performances of the GLRT and the Wald test are the same as that of the Rao test, and since both the GLRT and the Wald test depend on the ML estimate θ̂_1, which makes the computation heavy especially when the frequency is unknown, we focus on the Rao test, similar to <cit.>, which focuses on the zero-threshold case. Note that the Rao test depends on the FIM under hypothesis ℋ_0, while the asymptotic distribution of the Rao test depends on the FIM under ℋ_1; we provide the FIM in the following proposition.
For the quantization with bit-depth B, the FIM 𝐈_B(θ) and the CRB 𝐈_B^-1(θ) are
𝐈_B(θ) = (2/σ^2)( (𝐚^ H diag(𝐡_+(𝐚x+ζ))𝐚) [ 1 0; 0 1 ]
+ ℜ{𝐚^ T diag(𝐡_-(𝐚x+ζ))𝐚} [ 1 0; 0 -1 ]
- ℑ{𝐚^ T diag(𝐡_-(𝐚x+ζ))𝐚} [ 0 1; 1 0 ] ),
and
𝐈_B^-1(θ) = σ^2/( 2[ (𝐚^ H diag(𝐡_+(𝐚x+ζ))𝐚)^2 - ℜ^2{𝐚^ T diag(𝐡_-(𝐚x+ζ))𝐚} - ℑ^2{𝐚^ T diag(𝐡_-(𝐚x+ζ))𝐚} ] )
× ( (𝐚^ H diag(𝐡_+(𝐚x+ζ))𝐚) [ 1 0; 0 1 ]
- ℜ{𝐚^ T diag(𝐡_-(𝐚x+ζ))𝐚} [ 1 0; 0 -1 ]
+ ℑ{𝐚^ T diag(𝐡_-(𝐚x+ζ))𝐚} [ 0 1; 1 0 ] ),
where
𝐡_+(η) = [h_B(ℜ{η},σ^2) + h_B(ℑ{η},σ^2)]/2,
𝐡_-(η) = [h_B(ℜ{η},σ^2) - h_B(ℑ{η},σ^2)]/2,
h_B(x,σ^2) is given by
h_B(x,σ^2)=∑_d=0^b-1[ϕ(τ_d+1-x/σ/√(2))-ϕ(τ_d-x/σ/√(2))]^2/Φ(τ_d+1-x/σ/√(2))-Φ(τ_d-x/σ/√(2)).
Here we use 𝐚 instead of 𝐚(ω) for brevity. Besides, h_B=∞(x,σ^2)=lim_B→∞h_B(x,σ^2)=1, and the FIM 𝐈_∞(θ) is
𝐈_∞(θ)=2/σ^2𝐚^ H𝐚[
[ 1 0; 0 1; ]]=2N/σ^2[
[ 1 0; 0 1; ]].
For one-bit quantization where B=1, b=2^B=2, τ_0=-∞, τ_1=0, τ_2=∞, h_B=1(x,σ^2) simplifies to be
h_B=1(x,σ^2)=ϕ^2(x/σ/√(2))/Φ(x/σ/√(2))Φ(-x/σ/√(2))=1/2π e^-2x^2/σ^2/Φ(x/σ/√(2))Φ(-x/σ/√(2)).
The proof is postponed to Appendix <ref>.
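For numerical work, h_B(x,σ^2) can be evaluated directly from its definition. The sketch below is our own helper (it assumes the τ_d grid defined above and is not taken from any released code); for instance, it reproduces h_{B=1}(0,σ^2)=2/π.

```python
import numpy as np
from scipy.stats import norm

def h_B(x, sigma2, B, gamma):
    """Fisher-information factor h_B(x, sigma^2) of a B-bit uniform quantizer."""
    b = 2 ** B
    s = np.sqrt(sigma2 / 2.0)                 # the sigma/sqrt(2) scaling used in the text
    tau = np.concatenate(([-np.inf], np.arange(1, b) * 2.0 * gamma / b - gamma, [np.inf]))
    val = 0.0
    for d in range(b):
        num = (norm.pdf((tau[d + 1] - x) / s) - norm.pdf((tau[d] - x) / s)) ** 2
        den = norm.cdf((tau[d + 1] - x) / s) - norm.cdf((tau[d] - x) / s)
        if den > 0:
            val += num / den
    return val

print(h_B(0.0, 1.0, B=1, gamma=2.0))          # -> 2/pi ~= 0.6366
print(h_B(0.0, 1.0, B=8, gamma=4.0))          # -> close to the unquantized limit 1
```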
§.§.§ Estimation Performance
We investigate the estimation performance by evaluating the CRB, which equals the trace of 𝐈_B^-1(θ), i.e., tr(𝐈_B^-1(θ)).
§.§.§ Detection Performance
The Rao test decides ℋ_1 if
T_ R(𝐲,ζ)= .∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0^ T𝐈_B^-1(θ_0) .∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0≥γ_th,
where
θ=[{x},{x}]^ T, θ_0=[0,0]^ T,
ln p(𝐲,ζ;θ)
=∑_n=1^N(log(Φ({u(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))-Φ({l(y_n)}-{ζ_n+ a_n(ω)x}/σ/√(2))).
.+log(Φ({u(y_n)}-{ζ_n+ a_n(ω)x}/σ/√(2))-Φ({l(y_n)}-{ζ_n+ a_n(ω)x}/σ/√(2)))),
∂ln p(𝐲,ζ;θ)/∂θ =∑_n=1^N-ϕ({u(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))-ϕ({l(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))/Φ({u(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))-Φ({l(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))[{a_n(ω)},-{a_n(ω)}]^ T/σ/√(2)
-∑_n=1^Nϕ({u(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))-ϕ({l(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))/Φ({u(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))-Φ({l(y_n)}-{ζ_n+a_n(ω)x}/σ/√(2))[{a_n(ω)},{a_n(ω)}]^ T/σ/√(2),
and γ_th is the detection threshold. Now we simplify the Rao test. We first compute 𝐈^-1(θ_0) and ∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0. According to (<ref>), 𝐈^-1(θ_0) is
𝐈_B^-1(θ_0) = σ^2/( 2[ (𝐚^ H diag(𝐡_+(ζ))𝐚)^2 - ℜ^2{𝐚^ T diag(𝐡_-(ζ))𝐚} - ℑ^2{𝐚^ T diag(𝐡_-(ζ))𝐚} ] )
× ( (𝐚^ H diag(𝐡_+(ζ))𝐚) [ 1 0; 0 1 ]
- ℜ{𝐚^ T diag(𝐡_-(ζ))𝐚} [ 1 0; 0 -1 ]
+ ℑ{𝐚^ T diag(𝐡_-(ζ))𝐚} [ 0 1; 1 0 ] ),
∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0 is
∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0 = -(1/(σ/√(2))) [ ℜ{𝐚^ Hφ(σ)}; ℑ{𝐚^ Hφ(σ)} ],
where φ(σ)=[φ_1(σ),φ_2(σ),⋯,φ_N(σ)]^ T and
φ_n(σ) = -ϕ({u(y_n)-ζ_n}/σ/√(2))-ϕ({l(y_n)-ζ_n}/σ/√(2))/Φ({u(y_n)-ζ_n}/σ/√(2))-Φ({l(y_n)-ζ_n}/σ/√(2))- jϕ({u(y_n)-ζ_n}/σ/√(2))-ϕ({l(y_n)-ζ_n}/σ/√(2))/Φ({u(y_n)-ζ_n}/σ/√(2))-Φ({l(y_n)-ζ_n}/σ/√(2)),
We write φ_n instead of φ_n(σ) for brevity. Inserting (<ref>) and (<ref>) into (<ref>) yields the simplified Rao test as
T_ R(𝐲,ζ)
= ∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0^ T 𝐈_B^-1(θ_0) ∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0
= [ (𝐚^ H diag(𝐡_+(ζ))𝐚)|𝐚^ Hφ|^2 - ℜ{(𝐚^ T diag(𝐡_-(ζ))𝐚)(𝐚^ Hφ)^2} ] / [ (𝐚^ H diag(𝐡_+(ζ))𝐚)^2 - |𝐚^ T diag(𝐡_-(ζ))𝐚|^2 ]
= [ (1^ T𝐡_+(ζ))|𝐚^ Hφ|^2 - ℜ{(𝐚^ T diag(𝐡_-(ζ))𝐚)(𝐚^ Hφ)^2} ] / [ (1^ T𝐡_+(ζ))^2 - |𝐚^ T diag(𝐡_-(ζ))𝐚|^2 ].
Note that the Rao test consists of two terms: the first term can be viewed as a matched filter (MF) based test with pseudo measurements φ, while the second term can be viewed as a regularization term due to the nonidentical thresholds in the in-phase/quadrature (I/Q) channels.
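A compact numerical sketch of this simplified statistic is given below. It reuses the cell_bounds and h_B helpers sketched earlier, interprets the two bracketed operators as real and imaginary parts, and all names are our own rather than taken from a released implementation.

```python
import numpy as np
from scipy.stats import norm

def rao_statistic(y, zeta, omega, sigma2, B, gamma):
    """Simplified Rao test T_R(y, zeta) for a single sinusoid at a known frequency."""
    N = y.size
    a = np.exp(1j * omega * np.arange(N))
    s = np.sqrt(sigma2 / 2.0)

    def score_part(y_part, z_part):
        lo, up = cell_bounds(y_part, B, gamma)           # helper sketched above
        num = norm.pdf((up - z_part) / s) - norm.pdf((lo - z_part) / s)
        den = norm.cdf((up - z_part) / s) - norm.cdf((lo - z_part) / s)
        return -num / den

    phi = score_part(y.real, zeta.real) + 1j * score_part(y.imag, zeta.imag)
    h_re = np.array([h_B(z, sigma2, B, gamma) for z in zeta.real])
    h_im = np.array([h_B(z, sigma2, B, gamma) for z in zeta.imag])
    h_plus, h_minus = (h_re + h_im) / 2.0, (h_re - h_im) / 2.0

    c = np.sum(h_plus)                                   # 1^T h_+(zeta)
    q = np.sum(h_minus * a * a)                          # a^T diag(h_-(zeta)) a
    m = np.conj(a) @ phi                                 # a^H phi
    return (c * np.abs(m) ** 2 - np.real(q * m ** 2)) / (c ** 2 - np.abs(q) ** 2)
```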
The Rao test T_ R(𝐲,ζ) follows <cit.>
T_ R(𝐲,ζ)a∼χ_2^2, under ℋ_0,
χ_2^' 2(λ), under ℋ_1,
where λ reduces to
λ =[{x},{x}]𝐈_B(θ_0)[{x},{x}]^ T
=2/σ^21^ T𝐡_+(ζ)|x|^2+2/σ^2{𝐚^ T diag(𝐡_-(ζ))𝐚x^2}
Consequently, the false alarm probability P_FA and detection probability P_ D are
P_FA = ∫_γ_th^∞1/2 e^-x/2 dx
= e^-γ_th/2,
P_ D=Q_1(√(λ),√(γ_th))
=Q_1(√(λ),√(-2lnP_FA)),
due to γ_th = -2lnP_FA, where Q_1(·,·) is the Marcum Q-function.
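Since Q_1(√(λ),√(γ_th)) equals the survival function of a noncentral χ^2 variable with 2 degrees of freedom and noncentrality λ, the threshold and the theoretical detection probability can be evaluated with standard routines; a short sketch follows (the numerical values are purely illustrative).

```python
import numpy as np
from scipy.stats import chi2, ncx2

P_FA = 1e-2
gamma_th = -2.0 * np.log(P_FA)                 # threshold of the central chi-square (2 dof)
assert np.isclose(chi2.sf(gamma_th, df=2), P_FA)

lam = 20.0                                     # illustrative noncentrality parameter lambda
P_D = ncx2.sf(gamma_th, df=2, nc=lam)          # equals Q_1(sqrt(lam), sqrt(gamma_th))
print(gamma_th, P_D)
```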
We now hope to provide some insights to reveal the relationship between the proposed general Rao detector (<ref>), the detector without quantization and the detector under 1 bit quantization. For unquantized measurements one has 𝐡_+(ζ)=1_N
and 𝐡_-(ζ)=0_N. T_ R(𝐲,ζ) (<ref>) reduces to
T_ R,∞(𝐲,ζ)=1/𝐚^ H𝐚|𝐚^ Hφ|^2
According to
lim_Δ→ 0^+ϕ(x+Δ)-ϕ(x)/Φ(x+Δ)-Φ(x)
=ϕ'(x)/ϕ(x)=-x,
we can simplify φ_∞ as
φ_∞ = 𝐲-ζ/σ/√(2)
Therefore T_ R,∞(𝐲,ζ) (<ref>) is simplified to be
T_ R,∞(𝐲,ζ) = (1/(𝐚^ H𝐚)) |𝐚^ H(𝐲-ζ)/(σ/√(2))|^2 = (2/σ^2)(1/(𝐚^ H𝐚)) |𝐚^ H(𝐲-ζ)|^2.
In addition, λ (<ref>) is simplified to be
λ_∞=2/σ^2(𝐚^ H𝐚)|x|^2.
It can be concluded that the Rao test T_ R,∞(𝐲) (<ref>) and the noncentrality parameter λ_∞ (<ref>) are consistent with the results obtained directly from the unquantized measurement model. Besides, the effect caused by the nonzero thresholds can easily be cancelled, and the asymptotic distributions under either hypothesis ℋ_0 or ℋ_1 are independent of the thresholds ζ.
For the quantized measurement model, 𝐡_- is in general very small compared to 𝐡_+. Dropping the terms involving 𝐡_-, or letting 𝐡_-=0, a simplified Rao test T_ R^'(𝐲,ζ)
T_ R^'(𝐲,ζ)=1/𝐚^ H diag(𝐡_+(ζ))𝐚|𝐚^ Hφ|^2=1/1^ T𝐡_+(ζ)|𝐚^ Hφ|^2
can be obtained, which is the weighted MF filter. According to (<ref>), a simplified λ^' is calculated to be
λ^'=2/σ^2(𝐚^ H diag(𝐡_+(ζ))𝐚)|x|^2
=2/σ^21^ T𝐡_+(ζ)|x|^2
Compared to the unquantized measurements, one can define the SNR loss incurred by the quantization as
SNR_ loss=λ_∞/λ^'=N/1^ T𝐡_+(ζ).
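As a quick numerical aid, the SNR loss can be computed directly from 𝐡_+(ζ); the helper below is our own sketch, and the zero-threshold 1-bit case recovers the familiar π/2 (about 1.96 dB) loss.

```python
import numpy as np

def snr_loss_db(h_plus):
    """SNR loss (dB) of the weighted-MF test relative to unquantized data: N / (1^T h_+)."""
    return 10.0 * np.log10(h_plus.size / np.sum(h_plus))

print(snr_loss_db(np.full(512, 2.0 / np.pi)))   # zero-threshold 1-bit case: ~1.96 dB
```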
Next we consider 1-bit quantization. According to (<ref>), φ_n, 1bit is simplified as
φ_n, 1bit = sign({y_n})ϕ({ζ_n}/σ/√(2))/Φ(sign({y_n}){ζ_n}/σ/√(2))+ jsign({y_n})ϕ({ζ_n}/σ/√(2))/Φ(sign({y_n}){ζ_n}/σ/√(2)).
We discuss two cases in which ζ_n/σ is near zero and |ζ_n|/σ is very large.
For ζ_n/σ≈ 0, φ_n, 1bit can be simplified as
φ_n, 1bit = (ϕ(0)+ϕ^'(0){ζ_n}/σ/√(2))sign({y_n})/Φ(0)+Φ^'(0)sign({y_n}){ζ_n}/σ/√(2)+ j(ϕ(0)+ϕ^'(0){ζ_n}/σ/√(2))sign({y_n})/Φ(0)+Φ^'(0)sign({y_n}){ζ_n}/σ/√(2)
=2sign({y_n})/√(2π)(1+sign({y_n})2{ζ_n}/√(π)σ)
+ j2sign({y_n})/√(2π)(1+sign({y_n})2{ζ_n}/√(π)σ)
=√(2/π)sign({y_n})(1-sign({y_n})2{ζ_n}/√(π)σ)
+ j√(2/π)sign({y_n})(1-sign({y_n})2{ζ_n}/√(π)σ),
it can be seen that φ_n, 1bit≈√(2/π)(sign(ℜ{y_n})+ j sign(ℑ{y_n})), independent of the noise variance. For |ζ_n|≫σ, φ_n, 1bit can be simplified as
φ_n, 1bit = sign({y_n})ϕ({ζ_n}/σ/√(2))+sign({y_n}) jϕ({ζ_n}/σ/√(2))
=sign({y_n})/√(2π) e^-^2{ζ_n}/σ^2
+ jsign({y_n})/√(2π) e^-^2{ζ_n}/σ^2,
and the real (or imaginary) part of φ_n, 1bit decays very quickly and approaches to 0 as the absolute value of the real (or imaginary) part of ζ_n increases. This demonstrates that these measurements will not contribute too much for signal detection.
For one-bit quantization, we use the following two approximations
Φ(x)Φ(-x)≈ (1/4) e^-x^2/2 for small |x|, and
Φ(x)Φ(-x)≈ (1/(√(2π)|x|)) e^-x^2/2 for large |x|,
to approximate h(x,σ^2) (<ref>) as
h(x,σ^2)≥ (≈) 2/π e^-x^2/σ^2,x≤√(8/π)σ,
|x|/√(π)σ e^-x^2/σ^2, x≥√(8/π)σ.
Note that h(x,σ^2)|_x=0=2/π≈ 0.637, h(x,σ^2)|_x=√(8/π)σ≈ 0.05 and h(x,σ^2)|_x=0/h(x,σ^2)|_x=√(8/π)σ≈ 12.7.
Suppose that the number of terms such that ζ_n/σ≈ 0 and |ζ_n|/σ≫ 0 are N_ s and N_ l, respectively. The SNR loss (<ref>) can be calculated to be
SNR_ loss^ 1bit≈π/2N/N_ s.
The first term π/2 in (<ref>) is the minimal SNR loss incurred by 1-bit quantization. The second term N/N_ s in (<ref>) is due to the nonzero thresholds, which in our case are the synthesized signal excluding the current signal. We now discuss when |ζ_n|/σ≫ 0 holds. For detecting the weak signal when two signals coexist, once the time-domain SNR of the strong signal exceeds a moderate value such as 0 dB (3 dB), one has 2/π e^-x^2/σ^2|_x^2/σ^2=1≈ 0.234 (2/π e^-x^2/σ^2|_x^2/σ^2=2≈ 0.086), so the contributions made by the corresponding terms φ_n, 1bit in the test T_ R^'(𝐲,ζ) (<ref>) are very small, making the weak signal hard to detect. For a practical LFMCW radar, there often exists a leakage component whose frequency is near zero and whose amplitude is large; this makes detecting the weak signal especially challenging, and the isolation between the transmitter and receiver should be good in order to enhance the weak-signal detection performance of a low-resolution LFMCW radar.
§.§ Frequency and Complex Amplitude are Unknown
For the unknown-frequency case, we also evaluate the performance of unbiased estimators by deriving the CRB. Besides, we propose a detector and analyze its theoretical performance. We discretize the frequency onto the grid Ω_DFT={2π g/N, g=0,1,2,⋯,N-1}. For each frequency ω_g, we define the Rao test as
T_ R(𝐲,ζ,ω_g)
= .∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0^ T𝐈^-1(θ_0) .∂ln p(𝐲,ζ;θ)/∂θ|_θ=θ_0
= (1^ T𝐡_+(ζ))|𝐚^ H(ω_g)φ|^2
-{(𝐚^ T(ω_g) diag(𝐡_-(ζ))𝐚(ω_g))(𝐚^ H(ω_g)φ)^2}/(1^ T𝐡_+(ζ))^2-|𝐚^ T(ω_g) diag(𝐡_-(ζ))𝐚(ω_g) |^2
We conjecture that the distribution of T_ R(𝐲,ζ,ω_g) follows
T_ R(𝐲,ζ,ω_g)a∼χ_2^2, under ℋ_0,
χ_2^' 2(λ_g), under ℋ_1,
where λ_g reduces to
λ_g =2/σ^2|𝐚^ H(ω_g)𝐚(ω)/|𝐚(ω_g)|_2^2|^2(1^ T𝐡_+(ζ)|x|^2+{𝐚^ T(ω) diag(𝐡_-(ζ))𝐚(ω)x^2})
=2/σ^2β_g(1^ T𝐡_+(ζ)|x|^2+{𝐚^ T(ω) diag(𝐡_-(ζ))𝐚(ω)x^2}),
β_g is
β_g=|sinN(ω-ω_g)/2/Nsinω-ω_g/2|^2
due to the mismatch between ω and ω_g, and β_g≤ 1. Note that the terms 𝐚^ H(ω_g)φ in {T_ R(𝐲,ζ,ω_g)}_g=0^N-1 can be evaluated efficiently through the FFT, and the terms 𝐚^ T(ω_g) diag(𝐡_-(ζ))𝐚(ω_g) can likewise be evaluated through the FFT, which reduces the computational complexity significantly. We propose the following Rao test
T_ R(𝐲,ζ)
= max_ω_g T_ R(𝐲,ζ,ω_g).
Note that T_ R(𝐲,ζ,ω_g) is only related to 𝐚^ H(ω_g)φ. In addition, one has
E[φ]=0,
E[φ_nφ_m]=0, E[φ_nφ_m^*]=0,n≠ m,
Provided {ζ}={ζ} which often holds approximately, one has
E[φ_nφ_n]=0, E[φ_nφ_n^*]=h({ζ_n},σ^2)+h({ζ_n},σ^2)=h(ζ,σ^2)+h(ζ,σ^2).
In addition, suppose that
h({ζ_n},σ^2)+h({ζ_n},σ^2)= const,
independent of n, then the covariance matrix of φ is a scaled identity matrix. Due to 𝐚^ H(ω_g)𝐚(ω_g^')=Nδ_gg^', it can be shown that 𝐚^ H(ω_g)φ is uncorrelated with 𝐚^ H(ω_g^')φ for g≠ g^', and thus T_ R(𝐲,ζ,ω_g) is uncorrelated with T_ R(𝐲,ζ,ω_g^'). Under the assumption that {T_ R(𝐲,ζ,ω_g)}_g=0^N-1 are independent[Provided that φ follows a Gaussian distribution, the assumption holds.], the false alarm probability P_FA is
P_ FA=Pr{g=0, ⋯, N-1max T_ R(𝐲,ζ,ω_g)>τ_ th}
= 1-Pr( g=0, ⋯, N-1max T_ R(𝐲,ζ,ω_g)≤τ_ th) = 1-Pr( T_ R(𝐲,ζ,ω_0)≤τ_ th,⋯, T_ R(𝐲,ζ,ω_N-1)≤τ_ th)
=1 - (Pr(T_ R(𝐲,ζ,ω_g)≤τ_ th))^N=1-F_χ_2^2^N(τ_ th)=1-(1- e^-τ_ th/2)^N,
and the threshold is
τ_ th = -2ln(1-(1-P_FA)^1/N).
To find the detection probability P_ D, we first define a detection as a threshold crossing in the correct frequency bin g^* corresponding to the frequency ω_g^* closest to the true frequency ω in wrap-around distance. Hence P_ D is defined as the probability that the peak of the spectrum occurs in the correct frequency bin g^* and crosses the threshold τ_ th. With this definition and for a given P_ FA, we have
P_ D=Q_1(√(λ_g^*),√(τ_ th))
=Q_1(√(λ_g^*),√(-2ln(1-(1-P_FA)^1/N))),
where Q_1(·,·) is the Marcum Q-function.
It is worth noting that once the signal is detected, the gradient descent or Newton method is adopted to refine the frequency estimates. In order to accelerate the GNOMP approach, one could use oversampling to evaluate T_ R(𝐲,ζ,ω_g), ω_g∈Ω_ os, where γ_ os is the oversampling factor and Ω_ os={2π g/(γ_osN),g=0,1,⋯,γ_osN-1}. The coarse estimate of the frequency can be obtained via finding the maximum of T_ R(𝐲,ζ,ω_g), ω_g∈Ω_ os.
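The grid evaluation sketched below (our own code, reusing φ, 𝐡_+ and 𝐡_- computed as in the known-frequency sketch) forms the whole Rao spectrum with two FFTs and then applies the CFAR threshold above; the FFT sign conventions are those of NumPy and should be checked against one's own definition of 𝐚(ω).

```python
import numpy as np

def rao_spectrum(phi, h_plus, h_minus, os_factor=4):
    """Rao statistic T_R(y, zeta, omega_g) on the oversampled DFT grid via FFTs."""
    N = phi.size
    G = os_factor * N
    c = np.sum(h_plus)                                   # 1^T h_+(zeta)
    m = np.fft.fft(phi, n=G)                             # a^H(omega_g) phi, omega_g = 2*pi*g/G
    F = np.fft.fft(h_minus.astype(complex), n=G)
    q = np.conj(F)[(2 * np.arange(G)) % G]               # sum_n h_-[n] exp(+j 2 omega_g n)
    T = (c * np.abs(m) ** 2 - np.real(q * m ** 2)) / (c ** 2 - np.abs(q) ** 2)
    return 2 * np.pi * np.arange(G) / G, T

# CFAR detection over the grid:
# P_FA = 0.01; tau_th = -2 * np.log(1 - (1 - P_FA) ** (1.0 / N))
# omega_grid, T = rao_spectrum(phi, h_plus, h_minus)
# detect = T.max() > tau_th; omega_hat = omega_grid[np.argmax(T)]
```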
§.§ Further Discussion on the Detection Probability of Multiple Frequencies
Although we focus on single-signal detection, the analysis can be used to provide an upper bound on the detection probabilities of all the targets. Let A_i denote the event that the ith target is detected. Define λ_g^*,i as
λ_g^*,i=2/σ^2β_g^*,i1^ T𝐡_+(ζ_∖ i)|x_i|^2+2/σ^2ℜ{𝐚^ T(ω_i) diag(𝐡_-(ζ_∖ i))𝐚(ω_i)x_i^2},
where β_g^*,i is calculated by replacing ω with ω_i, ζ_∖ i=∑_k=1,k≠ i^K𝐚(ω_k)x_k, and x_i denotes the complex amplitude of the ith signal. According to (<ref>), the detection probability of the ith target with all the other signals perfectly known is
P_ D,i=Q_1(√(λ_g^*,i),√(τ_ th))
=Q_1(√(λ_g^*,i),√(-2ln(1-(1-P_FA)^1/N))).
Consequently, the detection probability P_ D^ all= Pr(A_1A_2⋯ A_K) of all the targets can be upper bounded as
P_ D^ all= Pr(A_1A_2⋯ A_K)= Pr(A_1) Pr(A_2|A_1)⋯ Pr(A_K|A_1A_2⋯ A_K-1)≤∏_k=1^K P_ D,k.
A particular case is that all the K-1 targets except the Kth target are strong, and the detection probability P_ D^ all (<ref>) can be simplified as
P_ D^ all≈ P_ D,K,
due to P_ D,k≈ 1, k=1,2,⋯,K-1. This demonstrates that the detection probability of all the targets is dominated by the detection probability of the weakest target, which makes sense.
§ GENERALIZED NOMP
Given the number of sinusoids K, and noting that the real and imaginary parts of the unquantized data lie in the intervals [l(ℜ{y_n}),u(ℜ{y_n})) and [l(ℑ{y_n}),u(ℑ{y_n})), respectively, the MLE of the frequencies and amplitudes is
ω,𝐱maximize l(𝐲;ω,𝐱),
where l(𝐲;ω,𝐱) denotes the loglikelihood
l(𝐲;ω,𝐱)
=∑_n=1^M(log(Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))).
.+log(Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2)))).
Directly solving the above problem requires a K-dimensional search over the frequencies restricted to the grid. With the frequencies fixed, the amplitudes can be obtained via
𝐱maximize l(𝐲;ω_g,𝐱).
It has been shown that (<ref>) is a convex optimization problem, which can be solved efficiently. In total, the number of convex optimization problems that need to be solved is N_g^K, where N_g denotes the number of grid points. A gradient descent or Newton method can then be adopted to eliminate the off-grid effects. The computational complexity of this method is huge, especially when the number of frequencies K is large. In addition, the number of frequencies K is usually unknown. Therefore we propose a greedy, low-complexity approach, and use a CFAR-based criterion to perform target detection.
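As an illustration of the amplitude subproblem, the sketch below maximizes the quantized log-likelihood over the complex amplitudes with the frequencies fixed, using a generic smooth solver. It reuses the cell_bounds helper sketched earlier; the small constant added inside the logarithm is only a numerical guard, and the code is our own sketch rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_amplitudes(y, omegas, sigma2, B, gamma):
    """MLE of the complex amplitudes for fixed (on-grid) frequencies.

    The negative log-likelihood is convex in [Re(x); Im(x)], so a generic
    smooth solver converges to the global optimum.
    """
    N, K = y.size, len(omegas)
    A = np.exp(1j * np.outer(np.arange(N), omegas))      # N x K manifold matrix
    s = np.sqrt(sigma2 / 2.0)
    lo_r, up_r = cell_bounds(y.real, B, gamma)
    lo_i, up_i = cell_bounds(y.imag, B, gamma)

    def nll(v):
        x = v[:K] + 1j * v[K:]
        m = A @ x
        p_r = norm.cdf((up_r - m.real) / s) - norm.cdf((lo_r - m.real) / s)
        p_i = norm.cdf((up_i - m.imag) / s) - norm.cdf((lo_i - m.imag) / s)
        return -np.sum(np.log(p_r + 1e-300)) - np.sum(np.log(p_i + 1e-300))

    res = minimize(nll, np.zeros(2 * K), method="L-BFGS-B")
    return res.x[:K] + 1j * res.x[K:]
```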
§.§ A Single Frequency Scenario
Motivated by the low complexity and high estimation accuracy of NOMP algorithm, we propose a generalized NOMP algorithm which iteratively cancels the interference in a nonlinear and greedy way. For a single target scenario, the model reduces to
𝐲=𝒬({𝐚(ω)x+ϵ})+ j𝒬({𝐚(ω)x+ϵ}).
The loglikelihood is l(𝐲;ω,x). The maximum likelihood estimation problem can be formulated as
(x̂_ ML,ω̂_ML)=x,ωargmax l(𝐲;ω,x).
Directly solving the maximum likelihood estimation of a single frequency is difficult, as l(𝐲;ω,x) is not concave with respect to the frequency and the real and imaginary parts of x. However, with the frequency ω known, the likelihood function is concave with respect to the real and imaginary parts of x <cit.>. Therefore we use the alternating minimization (AM) approach to solve (<ref>). We first obtain a good initial point ω̂ for the frequency and x̂ for the amplitude. Then we fix the amplitude estimate x̂ and refine the estimate ω̂ to ω̂^'. The amplitude is further optimized as x̂^' by fixing the frequency at ω̂^'. To ensure fast convergence, the previous estimate is used as the initialization for the ensuing optimization problem. We now detail the two steps.
* Good initial points: The frequency ω∈ [0,2π) is first discretized into a finite number of grid points. As shown in <cit.>, an oversampling factor γ_os=4 with respect to the Nyquist grid is preferable to ensure the convergence of Newton's method. For each grid point ω_g=2π g/(γ_ osN), g=0,1,2,⋯,γ_ osN-1, where γ_ os is the oversampling factor, the following subproblem
x̂_g=xargmax l(𝐲;ω_g,x),
is solved globally. The frequency yielding the maximum loglikelihood is calculated as
ω̂=ω_g∈Ω_osargmax l(𝐲;ω_g,x̂_g).
The computational complexity of the above steps is very high, and we instead propose a novel, low-complexity approach: the Rao detector is adopted to obtain ω̂ as
ω̂=ω_g∈Ω_osargmax T_ R(𝐲,ζ,ω_g)
where ζ=0 due to detecting the first signal. In addition, the amplitude estimate x̂ is obtained via solving (<ref>) with ω_g replaced with ω̂.
* Alternating minimization: With the coarse detection frequency ω̂ and amplitude x̂, the Newton refinement is adopted to refine the frequency ω̂ with the amplitudes x̂ being fixed, i.e.,
ω̂^'=ω̂-l̇(𝐲;ω̂,x̂)/l̈(𝐲;ω̂,x̂),
where l̇(𝐲;ω̂,x̂) and l̈(𝐲;ω̂,x̂) denote the first and second order derivative of l(𝐲;ω,x̂) with respect to ω evaluated at ω̂. In the Appendix <ref>, the detailed computations of l̇(𝐲;ω,x̂) and l̈(𝐲;ω,x̂) are provided for a general case. Then the amplitude x̂ is refined via the Newton step with ω fixed at ω̂^', i.e.,
[
[ {x̂^'}; {x̂^'} ]]=[
[ {x̂}; {x̂} ]]-[∇^2l(𝐲;ω̂^',x̂)]^-1∇ l(𝐲;ω̂^',x̂),
where the gradient ∇ l(𝐲;ω,x) and Hessian ∇^2l(𝐲;ω,x) of l(𝐲;ω,x) with respect to [ℜ{x};ℑ{x}]^ T are given in Appendix <ref>, which provides the gradient and Hessian for the multiple-frequency case.
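The sketch below illustrates one such refinement step. For brevity it approximates the derivatives of the log-likelihood by central differences instead of the closed-form expressions of Appendix <ref>; loglik is any callable returning l(𝐲;ω,x), and, as in GNOMP, the step is accepted only if it improves the objective. All names are our own.

```python
import numpy as np

def newton_refine_frequency(loglik, omega_hat, x_hat, delta=1e-4):
    """One Newton step on omega with the amplitude fixed.

    The first/second derivatives of loglik(omega, x) with respect to omega
    are approximated by central differences (the paper uses closed forms).
    """
    l0 = loglik(omega_hat, x_hat)
    lp = loglik(omega_hat + delta, x_hat)
    lm = loglik(omega_hat - delta, x_hat)
    d1 = (lp - lm) / (2 * delta)
    d2 = (lp - 2 * l0 + lm) / delta ** 2
    if d2 < 0:                                    # only step when locally concave
        omega_new = omega_hat - d1 / d2
        if loglik(omega_new, x_hat) > l0:         # accept only if it improves l
            return omega_new
    return omega_hat
```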
§.§ Multiple Frequencies Scenario
Suppose we have detected L sinusoids. Let 𝒫 = {(x_k,ω_k),k=1,⋯,L} denote the set of detected sinusoids. Then block coordinate descent (BCD) is applied to refine all the amplitude-frequency pairs, which amounts to solving
{ω_k}_k=1^L,𝐱maximize l(𝐲;{ω_k}_k=1^L,𝐱),
where l(𝐲;{ω_k}_k=1^L,𝐱) denotes the loglikelihood
l(𝐲;{ω_k}_k=1^L,𝐱)
=∑_n=1^M(log(Φ(u({y_n})-{∑_k=1^L a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^L a_n(ω_k)x_k}/σ/√(2))).
.+log(Φ(u({y_n})-{∑_k=1^L a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^L a_n(ω_k)x_k}/σ/√(2)))).
The BCD proceeds as follows: for the lth sinusoid, the amplitude-frequency pairs of the other sinusoids are fixed, and the lth amplitude-frequency pair is optimized via AM as in Subsection <ref>. After optimizing the lth sinusoid, we move on to the next sinusoid in a cyclic manner.
Once all the amplitude-frequency pairs have been updated and put into the list 𝒫^' = {(x̂_k^',ω̂^'_k),k=1,⋯,L}, we reestimate all the amplitudes with the frequencies fixed to further improve the estimation accuracy, i.e.,
𝐱maximize l(𝐲;{ω_k^'}_k=1^L,𝐱).
The gradient and Hessian of l(𝐲;{ω_k^'}_k=1^L,𝐱) (<ref>) with respect to [ℜ{𝐱};ℑ{𝐱}] are given in Appendix <ref>. Note that each of the above steps is accepted only if it improves the loglikelihood.
We have found an interesting phenomenon in the numerical experiments. In the two-frequency coexistence scenario where one frequency's SNR is very large, such as 60 dB, we first detect this strong signal. Then a spurious component with small amplitude is detected. Next, the second frequency is detected, and its amplitude is stronger than that of the spurious component. Redoing the CFAR detection for the spurious component, its test statistic does not exceed the threshold. Therefore, we add a spurious component suppression step in GNOMP, summarized in Algorithm <ref>. It is worth noting that the cardinality of 𝒫_m may satisfy |𝒫_m|≠ m in some scenarios due to this step. Further details about GNOMP can be found by referring to the NOMP algorithm <cit.>.
It is worth noting that we have used the CFAR criterion to stop the GNOMP algorithm. We could use the one-bit Bayesian information criterion (1bBIC) proposed in <cit.> to select the model order K̂ that minimizes the 1bBIC cost function
1bBIC(K̂)=-2ln p(𝐲,ω̂,𝐱̂)+5K̂ln N.
Still, we emphasize that using the Rao test (which can be implemented via FFT) reduces the computation complexity significantly.
§.§ Further Discussion
Stochastic resonance phenomenon. The nonzero thresholds can be viewed as the synthesis of the remaining signals, which has a great effect on the estimation and detection performance of the weak signal. For example, consider the following two estimation and detection problems
P1: 𝐲=𝒬({𝐚(ω_1)x_1+𝐚(ω)x+ϵ})+ j𝒬({𝐚(ω_1)x_1+𝐚(ω)x+ϵ}),
P2: 𝐲=𝒬({∑_k=1^2𝐚(ω_k)x_k+𝐚(ω)x+ϵ})+ j𝒬({∑_k=1^2𝐚(ω_k)x_k+𝐚(ω)x+ϵ}),
where |x_1|≫σ, |x_2|≫σ, |x|≪σ, ω_1 and ω_2 are close but can be resolved by GNOMP, ℜ{x_1}≈ -ℜ{x_2} and ℑ{x_1}≈ -ℑ{x_2}. We can expect that (P2) is easier to solve than (P1), and that the weak target under (P2) is more easily detected than that under (P1).
§.§ Extension to the Other Scenarios
Here we briefly discuss how to extend the proposed GNOMP to other measurement scenarios. Extending GNOMP to handle the multisnapshot scenario, the one-bit time-varying threshold scenario, the compressive quantized measurement scenario, and the multidimensional measurement scenario is straightforward; we suggest readers refer to <cit.> and to GNOMP. Here we extend GNOMP to the measurement scenario where the noise variance is unknown.
We still study the binary hypothesis testing model (<ref>) except that the noise variance is unknown.
For 1 bit quantization without the knowledge of noise variance, we do not need to estimate the noise variance as
𝐲 =γ/2( sign({∑_k=1^K𝐚(ω_k)x_k+ϵ})+ j sign({∑_k=1^K𝐚(ω_k)x_k+ϵ}))
=γ/2( sign({∑_k=1^K𝐚(ω_k)x_k+ϵ}/σ)+ j sign({∑_k=1^K𝐚(ω_k)x_k+ϵ}/σ)),
the measurements 𝐲 can be viewed as generated from the same frequencies but with amplitudes 𝐱/σ and unit-variance noise ϵ/σ. Therefore, for 1-bit quantization without knowledge of the noise variance σ, we can simply set σ^2=1 to perform LSE&D. For quantization with bit-depth greater than 1, i.e., B>1, we reparameterize the model by defining
ξ = √(2)/σ.
Assume that the frequency ω is known, the unknown parameters are [θ,ξ]^ T, and the FIM is
𝐈([θ,ξ]^ T) =
[
[ 𝐈(θ) ρ; ρ^ T δ; ]],
where 𝐈(θ) is (<ref>), ρ is
ρ=- E[∂^2 log p(𝐲;θ,ξ)/∂θ∂ξ]= [[ {𝐚^ Hη}; {𝐚^ Hη} ]],
δ=- E[∂^2 log p(𝐲;θ,ξ)/∂ξ^2]= 1^ T(s({ζ+𝐚x},ξ)+s({ζ+𝐚x},ξ)),
where
η = q({ζ+𝐚x},ξ) + jq({ζ+𝐚x},ξ)
q(x,ξ)=∑_d=0^b-1-[ϕ(ξ(τ_d+1-x))-ϕ(ξ(τ_d-x))]
[(τ_d+1-x)ϕ(ξ(τ_d+1-x))-(τ_d-x)ϕ(ξ(τ_d-x))]/Φ(ξ(τ_d+1-x))-Φ(ξ(τ_d-x)),
s(x,ξ)=∑_d=0^b-1[(τ_d+1-x)ϕ(ξ(τ_d+1-x))-(τ_d-x)ϕ(ξ(τ_d-x))]^2/Φ(ξ(τ_d+1-x))-Φ(ξ(τ_d-x)).
For unquantized measurements, (<ref>), (<ref>), (<ref>) and (<ref>) can be simplified as
q(x,ξ)_∞=0,
η_∞=0,
s_∞(x,ξ)=2/ξ^2,δ_∞=4N/ξ^2,
which is consistent with the results obtained directly from the unquantized model. In addition, the CRB of amplitude 𝐈^-1(θ,δ)_[θ,θ] without knowledge of noise variance is
𝐈^-1(θ,δ)_[θ,θ] =(𝐈(θ)-δ^-1ρρ^ T)^-1
=𝐈^-1(θ)+δ^-1𝐈^-1(θ)ρρ^ T𝐈^-1(θ)/1-δ^-1ρ^ T𝐈^-1(θ)ρ
=𝐈^-1(θ)+μ𝐈^-1(θ)ρρ^ T𝐈^-1(θ)
where μ=δ^-1/1-δ^-1ρ^ T𝐈^-1(θ)ρ,
ρ^ T𝐈^-1(θ)ρ
=1/ξ^2(1^ T𝐡_+(ζ))|𝐚^ Hη|^2
-{(𝐚^ T diag(𝐡_-(ζ))𝐚)(𝐚^ Hη)^2}/(1^ T𝐡_+(ζ))^2-|𝐚^ T diag(𝐡_-(ζ))𝐚|^2.
In the presence of unknown noise variance, referring to model (<ref>), we have the following binary hypothesis testing problem
ℋ_0:x=0,σ^2>0,
ℋ_1:x≠ 0,σ^2>0.
Define
φ̂_n ≜φ_n(σ)|_σ=√(2)/ξ̂_0,
where ξ̂_0 denotes the MLE of ξ under the null hypothesis ℋ_0.
.∂ln p(𝐲,ζ;[θ;ξ])/∂θ|_[θ;ξ]=[θ_0;ξ̂_0]=-ξ[[ {𝐚^ Hφ̂}; {𝐚^ Hφ̂} ]]
The Rao test is
T_ R(𝐱) =.∂ln p(𝐲,ζ;[θ;ξ])/∂θ|_[θ;ξ]=[θ_0;ξ̂_0]^ T𝐈^-1(θ_0,ξ̂_0)_[2,2].∂ln p(𝐲,ζ;[θ;ξ])/∂θ|_[θ;ξ]=[θ_0;ξ̂_0]
=ξ^2[[ {𝐚^ Hφ̂}; {𝐚^ Hφ̂} ]]^ T(𝐈^-1(θ)+μ𝐈^-1(θ)ρρ^ T𝐈^-1(θ))[[ {𝐚^ Hφ̂}; {𝐚^ Hφ̂} ]]
=T_ R 1(𝐱)+μξ^2[[ {𝐚^ Hφ̂}; {𝐚^ Hφ̂} ]]^ T𝐈^-1(θ)ρρ^ T𝐈^-1(θ)[[ {𝐚^ Hφ̂}; {𝐚^ Hφ̂} ]]
=T_ R 1(𝐱)+μξ^2([[ {𝐚^ Hφ̂}; {𝐚^ Hφ̂} ]]^ T𝐈^-1(θ)ρ)^2
=T_ R 1(𝐱)+μ/ξ^2((1^ T𝐡_+(ζ)){(𝐚^ Hφ̂)^*𝐚^ Hη}
-{(𝐚^ T diag(𝐡_-(ζ))𝐚)(𝐚^ Hφ̂𝐚^ Hη)}/(1^ T𝐡_+(ζ))^2-|𝐚^ T diag(𝐡_-(ζ))𝐚|^2)^2
where ξ̂_0 is the MLE of ξ under ℋ_0. It can be seen that T_ R(𝐱) can also be evaluated efficiently through FFT. The asymptotic PDF of the Rao test is
T_ R(𝐲)a∼χ_2^2, under ℋ_0,
χ_2^' 2(λ), under ℋ_1,
where λ reduces to
λ =[{x},{x}](𝐈^-1(θ_0,ξ̂_0)_[2,2])^-1[{x},{x}]^ T
=[{x},{x}].(𝐈(θ)-δ^-1ρρ^ T)|_[θ;ξ]=[θ_0;ξ̂_0][{x},{x}]^ T
=2/σ^21^ T𝐡_+(ζ)|x|^2+2/σ^2{𝐚^ T diag(𝐡_-(ζ))𝐚x^2}-1/δ{η^ H𝐚x}
For the unknown frequency case, the procedure is similar to that in Subsection <ref>. The algorithm can be designed by referring to the GNOMP and the NOMP-CFAR algorithms by using the forward-backward steps <cit.>.
§ NUMERICAL SIMULATION
We set N=512 by default, and the integration gain is 10log N≈ 27 dB. The quantizer is designed such that γ is set as γ = max(|x_1|,⋯,|x_k|,⋯,|x_K|,3σ/√(2)), where |x_k| denotes the magnitude of the kth sinusoid.[Optimizing the quantizer to improve the estimation and detection performance is also of vital importance, but is out of the scope of this paper. We suggest readers refer to <cit.> for further discussion.] This design of the quantizer's maximum full-scale range implies that when all the signal amplitudes are below the noise level, the noise standard deviation is used to design γ; otherwise γ is designed based on the magnitude of the strongest signal. We use the time-domain SNR 10log(|x|^2/σ^2) and the integrated SNR 10log(N|x|^2/σ^2) to characterize the strength of a signal, where the integrated SNR is 10log N dB higher than the time-domain SNR. It is commonly accepted that a signal can be detected reliably from unquantized measurements in additive white Gaussian noise (AWGN) provided that its integrated SNR is greater than 15 dB. For unquantized measurements we run the NOMP, but the notation GNOMP (B=∞) is used instead.
§.§ Validate the Estimation Performance In a Single Frequency Scenario with Nonidentical Thresholds
For the first numerical simulation, we verify the theoretical results on single-signal estimation with nonidentical thresholds established in Section <ref>. The nonidentical thresholds are set as ζ=𝐚(ω_1)x_1. We set x_1=-0.96-1.75 j, corresponding to an SNR of about 6 dB, and ω_1 = 0.15. We consider two scenarios. Scenario 1: the weak-signal scenario, x=-0.27+0.29 j, with a time-domain SNR of -8 dB. Scenario 2: the strong-signal scenario, x=-0.68+0.73 j, with a time-domain SNR of about 0 dB. The remaining parameters are set as ω=2.34 and σ^2=1. We evaluate the MSE and the CRB of the amplitude of the signal with known thresholds ζ for the unknown-frequency case. The results are shown in Fig.<ref>. It can be seen that as the number of measurements N increases, the estimates asymptotically approach the CRBs.
§.§ Validate the Detection Performance In a Single Frequency Scenario with Nonidentical Thresholds
The false alarm probability is set as P_ FA=0.01 and N=1024. The threshold ζ is set as ζ=𝐚(ω_1)x_1, where x_1=2 and the SNR is 6 dB [The thresholds are synthesized from the phase, the amplitude and the frequency, which may have great effects on the detection performance of the selected frequency.]. We use two slightly different frequencies to generate ζ. Scenario 1: ω_1 = π/2, and the SNR losses under 1-bit, 2-bit and 3-bit quantization are 4.8 dB, 2.3 dB and 1 dB according to (<ref>). Scenario 2: ω_1 = π/2+0.1, and the SNR losses under 1-bit, 2-bit and 3-bit quantization are 6.4 dB, 2.2 dB and 0.9 dB according to (<ref>). The detection probability versus the SNR of the target signal is shown in Fig. <ref>. It can be seen that the measured detection probability is very close to the theoretical detection probability, demonstrating the correctness of the analysis. As shown in Fig. <ref>, for a detection probability of P_ D=0.5, the integrated SNRs of 1-bit, 2-bit, 3-bit and ∞-bit quantization are 17.2 dB, 14.7 dB, 13.3 dB and 12.4 dB, respectively. Thus the SNR losses of 1-bit, 2-bit and 3-bit quantization compared to unquantized measurements are about 17.2-12.4=4.8 dB, 14.7-12.4=2.3 dB and 13.3-12.4=0.9 dB in Scenario 1. As shown in Fig. <ref>, similarly, the SNR losses under 1-bit, 2-bit and 3-bit quantization compared to unquantized measurements are about 18.7-12.4=6.3 dB, 14.6-12.4=2.2 dB and 13.2-12.4=0.8 dB in Scenario 2. These simulation results are consistent with the theoretical analysis.
§.§ Validate the CFAR Property
We use two criteria, namely the overestimating probability P_ OE and the false alarm probability P_ FA, to characterize the performance of GNOMP. We declare an overestimate whenever GNOMP overestimates the model order K, and we declare a false alarm whenever the minimum of the wrap-around distances between a given estimated frequency and all the true frequencies exceeds π/N. All K targets have the identical SNR and their frequencies satisfy the minimum frequency separation Δω_ min=2.5Δ_ DFT.
The “measured” overestimating and the “measured” false alarm probability versus the “nominal” false alarm rates under different bit-depth and integrated SNRs are shown in Fig. <ref>. Each point in the plot is generated by 300 runs of GNOMP algorithm for estimating frequencies in a mixture of K=8 sinusoids of SNR=20 dB or SNR=30 dB. The minimum frequency separation satisfies Δω_ min=2.5Δ_ DFT. As shown in Fig. <ref>, both the empirical false alarm rate and overestimating probability closely follow the nominal value under SNR=20 dB and SNR=30 dB, demonstrating the high estimation accuracy of the GNOMP.
§.§ Dynamic Range
We take two signals into consideration. The amplitude of the first signal is stronger than that of the second signal. Define SNR_i as the SNR of the ith frequency and the DR as DR≜ SNR_1- SNR_2. We evaluate the recovery probability of the weaker signal versus SNR_1 and DR for a given false alarm rate. The detection probability of the weaker target under different bit-depths is shown in Fig.<ref>. Note that the DR is 6B+1.76 dB for a given bit-depth B according to signal analysis. It can be seen that the DR is about 10 dB under 1-bit quantization, about 2.3 dB higher than that of the linear approach [In fact, the performance of the FFT-based approach on two simultaneous signals is investigated in <cit.>. It is shown that when two signals are of the same amplitude, the receiver does not report them all the time. The receiver reports both signals only about 24% of the time; about 76% of the time the receiver only reports one signal. Besides, the instantaneous DR of the monobit receiver is about 5 dB, and the receiver measures the weak signal whose amplitude is 5 dB weaker than that of the strong signal in 33/1000 trials. Here we show that our nonlinear approach performs significantly better than the FFT-based approach.]. For 2-bit and 3-bit quantization, the DRs are about 22 dB and 30 dB, which are 8.3 dB and 10.3 dB higher than those of the linear approach. This demonstrates that the proposed GNOMP enlarges the DR compared to the linear approach.
§.§ The MSE Performance of the Frequency
In Fig.<ref>, we plot the “measured” MSE and the CRB versus the integrated SNR under different bit-depths. Each point is generated by 300 runs of the GNOMP algorithm for estimating frequencies in a mixture of K=8 sinusoids whose integrated SNRs are identical, ranging from 14 dB to 40 dB, with N=512. It can be seen that as the SNR increases from 14 dB to about 34 dB, the GNOMP asymptotically approaches the CRB under 1-3 bit quantization. As the SNR increases further, the GNOMP deviates from the CRB except in the unquantized case.
§.§ The Detection Probability versus the SNR
We generate 8 sinusoids, 7 of which have the identical integrated SNR of 30 dB. The integrated SNR of the remaining sinusoid increases from 10 dB to 26 dB. The false alarm rate is set as P_ FA=0.01 and N=512. The measured false alarm probability, the measured overestimating probability and the detection probability of the remaining target are shown in Fig. <ref> and Fig. <ref>, where each point is generated by 1000 Monte Carlo trials. It can be seen that the measured false alarm rate and the overestimating probability are close to the nominal false alarm rate. Besides, the measured detection probability is close to the oracle detection probability (assuming that ζ is known), demonstrating the excellent performance of GNOMP.
§ REAL EXPERIMENT
We use the real data acquired in <cit.> to investigate the estimation and detection performance of GNOMP. For GNOMP, we use the FFT to estimate the noise variance, which is then input to the GNOMP algorithm. Besides, we also implement the GNOMP without the knowledge of the noise variance to perform target estimation and detection with bit-depth greater than 1. We set σ^2=250, P_FA=0.01, the maximum full-scale range of the quantizer is γ=60 and N=256.
§.§ Experiment 1
For the first experiment, shown in Fig. <ref>, two people, denoted person 1 and person 2, stand in front of the radar at radial distances of about 4.88 m and 3.05 m, respectively. According to the normalized range spectrum shown in Fig. <ref>, the noise variance is about 24 dB, which is close to the nominal value 10log 250=23.97 dB. Results are shown in Fig. <ref>. It can be seen that person 1, person 2 and the leakage component are detected. GNOMP generates two false alarms only under 3-bit quantization with unknown noise variance. In terms of running time, the times taken by NOMP, GNOMP (B=1), GNOMP (B=2) and GNOMP (B=3) are 0.027 sec, 0.126 sec, 0.16 sec (0.946 sec for unknown noise variance) and 0.15 sec (1.42 sec for unknown noise variance), respectively.
§.§ Experiment 2
Fig. <ref> shows the setup of field experiment 2. The radial distances of the two static people, person 1 and person 2, are about 4.87 m and 2.63 m. A cyclist moves toward the radar, with the radial distance decreasing from 7 m to 2 m at a velocity of about 2 m/s. According to the normalized range spectrum shown in Fig. <ref>, the noise variance is still about 24 dB. The range estimation and detection results are shown in Fig. <ref>. It can be seen that under 1-bit quantization, GNOMP misses the cyclist due to the inherently low DR of 1-bit quantization. For B≥ 2, the two people and the cyclist are detected. In addition, GNOMP under 3-bit quantization and NOMP each generate a false alarm.
§ CONCLUSION
This paper theoretically analyzes the false alarm probability and detection probability under low-resolution quantization, revealing that a strong low-frequency signal has a significant negative effect on weak-signal detection. In addition, we develop a superfast GNOMP which utilizes the FFT to implement the Rao detector for LSE&D, and GNOMP maintains the CFAR behaviour. Substantial numerical simulations and real experiments are conducted to demonstrate the excellent performance of GNOMP in terms of estimation accuracy and detection probability, by comparison with the CRB and the detection probability bound.
§ APPENDIX
§.§ The FIM for the Single Frequency Model (<ref>) with Nonzero Thresholds
We now evaluate the FIM for model (<ref>) under hypothesis ℋ_1 via utilizing the following lemma.
<cit.> Let κ∈ℝ^P denote the set of unknown deterministic parameters. Note that in the case of quantized observations 𝐲=𝒬(𝐫)∈ℝ^N where 𝐫∼𝒩(μ(κ),σ^2𝐈_N/2), the FIM is given by
𝐈(κ)=2/σ^2[∂μ(κ)/∂κ^ T]^ TΛ[∂μ(κ)/∂κ^ T],
where
∂μ(κ)/∂κ^ T=[
[ ∂ [μ(κ)]/∂κ_1 ∂ [μ(κ)]/∂κ_2 ⋯ ∂ [μ(κ)]/∂κ_P ]]∈ℝ^N× P,
and Λ is a diagonal matrix with the (i,i)th element
Λ_i,i=h_B(μ_i(κ),σ^2),
h_B(x,σ^2) is given by (<ref>) and B is the bit-depth of the quantizer. For unquantized system, the FIM (<ref>) is obtained with Λ=𝐈_N.
In our setting, the observations are [{𝐲};{𝐲}]. Note that κ∈ℝ^2 and μ(κ)∈ℝ^2N are
κ=[[ {𝐱}; {𝐱} ]],
μ(κ)=[[ {ζ+𝐚𝐱}; {ζ+𝐚𝐱} ]].
Thus
∂μ(κ)/∂κ^ T=[
[ {𝐚} -{𝐚}; {𝐚} {𝐚}; ]]∈ℝ^2N× 2.
Substituting (<ref>) in (<ref>), the FIM 𝐈_B(θ) is
𝐈_B(θ)=2/σ^2[
[ {𝐚} -{𝐚}; {𝐚} {𝐚}; ]]^ Tdiag[ h_B({𝐚x+ζ},σ^2); h_B({𝐚x+ζ},σ^2); ][
[ {𝐚} -{𝐚}; {𝐚} {𝐚}; ]]
Simplifying (<ref>) yields (<ref>). The inverse of 𝐈_B(θ) is shown to be (<ref>).
§.§ The gradients and Hessian of the unknown amplitudes
Suppose we want to maximize l(𝐲;{ω_k}_k=1^K,𝐱) with respect to 𝐱 with the frequencies {ω_k}_k=1^K being known. The optimization problem is
𝐱maximize∑_n=1^M(log(Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))).
.+log(Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))))
Define
𝐱̃=[[ {𝐱}; {𝐱} ]]∈ℝ^2K,
𝐇̃=[[ {𝐀(ω)} -{𝐀(ω)} ]]∈ℝ^N× 2K,
𝐔̃=[[ {𝐀(ω)} {𝐀(ω)} ]]∈ℝ^N× 2K.
Let 𝐡̃_n^ T and 𝐮̃_n^ T denote the nth row of 𝐇̃ and 𝐔̃, respectively.
Consequently, the gradient and Hessian of l(𝐱̃) are
∇ l(𝐱̃)
=∑_n=1^N-ϕ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-ϕ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))/Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))𝐡̃_n/σ/√(2)
-∑_n=1^Nϕ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-ϕ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))/Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))𝐮̅_n/σ/√(2),
and
∇^2 l(𝐱̃)
=∑_n=1^N-(ϕ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-ϕ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2)))^2/(Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2)))^2𝐡̅_n𝐡̅_n^ T/σ^2/2
+ϕ^'(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-ϕ^'(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))/Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))𝐡̅_n𝐡̅_n^ T/σ^2/2
∑_n=1^N-(ϕ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-ϕ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2)))^2/(Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2)))^2𝐮̅_n𝐮̅_n^ T/σ^2/2
+ϕ^'(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-ϕ^'(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))/Φ(u({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))-Φ(l({y_n})-{∑_k=1^K a_n(ω_k)x_k}/σ/√(2))𝐮̅_n𝐮̅_n^ T/σ^2/2,
where ϕ(x)=1/√(2π) e^-x^2/2 and ϕ^'(x)=-x/√(2π) e^-x^2/2.
§.§ The gradients and Hessian of an unknown frequency
Suppose we want to maximize l(𝐲;{ω_k}_k=1^K,𝐱) with respect to a single ω_k^' with all the other parameters being known. Obviously, the objective function can be written as
ω_k^'maximize g(ω_k^'),
where g(ω_k^') is
g(ω_k^')=∑_n=1^N(log(Φ({ũ_n}-{a_n(ω_k^')x_k^'}/σ/√(2))
-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))).
.+log(Φ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)))).
where ũ_n=u(y_n)-∑_k=1,k≠ k^'^K a_n(ω_k)x_k and l̃_n=l(y_n)-∑_k=1,k≠ k^'^K a_n(ω_k)x_k. The first order derivative ∂ g(ω_k^')/∂ω_k^' and the second order derivative ∂^2 g(ω_k^')/∂ω_k^'^2 of g(ω_k^') are
∂g(ω_k^')/∂ω_k^'=
∑_n=1^N-ϕ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))/Φ({ũ_n}
-{a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)){∂a_n(ω_k^')/∂ω_k^'x_k^'}/σ/√(2)
-∑_n=1^Nϕ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))/Φ({ũ_n}
-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)){∂a_n(ω_k^')/∂ω_k^'x_k^'}/σ/√(2)
∂^2 g(ω_k^')/∂ω_k^'^2=
∑_n=1^N-ϕ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))/Φ({ũ_n}
-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)){∂^2a_n(ω_k^')/∂^2ω_k^'x_k^'}/σ/√(2)
∑_n=1^N(-(ϕ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)))^2/(Φ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)))^2.
.+
ϕ^'({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ^'({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))/Φ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))){∂a_n(ω_k^')/∂ω_k^'x_k^'}^2/σ^2/2
∑_n=1^N-ϕ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))/Φ({ũ_n}
-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)){∂^2a_n(ω_k^')/∂^2ω_k^'x_k^'}/σ/√(2)
∑_n=1^N(-(ϕ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)))^2/(Φ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2)))^2.
.+
ϕ^'({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-ϕ^'({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))/Φ({ũ_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))-Φ({l̃_n}-{ a_n(ω_k^')x_k^'}/σ/√(2))){∂a_n(ω_k^')/∂ω_k^'x_k^'}^2/σ^2/2
and
∇^2 l(χ̅_g) =∑_n=1^N-(ϕ({ũ_n}-𝐚̅_n^ Tχ̅_g/σ/√(2))-ϕ({l̃_n}-𝐚̅_n^ Tχ̅_g/σ/√(2)))^2/(Φ({ũ_n}-𝐚̅_n^ Tχ̅_g/σ/√(2))-Φ({l̃_n}-𝐚̅_n^ Tχ̅_g/σ/√(2)))^2𝐚̅_n𝐚̅_n^ T/σ^2/2+
ϕ^'({ũ_n}-𝐚̅_n^ Tχ̅_g/σ/√(2))-ϕ^'({l̃_n}-𝐚̅_n^ Tχ̅_g/σ/√(2))/Φ({ũ_n}-𝐚̅_n^ Tχ̅_g/σ/√(2))-Φ({l̃_n}-𝐚̅_n^ Tχ̅_g/σ/√(2))𝐚̅_n𝐚̅_n^ T/σ^2/2
∑_n=1^N-(ϕ({ũ_n}-𝐛̅_n^ Tχ̅_g/σ/√(2))-ϕ({l̃_n}-𝐛̅_n^ Tχ̅_g/σ/√(2)))^2/(Φ({ũ_n}-𝐛̅_n^ Tχ̅_g/σ/√(2))-Φ({l̃_n}-𝐛̅_n^ Tχ̅_g/σ/√(2)))^2𝐛̅_n𝐛̅_n^ T/σ^2/2+ϕ^'({ũ_n}-𝐛̅_n^ Tχ̅_g/σ/√(2))-ϕ^'({l̃_n}-𝐛̅_n^ Tχ̅_g/σ/√(2))/Φ({ũ_n}-𝐛̅_n^ Tχ̅_g/σ/√(2))-Φ({l̃_n}-𝐛̅_n^ Tχ̅_g/σ/√(2))𝐛̅_n𝐛̅_n^ T/σ^2/2,
99
Stoica
P. Stoica and R. L. Moses, Spectral Analysis of Signals. Upper Saddle River, NJ, USA: Prentice-Hall, 2005.
Schmidt
R. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Trans. Antennas Propag., vol. 34, no. 3, pp. 276-280, 1986.
Roy
R. Roy and T. Kailath, “ESPRIT-estimation of signal parameters via rotational invariance techniques,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 7, pp. 984-995, 1989.
Madhow16TSP
B. Mamandipoor, D. Ramasamy and U. Madhow, “Newtonized orthogonal matching pursuit: Frequency estimation over the continuum,” IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5066-5081, 2016.
Badiu
M. A. Badiu, T. L. Hansen and B. H. Fleury, “Variational Bayesian inference of line spectra,” IEEE Trans. Signal Process., vol. 65, no. 9, pp. 2247-2261, 2017.
FangTSP16
J. Fang, F. Wang, Y. Shen, H. Li and R. S. Blum, “Superresolution compressed sensing for line spectral estimation: an iterative reweighted approach,” IEEE Trans. Signal Process., vol. 64, no. 18, pp. 4649-4662, 2016.
SPFASTTSP18
T. L. Hansen, B. H. Fleury and B. D. Rao, “Superfast line spectral estimation,” IEEE Trans. Signal Process., vol. 66, no. 10, pp. 2511-2526, 2018.
Mishra19SPM
K. V. Mishra, M. R. Bhavani Shankar, V. Koivunen, B. Ottersten, and S. A. Vorobyov, “Toward millimeterwave joint radar communications: A signal processing perspective,” IEEE Signal Process. Mag., vol. 36, no. 5,
pp. 100-114, Sep. 2019.
KumariICASSP2020
P. Kumari, A. Mezghani, R. W. Heath, “A low-resolution ADC proof-of-concept development for a fully-digital millimeter-wave joint communication-Radar,” ICASSP, 2020.
LFMCWTAES20
B. Jin, J. Zhu, Q. Wu, Y. Zhang and Z. Xu, “One-bit LFMCW radar: spectrum analysis and target detection,” IEEE Trans. Aerospace and Electronic Syst., vol. 56. no. 4, pp. 2732-2750, 2020.
onebitDBF
X. Chen, L. Huang, H. Zhou, Q. Li, K. Yu and W. Yu, “One-bit digital beamforming,” IEEE Trans. Aerospace and Electronic Syst., vol. 59, no. 1, pp. 555-567, 2020.
SAR1991
G. Franceschetti, V. Pascazio and G. Schirinzi, “Processing of signum coded SAR signal: theory and experiments,” IEE Proceedings F - Radar and Signal Processing, vol. 138, no. 3, pp. 192-198, 1991.
Papatit
H. C. Papadopoulos, G. W. Wornell and A. V. Oppenheim, “Sequential signal encoding from noisy measurements using quantizers with dynamic bias control,” IEEE Trans. Inf. Theory, vol. 47, no. 3, pp. 978-1002, Mar. 2001.
SingletoneTSP
A. H. Madsen and P. Händel, “Effects of sampling and quantization on single-tone frequency estimation,” IEEE Trans. Signal Process., vol. 48, no. 3, pp. 650-662, Mar. 2000.
DOA1bit02
O. B. Shalom and A. J. Weiss, “DOA estimation using one-bit quantized measurements,” IEEE Trans. Aerosp. Electron. Syst., vol. 38, no. 3, pp. 868-884, Jul. 2002.
NingTAES22
N. Zhang, J. Zhu and Z. Xu, “Gridless multisnapshot variational line spectral estimation from coarsely quantized samples,” to appear in IEEE Transactions on Aerospace and Electronic Systems.
Yu
K. Yu, Y. Zhang, M. Bao, Y. Hu and Z. Wang, “DOA estimation from one-bit compressed array data via joint sparse representation,” IEEE Signal Process. Lett., vol. 23, no. 9, pp. 1279-1283, 2016.
MengZhu
X. Meng, J. Zhu, “A generalized sparse Bayesian learning algorithm for one-bit DOA estimation,” IEEE Commun. Lett., vol. 22, no. 7, pp. 1414-1417, 2018.
Gaoyu
Y. Gao, D. Hu, Y. Chen, Y. Ma, “Gridless 1-b DOA estimation exploiting SVM approach,” IEEE Commun. Lett., vol. 21, no. 10, pp. 2210-2213, 2017.
mismatch
Y. Chi, L. L. Scharf, A. Pezeshki and A. R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Trans. Signal Process., vol. 59, no. 5, May 2011.
Yangzaireview
Z. Yang, J. Li, P. Stoica and L. Xie, “Sparse methods for direction-of-arrival estimation,” Academic Press Library in Signal Processing, vol. 7, pp. 509-581, 2018.
Zhang2019
R. Zhang, C. Li, J. Li and G. Wang, “Range estimation and range-Doppler imaging using signed measurements in LFMCW radar,” IEEE Trans. Aerosp. Electron. Syst., Early Access, pp. 1-19, 2019.
Dong2015
X. Dong, Y. Zhang, “A MAP approach for 1-bit compressive sensing in synthetic aperture radar imaging,” IEEE Geosci. Remote Sens. Lett., vol. 12, no. 6, pp. 1237-1241, 2015.
Richards
M. A. Richards, Fundamentals of Radar Signal Processing. New York: McGraw-Hill, 2005.
LiJian18SPL
C. Li, R. Zhang, J. Li, and P. Stoica, “Bayesian information criterion for
signed measurements with application to sinusoidal signals,” IEEE Signal
Process. Lett., vol. 25, no. 8, pp. 1251-1255, Aug. 2018.
LiJian19TSP
J. Ren, T. Zhang, J. Li and P. Stoica, “Sinusoidal parameter estimation from signed measurements via Majorization-Minimization based RELAX,”
IEEE Trans. Signal Process., vol. 67, no. 8, pp. 2173-2186, 2019.
KayEst
S. M. Kay, “Fundamentals of Statistical Signal Processing: Estimation Theory,” Prentice-Hall, Englewood Cliffs, N.J., 1993.
KayDet
S. M. Kay, “Fundamentals of Statistical Signal Processing: Detection Theory,” Prentice-Hall, Englewood Cliffs, N.J., p. 205, 1993.
Yang2
Z. Yang and L. Xie, “On gridless sparse methods for line spectral estimation from complete and incomplete data,” IEEE Trans. Signal Process., vol. 63, no. 12, pp. 3139-3153, 2015.
PC
S. Rangan, T. S. Rappaport and E. Erkip, “Millimeter-wave cellular wireless networks: potentials and challenges,” Proc.
IEEE, vol. 102, no. 3, pp. 366-385, 2014.
FS
O. Mehanna and N. Sidiropoulos, “Frugal sensing: Wideband power spectrum sensing from few bits,” IEEE Trans. on Signal Process., vol. 61, no. 10, pp. 2693-2703, 2013.
Fangsign
F. Li, J. Fang, H. Li and L. Huang, “Robust one-bit Bayesian compressed sensing with sign-flip errors,” IEEE Signal Process. Lett., vol. 22, no. 07, 2015.
VALSEEP
J. Zhu, Q. Zhang and X. Meng, “Gridless variational Bayesian inference of line spectral from quantized samples,” China Communications, vol. 18, no. 10, pp. 77-95, 2021.
Franceschetti
G. Franceschetti, V. Pascazio, G. Schirinzi, “Processing of signum coded SAR signal: theory and experiments,” IEE Proceedings F - Radar and Signal Processing, vol. 138, no. 3, pp. 192-198, 1991.
Lijian
J. Li, M. M. Naghsh, S. J. Zahabi, M. M. Hashemi, “Compressive radar sensing via one-bit sampling with time-varying thresholds,” ACSSC, Pacific Grove, CA, USA, 6-9 Nov. 2016.
Gianelli2
C. Gianelli, L. Xu, J. Li, P. Stoica, “One-bit compressive sampling with time-varying thresholds for multiple sinusoids,” CAMSAP, 10-13 Dec. 2017, Curacao, Netherlands Antilles.
LiC
C. Li, R. Zhang, J. Li and P. Stoica, “Bayesian information criterion for signed measurements with application to sinusoidal signals,” IEEE Signal Process. Lett., vol. 25, no. 8, pp. 1251-1255, 2018.
Fu
H. Fu and Y. Chi, “Quantized spectral compressed sensing: Cramér-Rao bounds and recovery algorithms,” IEEE Trans. Signal Process., vol. 66, no. 12, pp. 3268-3279, 2018.
onebitMIMOradaedet
L. Huang, “One-bit sampling based target detection in MIMO radar system,” Seminal Report, 2021, available: https://www.eet-china.com/mp/a101872.html.
CSquant
A. Zymnis, S. Boyd and E. Candès, “Compressed sensing with quantized measurements,” IEEE Signal Process. Lett., vol. 17, no. 2, pp. 149-152, 2010.
DWR2016
J. Tsui and C. H. Cheng, “Digital Techniques for Wideband Receivers,” 3rd Edition, Scitech Publishing, p. 280, 2015.
JianLiquant
Y. Cheng, X. Shang, J. Li and P. Stoica, “Interval Design for Signal Parameter Estimation From Quantized Data,” IEEE Trans. Signal Process., vol. 70, pp. 6011-6020, 2022.
guanyudet
G. Wang, J. Zhu and Z. Xu, “Asymptotically optimal one-bit quantizer design for weak-signal detection in generalized Gaussian noise and lossy binary communication channel,” Signal Process., vol. 154, pp. 207-216, 2018.
JiangzhuTSP15
J. Zhu, X. Lin, R. S. Blum and Y. Gu, “Parameter estimation from quantized observations in multiplicative noise environments,” IEEE Trans. Signal Process., vol. 63, no. 15, pp. 4037-4050, 2015.
NOMPCFAR
M. Xu, J. Zhu, J. Fang, N. Zhang and Z. Xu, “CFAR based NOMP for line spectral estimation and detection,” to appear in IEEE Trans. Aerosp. Electron. Syst..
|
http://arxiv.org/abs/2307.02045v1
|
20230705055956
|
Morphing of Quantum Phases When Hosting Current
|
[
"Mengmeng Wu",
"Xiao Liu",
"Renfei Wang",
"Yoon Jang Chung",
"Adbhut Gupta",
"Kirk W. Baldwin",
"Loren Pfeiffer",
"Xi Lin",
"Yang Liu"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
International Center for Quantum Materials,
Peking University, Haidian, Beijing, 100871, China
International Center for Quantum Materials,
Peking University, Haidian, Beijing, 100871, China
International Center for Quantum Materials,
Peking University, Haidian, Beijing, 100871, China
Department of Electrical Engineering,
Princeton University, Princeton, New Jersey, 08544, USA
Department of Electrical Engineering,
Princeton University, Princeton, New Jersey, 08544, USA
Department of Electrical Engineering,
Princeton University, Princeton, New Jersey, 08544, USA
Department of Electrical Engineering,
Princeton University, Princeton, New Jersey, 08544, USA
[email protected]
International Center for Quantum Materials,
Peking University, Haidian, Beijing, 100871, China
Interdisciplinary Institute of Light-Element Quantum Materials and Research Center for Light-Element Advanced Materials, Peking University, Haidian, Beijing, 100871, China
[email protected]
International Center for Quantum Materials,
Peking University, Haidian, Beijing, 100871, China
Measurement is the foundation of science, and is a subtle concept especially in quantum mechanics, where the action of detection interacts with the quantum system perturbatively. The property of a quantum system is captured from the stimulated evolution of either the system or the detecting reservoir. Transport measurement, which applies an electric field and studies the migration of charged particles, i.e. the current, is the most widely used technique. In ultra-high mobility two-dimensional systems, transport measurement reveals fruitful quantum phenomena such as the quantum Hall effect, the Aharonov-Bohm oscillation and ballistic trajectory of quasiparticles, the microwave induced zero resistance, the interference of quasiparticles, etc. The general assumption that the quantum phase remains unchanged with a sufficiently small probing current, unfortunately, is rarely examined experimentally. In this work, we probe the ultra-high mobility two-dimensional electron system via its interaction with a propagating surface acoustic wave and observe that the system becomes more incompressible when hosting a current.
Morphing of quantum phases when hosting current
Yang Liu
===============================================
Two-dimensional electron systems (2DES) with extremely low disorder
host a plethora of exotic quantum many-body states when
subjected to a strong perpendicular magnetic field B
<cit.>. The quantum Hall state is
an incompressible quantum liquid signaled by vanishing longitudinal
resistance and quantized Hall resistance at extremely low temperature
T <cit.>. At high Landau level fillings factors
ν>4, various non-uniform charge density waves such as stripe
phases are stabilized by the large extent of the electron wavefunction
<cit.>. The enigmatic 5/2 fractional quantum Hall state attracts tremendous interest <cit.>
because its quasi-particles might obey non-Abelian statistics and be
useful for topological quantum computing <cit.>.
Various experimental studies have been employed to study its topological
properties and quasi-particle statistics, such as weak
tunneling<cit.>, interferometry<cit.>, shot noise<cit.> and thermal
transport<cit.>. Most of these studies rely upon the
hypothesis that a quantum state is unperturbed by the tiny probing
current passing through the μm size device.
Surface acoustic wave (SAW) is a useful current-free technique
to investigate the property of 2DES <cit.>. The propagating piezo-electric field accompanying the SAW
interacts with the charge carriers, which in turn affects its velocity
(v) and attenuation. Qualitatively, this interaction is related to
the compressibility of 2DES: v increases when the 2DES becomes
incompressible and thus unable to respond to the SAW [Previous
works in Ref. <cit.> explains the velocity shift
with the conductivity. Such an analysis may not be suitable here
because our ultra-high mobility 2DES has a very long transport
scattering time τ_tr≃ 0.7 ns comparable to the SAW
frequency.]. In this work, we probe the 2DES using a pW-level,
continuous-wave SAW and discover that the ∼ 100 nA current
flowing through the ∼ 1 mm size sample causes a ∼ 0.1 ppm
(parts per million, 10^-6) increase of the SAW velocity at very
low T≲ 250 mK. Such a current-induced
SAW velocity shift illustrates that a close and careful
examination on the charge transport mechanism is essential and
imperative.
Our sample is made from a GaAs/AlGaAs wafer grown by molecular beam
epitaxy. The 2DES is confined in a 30-nm-wide quantum well, whose
electron density is 2.91×10^11 cm^-2 and low-temperature
mobility is about 2×10^7cm^2/(V·s). We make a Van der Pauw mesa (d_m = 1.2 mm) by wet etching, and then evaporate 5-μm-period interdigital transducers (IDTs) on each side of the mesa. 50 Ω resistance is connected in parallel to each IDT for broadband
impedance matching. When applied with an AC voltage whose
frequency matches the resonance condition, the IDT generates a
propagating SAW. The SAW will be captured by the IDT on the opposite side
of the sample as a voltage output through the piezoelectric effect, see Fig. 1(a). We use a
custom-built RF lock-in amplifier to analyze the amplitude and phase delay Φ of the output signal. The typical input RF power in this study is 1 nW (-61 dBmW), and only a tenth of which turns into SAW considering the attenuation of cables and the efficiency of the IDT. The SAW induced potential on the 2DES is only ∼ 10 μ eV, leading to ≲ 10^4 cm^-2 electron density fluctuation [See supplementary material for more information.]. We observe no difference in the measurement result using 3-orders-of-magnitude smaller input power <cit.>. The experiment is carried out in a dilution refrigerator whose base temperature is around 10 mK.
Figures 2(a) and 2(b) show the magneto-resistance (R_xx, R_xy)
and the measured relative SAW velocity shift η=Δ v/v_0. The
reference SAW velocity at low field v_0 (≃ 2950 m/s) is
calculated from the IDT period (5 μm) and the measured
resonant frequency f_c (589.5 MHz). We can derive the delay time
∂Φ/∂ (2π f) = 1.1 μ s and 54 ns near and away from
the SAW resonance peak from Fig. 1(b), consistent with the d ∼ 3
mm SAW travel distance and ∼ 11-meter-long coaxial cable (5.5 m each
way). A positive (negative) velocity shift results in a decrease (increase) in the delay time detected by the phase shift Φ of the received signal. We then directly deduce η≃Φ/(2π f_cτ)
from the measured SAW phase shift Φ, where
τ≃ (1.1 μs - 54 ns)· d_m/d is the SAW's
propagating time through the 2DES. At high B fields, the η
trace exhibits minima (corresponding to enhanced SAW velocity) when
the 2DES forms an incompressible quantum Hall state and its screening
capability vanishes <cit.>, see Fig. 2(b). In short, η is a measure of the 2DES compressibility. η at integer fillings
increases linearly with decreasing ν and reaches its maximum
η_m ≃ 124 ppm at the extreme quantum limit
ν=0 <cit.>.
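As a quick numerical cross-check of the quoted delay times and of the conversion η ≃ Φ/(2π f_c τ) used above, the following sketch reproduces the ∼1 μs SAW transit time and the ∼55 ns cable delay, and shows the phase shift corresponding to a 1 ppm velocity change. The coaxial-cable signal speed (roughly 2×10^8 m/s) is our assumption and is not given in the text.

import numpy as np

# Quantities quoted in the text
v0 = 2950.0        # SAW velocity (m/s)
f_c = 589.5e6      # IDT resonance frequency (Hz)
d = 3.0e-3         # IDT-to-IDT SAW travel distance (m)
d_m = 1.2e-3       # Van der Pauw mesa size (m)
L_rf = 11.0        # total coaxial cable length (m), 5.5 m each way
v_rf = 2.0e8       # assumed signal speed in the cable (m/s), about 2/3 c

t_saw = d / v0         # SAW transit time between the IDTs
t_cable = L_rf / v_rf  # cable group delay
print(f"SAW transit time : {t_saw*1e6:.2f} us (text: ~1.1 us on resonance)")
print(f"cable delay      : {t_cable*1e9:.0f} ns (text: ~54 ns off resonance)")

# Propagation time across the mesa; the text uses tau = (1.1 us - 54 ns) * d_m / d
tau = t_saw * d_m / d
# Phase shift of the received signal corresponding to a 1 ppm SAW velocity change
eta = 1e-6
phi = eta * 2 * np.pi * f_c * tau
print(f"phase shift for 1 ppm: {np.degrees(phi)*1e3:.0f} millidegrees")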
Unlike the vanishing plateau seen in R_xx, we observe
“V”-shape minima in η. In the vicinity of integer filling factors ν=N+ν^*, where N is an integer, the 2DES consists of an
incompressible quantum Hall liquid and additional
quasiparticles/quasiholes whose filling factor |ν^*|< 1. The
fact that η has a linear dependence on the
quasiparticles/quasiholes density n^*=n|ν^*|/ν suggests that the
quantum phase formed by these dilute quasiparticles/quasiholes is
compressible <cit.>. The SAW velocity enhancement is also seen
as clear “V”-shape minima at ν =4/3, 5/3, 6/5, etc., as well as
developing minima at ν =5/2, 7/3, and 11/5 where fractional
quantum Hall states develop. η enhancement is seen when the SAW
propagates along the hard axis of the stripe phase formed at ν=9/2,
11/2, etc., consistent with previous reports
<cit.>. Interestingly, η is quite large near
ν=3/2 where the 2DES forms a compressible composite Fermion Fermi
sea, possibly because the composite Fermions with extremely large effective
mass are inert to the SAW-induced field [Such behavior is
surprising but not inconsistent with the previous report, since our frequency is much lower than the
geometric resonance condition <cit.>.].
Our setup has an extremely low noise background (< -160 dBm/Hz),
leading to a resolution of 0.1 ppm in η at -61 dBm input
power. We are able to resolve very delicate responses of the 2DES while
preserving the fragile many-body states. When a 500 nA (rms amplitude)
AC current passes through the 2DES, we observe an oscillation in η with a period of about 4 s and an amplitude of ∼ 2 ppm, see the expanded
inset in the Fig. 2(b). We apply a digital band-pass filter to the
Fig. 2(b) data and plot the oscillation (pink shade) and its amplitude
(red trace) in Fig. 2(c). Alternatively, we can use a lock-in
amplifier to measure the amplitude of this oscillation (black trace).
The oscillation in Fig. 2(c) clearly evidences a perturbation of the quantum phase when current passes through the 2DES. We notice that the oscillation frequency is twice the current frequency
(f_0= 0.125 Hz), see the power spectrum in Fig. 2(c) inset. In order to explain this observation, we investigate the current-induced velocity shift (CIVS) δη=η(I)-η(0) using DC current in Fig. 3(a). δη is an even function of I, and increases nearly linearly by 8 ppm when |I| increases from 0 to 1 μA. If we sweep the current from -0.5 to 0.5 μA, η displays a triangle waveform indicating δη∝ |I|. Therefore, if the input current is sinusoidal at frequency f_0, the leading component of δη would be the second harmonic at frequency 2f_0, see the inset of Fig. 3(c). We can then define a parameter κ=η_m^-1·(∂η/∂ |I|) to describe the effect of current, which is nearly unchanged when we rotate the current direction to be parallel to SAW, see Fig. 3(e). Therefore, we tentatively conclude that, to the leading order, the SAW velocity has a linear dependence on the amplitude of
current passing through the 2DES, no matter which direction the
current flows.
At integer filling factors, unlike the “V"-shape minima in the η
trace and the plateau in the R_xx trace, κ presents a
“W"-shape minimum – it has a positive peak at exact integer ν = 1, 2, 3, etc. and
reduces to zero on both sides before increasing. Between ν=1
and 2, κ exhibits clear minima at ν=4/3, 5/3, 7/5, 8/5 and
6/5 when fractional quantum Hall states form, similar to the η
and R_xx traces. Surprisingly, clear minima can be seen in the
κ trace corresponding to the fragile fractional quantum Hall states at ν=5/2, 7/3, 8/3, 11/5 and 14/5 while the η trace
only shows a glimmer of minima.
We measure the CIVS amplitude δη_p at different filling
factors as a function of the AC current amplitude I_p in Fig.
3(c). At the transition between fractional quantum Hall states, δη_p increases linearly and then saturates at large
current amplitude, consistent with a constant
κ≃η_m^-1· (δη_p/δ I_p). At fillings where the fractional
quantum Hall states are stable, we discover a clear threshold behavior
where δη_p remains almost zero until I_p reaches about
600 nA. We also observe a small but positive κ at ν=3/2 where the 2DES forms the compressible Fermi sea.
We can rule out the possibility that finite κ is caused
by the heating effect. Firstly, δη is proportional to |I|
in Fig. 3(a) instead of I^2. Secondly, the κ dip at a fragile quantum Hall state such as ν=5/2 is much more pronounced than that at the composite-Fermion Fermi sea at ν=3/2, although the former is more sensitive to the temperature. Besides, we note in Fig. 2(c) that the measured κ is almost always positive, indicating an increased SAW velocity when the current increases. It is surprising to conclude that the 2DES becomes more incompressible when carrying current. Intuitively, the current cripples the incompressible phases by introducing more
defects/inhomogeneities and broadening the domain walls, so that the
2DES are expected to be more compressible and conductive.
Unfortunately, there has been very little investigation of how a quantum phase morphs when carrying a non-destructive current. Meanwhile, a large κ is seen at the transition between neighboring quantum Hall states, where a rigorous description of charge transport must involve quasiparticle localization and percolation.
Figure 4(b) shows κ measured at different T. At all
fields, positive κ decreases as T increases, and eventually vanishes when T≃ 250 mK. The summarized κ vs. T data at
different fields in Fig. 4(c) suggests an exponential
dependence κ∝exp(-T/T_C) where the characteristic
temperature T_C is about 50 mK at 2<ν<3 and 70 mK at
1<ν<2. More data show that the T_C is insensitive to the
probing SAW frequencies/wavelengths <cit.>. It is important to
mention that the vanishing of κ is unlikely a direct result of
reduced quantum Hall stability, since the quantum Hall state around
3/2 remains quite strong at T≃ 250 mK when κ vanishes.
We propose a simple schematic model to understand the positive
κ in Fig. 4(a). At ν = 4/3 and 7/5, the electrons in the
partially filled Landau level form ν=1/3 and 2/5 fractional
quantum Hall states, respectively, if the 2DES is fully spin-polarized.
These two states can be explained as the ν_CF=1 and 2
integer quantum Hall states of composite Fermions, and the phase
transition happens at ν = 11/8 when the average composite Fermion
filling factor ν_CF = 1.5. Because of the density
fluctuation, the regions with ν_CF < 1.5 (ν_CF > 1.5)
consist of an incompressible ν = 4/3 (ν = 7/5) quantum Hall state and additional movable negative-charged quasiparticles
(positive-charged quasiholes), see Fig. 4(a). When a current passes
through the sample, e.g. from left to right, quasiparticles move
leftward and quasiholes move rightward. The effective magnetic field
poses a Lorentz force, leading to the accumulation and depletion of
quasiparticles/quasiholes at the phase boundary. The depletion
(accumulation) of quasiholes and accumulation (depletion) of
quasiparticles occur at the same boundary, leading to an increase
(decrease) in the local density and the formation of incompressible
quantum Hall states with ν_CF=2 (ν_CF=1). In
short, at the quantum Hall transition, the current passing through the
disordered 2DES induces incompressible phases at the domain
boundaries. A similar discussion can easily be extended to quantum Hall states, where current flow through the 2DES can drive the sparsely distributed, disorder-pinned quasiparticles/quasiholes out of their
equilibrium positions, and piles them at the boundary of the incompressible liquid phase.
In conclusion, we use the interaction between SAW and electrons to
study the morphing of quantum phases in ultra-high mobility
2DES. We discover that the SAW velocity increases, suggesting that the
2DES becomes more incompressible when a non-destructive current
flows through the 2DES. This effect can only be resolved with a greatly enhanced sound-velocity resolution at very low temperatures, and it disappears at T≳ 250 mK.
We acknowledge support by the National Natural Science Foundation of China (Grant No. 92065104 and 12074010), the National Key Research and Development Program of China (2021YFA1401902) and the National Basic Research Program of China (Grant No. 2019YFA0308403) for sample fabrication and measurement. This research is funded in part by the Gordon and Betty Moore Foundation’s EPiQS Initiative, Grant GBMF9615 to L. N. Pfeiffer, and by the National Science Foundation MRSEC grant DMR 2011750 to Princeton University. We thank Xin Wan, Zhao Liu and Bo Yang for valuable discussions.
§ SUPPLEMENTARY MATERIALS
§.§ Electron density fluctuation induced by SAW's piezoelectric field
In this section, we provide a rough estimation of the electron density
fluctuation of 2DES induced by SAW. The SAW speed v_0≃ 2950 m/s
is much smaller than the Fermi velocity of charged quasiparticles
(v_F≃ 3× 10^5 m/s), and the transport
scattering time is of the same order as the SAW period. We can assume
that the response of 2DES to the SAW-induced electric field is fast
enough so that the 2DES is almost always at equilibrium and the
problem can be solved statically. The propagating piezoelectric field
introduced by SAW at a given time (e. g. t=0) is
E⃗_ext = E⃗·cos(-ω t+kx)|_t=0=E⃗·cos(kx)
where k denotes the wavevector of the SAW. The SAW-induced external potential is then
Φ_ext =-∫E⃗_ext(x)dx = -E/ksin(kx)
Using the Thomas-Fermi screening model, the induced charge is
ρ_ind = -e^2 ·dn/dμΦ_scr/D
where dn/dμ is the compressibility of the 2DES, i.e. the density of states at the Fermi energy for a Fermi sea. We model the 2DES as a flat plate of uniform thickness D = 20 nm and neglect the stray field.
The screened potential Φ_scr is the sum of the external
potential and the induced potential
Φ_scr = (Φ_ind+Φ_ext)
Φ_ind can be calculated from ρ_ind
using Gauss's law
∇ ^2 Φ _ind = -ρ _ind/ ϵ_0 ϵ_r
Combining the above relations, the induced potential can be deduced by
the following equation self-consistently.
∇ ^2 Φ_ind = e^2 dn/dμ1/D ϵ_0 ϵ_r (Φ_ind+Φ_ext)
It is easy to see that
Φ_ind = -Φ_ext/1+D ϵ_0 ϵ_r k^2/e^2 dn/dμ
In GaAs ϵ_r = 13.1, the density of state in 2DES at zero
field is dn/dμ = m_e/2 πħ^2. The induced
potential cancels the external potential if
D ϵ_0 ϵ_r k^2/(e^2 dn/dμ) ≪ 1.
We can then deduce the induced electron density:
n_ind = -ρ _ind· D/e = ϵ_0 ϵ_r∇ ^2 Φ _ind· D/e =D ϵ_0 ϵ_r k^2 Φ_ext/e
If the 2DES is incompressible, i.e. dn/dμ=0, Φ_ind and
n_ind are both zero and the interaction between 2DES and SAW vanishes. The induced electrons with n_ind propagate along with the SAW. The power dissipation and
phase delay of SAW occur when induced electrons are scattered by impurities or
the quasiparticle velocity is comparable to or less than the SAW
velocity.
Φ_ext can be estimated from the signal amplitude received by
the output IDT. In our experiment, as shown in Fig. S1, the typical input RF power is -61 dBm (about 1 nW) and the output is about -97 dBm. The cable attenuation is about -5 dB at 600 MHz, so that the power at the output IDT is -92 dBm. The IDT capacitance C_IF ∼ 20 pF corresponds to
∼ 10 Ω impedance at 600 MHz, sufficiently small to be
neglected. We can estimate that the voltage at the receiving IDT is about 10μV, leading to ∼ 10^4 cm^-2 electron density fluctuation,
orders of magnitude smaller than the density fluctuation of the 2DES itself (∼ 10^9 cm^-2 even in very high-mobility samples <cit.>).
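The estimate above can be reproduced with a short numerical sketch. The GaAs effective mass entering the zero-field density of states (m* ≈ 0.067 m_e) is our assumption; all other numbers are taken from this section.

import numpy as np

e = 1.602e-19             # elementary charge (C)
eps0 = 8.854e-12          # vacuum permittivity (F/m)
hbar = 1.055e-34          # reduced Planck constant (J s)
m_eff = 0.067 * 9.109e-31 # GaAs effective mass (assumed, kg)

eps_r = 13.1              # GaAs relative permittivity
D = 20e-9                 # assumed 2DES thickness (m)
k = 2 * np.pi / 5e-6      # SAW wavevector for the 5 um IDT period (1/m)
phi_ext = 10e-6           # SAW-induced external potential (V)

# Zero-field density of states dn/dmu = m*/(2*pi*hbar^2)
dn_dmu = m_eff / (2 * np.pi * hbar**2)

# Phi_ind = -Phi_ext / (1 + x) with x = D eps0 eps_r k^2 / (e^2 dn/dmu)
x = D * eps0 * eps_r * k**2 / (e**2 * dn_dmu)
print(f"D eps0 eps_r k^2 / (e^2 dn/dmu) = {x:.1e}  (<< 1, nearly complete screening)")

# Induced sheet density n_ind = D eps0 eps_r k^2 Phi_ext / e, converted to cm^-2
n_ind = D * eps0 * eps_r * k**2 * phi_ext / e
print(f"n_ind = {n_ind * 1e-4:.1e} cm^-2   (text: ~1e4 cm^-2)")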
In the experiment, the SAW excitation power (-61dBmW) is a trade-off
between the principle of “non-perturbative” measurement and the
signal-to-noise ratio (SNR). Namely, the power should be sufficiently
low so that n_ind is negligible, while it should be large enough
to resolve interesting phenomena. The proper input power is chosen by
the following rationale. We first measure η using the lowest
possible power, e.g. -91 dBm which corresponds to n_ind∼ 300
cm^-2. We then increase the power to -61 dBm. We find that the
two measured η traces are almost identical, see Fig. S2, suggesting that -61 dBm is sufficiently low. The power received by our lock-in amplifier is about
-92dBm. Our setup has -170 dBm noise background when using 300 ms time constant
(corresponds to about 0.3 Hz effective noise bandwidth). The 80 dB SNR ratio leads to about 0.1 ppm resolution in η, sufficiently high for this study.
§.§ SAW velocity shift near integer fillings
Near the integer Landau level filling factors ν=N, we observe “V”-shape minima instead of plateaus in η. This might be related to the finite compressibility of the Wigner crystal formed by the dilute quasiparticles/quasiholes whose effective filling factor ν^*=|ν-N| is small. In Fig. S3(a),
we observe a linear dependence between η and the
effective quasi-particle density n^* = n|ν^*|/ν.
η at exact integer fillings ν = N increases when the quantum
Hall state becomes stronger. We summarize η at different integer
filling in Fig. S3(b). η has a rather linear dependence on ν
at ν < 10, whose intercept at ν=0 is about η_m = 124.2
ppm. η_m marks the maximum effect the 2DES has on the SAW and can be used to normalize κ.
§.§ The origin of the second harmonics
As shown in Fig. 3 of the manuscript, the SAW velocity shift is
proportional to the amplitude of current. Therefore, an AC
current passing through the sample
I = I_psin(ω t)
induces a SAW velocity shift
δη = δη_p|sin(ω t)|=η_mκ I_p|sin(ω t)|
The Fourier series of δη can be expanded as
δη = δη_p(2/π-4/3πcos(2ω t)- ⋯ -2(1+(-1)^n)/(n^2-1)πcos(nω t))
where n is a positive even integer. The dominant AC component of
the signal is the second harmonic, whose amplitude is
δη_2f = δη_p·4/3π
Therefore, we can measure δη_2f and deduce δη_p.
We can also deduce κ by
κ = 1/η_m∂η_p/∂ I_p = 3π/4η_m∂η_2f/∂ I_p
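The expansion and the 4/(3π) conversion factor can be checked numerically with a minimal sketch (not part of the original analysis):

import numpy as np

f0 = 0.125                                   # current frequency (Hz)
t = np.linspace(0, 1/f0, 200000, endpoint=False)
s = np.abs(np.sin(2*np.pi*f0*t))             # delta_eta / delta_eta_p

def a(n):
    # Fourier cosine coefficient of s at frequency n*f0
    return 2*np.mean(s*np.cos(2*np.pi*n*f0*t))

print("DC level     :", np.mean(s), "  expected 2/pi =", 2/np.pi)
print("2f0 amplitude:", abs(a(2)), "  expected 4/(3 pi) =", 4/(3*np.pi))
print("4f0 amplitude:", abs(a(4)), "  expected 4/(15 pi) =", 4/(15*np.pi))
print("odd harmonics:", abs(a(1)), abs(a(3)), "  expected ~0")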
§.§ Data using lower SAW frequency
We repeat the acoustic study in another sample. The IDT period λ is 12
μm and the SAW resonance frequency is f_c = 243.1 MHz. The primary feature of κ in this device (see Fig. S4) is similar
to the data reported in the main
text. Similarly, κ decreases with increasing temperature and
finally vanishes when T≃ 250 mK. We conclude that the
phenomena we reported in the manuscript have no dependence on the SAW
frequency and wavelength.
§.§ Device fabrication
The samples are 5 × 5 mm squares cleaved directly from a GaAs/AlGaAs wafer grown via molecular beam epitaxy. The 30-nm-wide quantum well is located 390 nm below the surface. We use three δ-doping layers, the deepest of which is about 80 nm below the quantum well. The low-temperature mobility of the 2DES is about 2×10^7 cm^2/(V · s).
We make ohmic contacts at the four corners of the sample by depositing
Ge/Au/Ni/Au alloy and annealing at 440 °C for 30 minutes. A d_m = 1.2 mm
square Van der Pauw mesa is then created by wet etching using a H_2O:H_2O_2:H_2SO_4 solution (240:8:1) for 4
minutes. The etching depth is approximately 800 nm. We pattern the
interdigital transducers (IDTs) with 400 Å of evaporated Al using a
maskless laser lithography system and lift-off process. Each IDT
has 170 pairs of fingers with a periodicity of λ = 5 μm. The center-to-center spacing of opposite pair of IDTs is 2450 μm.
|
http://arxiv.org/abs/2307.03128v1
|
20230706165521
|
Principal subbundles for dimension reduction
|
[
"Morten Akhøj",
"James Benn",
"Erlend Grong",
"Stefan Sommer",
"Xavier Pennec"
] |
stat.ME
|
[
"stat.ME",
"cs.CV",
"cs.LG",
"math.DG",
"math.ST",
"stat.TH"
] |
M. Akhøj et al.
^∗ Corresponding author
^1 Université Côte d'Azur and INRIA, Sophia Antipolis, France
^2 Department of Computer Science, University of Copenhagen, Denmark
^3 Department of mathematics, University of Bergen, Norway
{morten.pedersen, james.benn, xavier.pennec}@inria.fr, [email protected], [email protected]
Principal subbundles for dimension reduction
Morten Akhøj^1,2, ∗ James Benn^1
Erlend Grong^3 Stefan Sommer^2 Xavier Pennec^1
Submitted July 6, 2023
======================================================================================
In this paper we demonstrate how sub-Riemannian geometry can be used for manifold learning and surface reconstruction by combining local linear approximations of a point cloud to obtain lower dimensional bundles.
Local approximations obtained by local PCAs are collected into a rank k tangent subbundle on ℝ^d, k<d, which we call a principal subbundle. This determines a sub-Riemannian metric on ^d. We show that sub-Riemannian geodesics with respect to this metric can successfully be applied to a number of important problems, such as: explicit construction of an approximating submanifold M, construction of a representation of the point-cloud in ^k, and computation of distances between observations, taking the learned geometry into account. The reconstruction is guaranteed to equal the true submanifold in the limit case where tangent spaces are estimated exactly. Via simulations, we show that the framework is robust when applied to noisy data. Furthermore, the framework generalizes to observations on an a priori known Riemannian manifold.
§ INTRODUCTION
This paper presents a framework for learning an unknown, lower dimensional geometry from a set of observations {x_1, …, x_N} in ^d, or more generally on a Riemannian manifold. In our presentation we will assume ^d-valued data unless otherwise specified - the case of manifold-valued data is presented in Section <ref>. The framework provides concrete methods for solving the following three problems,
* Metric learning, i.e. learning a distance metric, d(·, ·) : ^d ×^d →_≥0, expressing the unknown underlying geometry (see <cit.> for an overview).
* Manifold reconstruction, i.e. estimating a k-dimensional smooth submanifold M⊂^d around which the data might be assumed to be distributed, an assumption known as the manifold hypothesis <cit.>. This includes surface reconstruction for observations in ^3 <cit.>.
* Dimension reduction, in the specific sense of learning a representation of the data in ^k, k < d, that preserves various chosen local properties, e.g. pairwise distances and angles between neighbouring points. This problem is often called manifold learning <cit.>, referring to the fact that the manifold hypothesis is often assumed, although most such methods do not reconstruct the manifold in ^d.
Each of these problems constitutes a whole field of research in itself. Indeed, their assumptions on the data can differ; while methods for (B) and (C) assume a lower dimensional structure of the data, this is not necessarily the case in (A). However, the framework described in this paper can be used to do both (A), (B) and (C). Our basic assumption is that the data is locally linear, i.e. locally well approximated by k-dimensional affine linear subspaces. This assumption holds under the manifold hypothesis, where the tangent space at each point is a good approximation.
However, the assumption may also hold even if the manifold hypothesis fails, due to the phenomenon of non-integrability (see Section <ref>). In this sense, the framework of principal subbundles relaxes the manifold assumption.
At each point in ^d we estimate a k-dimensional linear approximation of the data by an eigenspace of a local principal component analysis (PCA). Technically, the collection of these eigenspaces forms a subbundle on ^d. In this work we exploit the fact that such a subbundle determines a sub-Riemannian metric on ^d. Under such a metric a curve in ^d has finite length if and only if it is horizontal, i.e. if its velocity vector lies within the subbundle at all time points. Due to the nature of the chosen subbundle, a horizontal curve initialized within the point cloud is expected to evolve along the point cloud. Thus, our framework provides a method for metric learning (A) in the sense that it estimates a sub-Riemannian metric on ^d, which, under certain assumptions, induces a distance metric on ^d. In particular, it is a geodesic distance, meaning that the distance between p,q∈^d equals the length of the shortest horizontal curve connecting p and q. A sub-Riemannian metric can be thought of as a Riemannian metric of lower rank k≤ d. To the best of our knowledge, the low-rank (i.e. sub-Riemannian) case has not yet been explored in Riemannian approaches to metric learning (e.g. <cit.>, <cit.>). But it is exactly this property that enables the metric to also provide solutions to problems (B) and (C). It yields a method for manifold reconstruction (B) since the sub-Riemannian metric induces a diffeomorphism, ϕ_μ : ^k ⊃ U →ϕ_μ(U) ⊂^d, whose image is a smooth k-dimensional submanifold M^k approximating the data around a chosen base point μ∈^d. Technically, ϕ_μ is a restriction of the sub-Riemannian exponential map at μ. Finally, the framework yields a method for dimension reduction (C) since U ⊂^k is a coordinate chart for the manifold, so that, after projection of the observations to M^k, each projected observation x_i can be represented as ϕ_μ^-1(x_i) ∈^k.
Methods for manifold reconstruction (B) and dimension reduction (C) often deal with the problem of how to combine local linear approximations into a global, non-linear representation. In the field of surface reconstruction from 3D point clouds, state-of-the-art methods such as Poisson surface reconstruction (PSR) <cit.> and Implicit Geometric Regularization (IGR) <cit.> are based on estimation of tangent spaces, which is done via estimation of normals (see <cit.> for a survey and benchmarking). A fundamental obstacle to this strategy of reconstructing a submanifold from tangent space approximations, e.g. reconstructing a surface from a normal field, is that the subspaces determine a submanifold if and only if they form an integrable subbundle, cf. the Frobenius theorem (see Section <ref> below). If the subspaces are estimated from a (finite) set of observations, integrability cannot be assumed to hold, even in the absence of noise.
PSR and IGR deal with this problem by finding a surface whose normals minimize the distance to the empirical (noisy) normals. This surface is constructed by solving a Poisson equation (PSR) or by fitting a neural network (IGR). However, the approach of fitting normals does not generalize to the case of codimension greater than one, since normals are not defined in this case. Likewise, within manifold learning, methods based on alignments of local linear approximations (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), can be thought of as different ways to deal with non-integrability. Such methods are often based on eigendecomposition of a kernel-type matrix, or other linear-algebraic computations. This strategy is useful for finding a representation in ℝ^k (problem (C)) but not for reconstructing an underlying manifold (problem (B)). The approach presented in this paper is different. We combine the local linear approximations into a global representation by integrating a system of second-order ordinary differential equations, the sub-Riemannian geodesic equations. For k=1, this integration yields the flow of the first eigenvector field, called the principal flow in <cit.>. There are, however, important differences between principal flows and our framework for k=1, see the discussion in Section <ref> and numerical results in Section <ref>. A follow-up work to the principal flows is the Principal submanifolds <cit.>, where the aim is to leverage k eigenvectors to construct a k-dimensional submanifold approximating the data. This method is closely related to ours, in that it is based on horizontal curves. A crucial difference, however, is that the curves in <cit.> are defined by an algorithmic procedure with no theoretical guarantees and the output of the method is a subset of the ambient space whose properties are largely unknown, such as whether it is in fact a submanifold.
A basic motivation and justification for our method is the following observation: if one had access to the true tangent spaces, e.g. via a frame of vector fields spanning them, then the Riemannian geodesic equation w.r.t. the corresponding Riemannian metric will generate an open subset (a normal chart) of the true manifold. I.e. it will generate an exact reconstruction, locally. When the frame is non-integrable, which is likely the case when it is estimated from data, the more general sub-Riemannian framework is needed. We show that, surprisingly, we can still generate a submanifold in this setting, and thereby give solutions to problems (B) and (C). Our framework thus offers a new way to form a global representation from local linear ones that seems natural from the point of view of differential geometry.
Contributions and overview of the paper
Our main contribution is the idea of collecting local PCA's subspaces into a tangent subbundle and showing how the induced sub-Riemannian structure can be used to model the data. In Section <ref>, we define principal subbundles on ^d and prove smoothness properties. In Section <ref> we present sub-Riemannian geometry on ^d. A large part of this section is devoted to background theory, with some exceptions, e.g. subsection <ref> where we prove that a certain restriction of the sub-Riemannian exponential map is a diffeomorphism, thus generating a submanifold even if the subbundle is non-integrable. This is the crucial result showing the usefulness of sub-Riemannian geometry for manifold reconstruction. In Section <ref> we discuss the particular sub-Riemannian geometry induced by the principal subbundle. In Section <ref> we show that the framework generalizes to the case of observations on an a priori known Riemannian manifold. Section <ref> presents numerical solutions to examples of problems (A) (metric learning), (B) (manifold reconstruction) and (C) (dimension reduction) for observations in ^d and on the sphere.
§ PRINCIPAL SUBBUNDLES
In this section, we define the principal subbundle as a collection of eigenspaces of local PCAs. Recall that the tangent bundle on ℝ^d, Tℝ^d, can be identified with ℝ^d ×ℝ^d. For some subset U ⊂ℝ^d, the tangent bundle on U, TU ⊂ Tℝ^d, can be identified with U ×ℝ^d. A rank k subbundle of TU is a collection of k-dimensional subspaces associated to points in U, that is
ℰ = {(x,v) | x∈ U, v∈ℰ_x },
where each ℰ_x is a k-dimensional subspace of ℝ^d. Given a data set {x_i}_i=1..N⊂ℝ^d, we will define the principal subbundle as the subbundle for which each ℰ_x is the span of the first k eigenvectors of a centered local PCA computed at x ∈ℝ^d. We detail this construction below.
§.§ Local PCA at the local mean
Let x_1, …, x_N be observations in ℝ^d. By local PCA at p∈^d we mean the extraction of eigenvectors of the following weighted and centered second moment.
Let K_α : ℝ_≥ 0→ℝ_≥ 0 be a smooth, decaying
kernel function with range parameter α > 0.
At a point p ∈ℝ^d, the normalized weight of observation x_i is
w_i(p) := K_α(‖ x_i - p ‖)/∑_j=1^N K_α(‖ x_j - p ‖) ,
where ‖·‖ is the standard norm on ^d. The weighted first moment (the local mean) and the centered weighted second moment (the local covariance matrix) are then:
m(p) = ∑_i=1^N w_i(p) x_i , Σ_α(p) := ∑_i=1^N w_i(m(p)) (x_i - m(p)) (x_i - m(p))^T ∈ℝ^d × d.
To save computational time, instead of using w_i(m(p)) in Σ_α(p) we suggest to use w_i(p), i.e. not recomputing the weights at m(p). This cheaper version is used for the experiments in Sections <ref>-<ref>.
For K_α constantly equal to 1 (or α = ∞), Σ_α(p) is the ordinary mean-centered covariance matrix, independent of p. In our experiments we use a gaussian kernel with standard deviation α. A motivation for using local PCA's is the following. Under the manifold hypothesis, with an underlying manifold of dimension k, the k-dimensional eigenspace of a local PCA at an observation x_i converges to the true tangent space of that submanifold at x_i in the limit of zero noise and the number of observations going to infinity (see e.g. <cit.>, Theorem B.1, for a convergence result).
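For illustration, a minimal numpy sketch of the weighted local PCA above is given below; the function and variable names are ours, and, following the remark above, the weights are not recomputed at the local mean.

import numpy as np

def local_pca(p, X, alpha, k):
    # Weighted local PCA at p: local mean, local covariance and its top-k eigenvectors.
    # p: (d,) evaluation point, X: (N, d) observations, alpha: kernel range, k: rank.
    w = np.exp(-0.5 * np.sum((X - p)**2, axis=1) / alpha**2)   # Gaussian kernel weights
    w = w / w.sum()                                            # normalized weights w_i(p)
    m = w @ X                                                  # local mean m(p)
    C = (X - m).T @ (w[:, None] * (X - m))                     # local covariance Sigma_alpha(p)
    vals, vecs = np.linalg.eigh(C)                             # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    assert vals[k-1] > vals[k], "p is (numerically) a singular point"
    return m, vals, vecs[:, :k]                                # columns span the fibre at p

# Example: noisy unit circle in R^2, rank-1 principal subbundle
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(500, 2))
m, vals, F = local_pca(np.array([1.0, 0.0]), X, alpha=0.3, k=1)
print("local mean:", m, " first eigenvector (approx. tangent direction):", F[:, 0])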
§.§ Eigenvector fields and the principal subbundle
We define the principal subbundle at p ∈^d as a k-dimensional eigenspace of the weighted second moment at p. For it to be well-defined at p, the k'th and k+1'th eigenvalues of the second moment at p should be different. I.e. the subbundle is defined only outside the following set of points, which we will call singular,
𝒮_α, k := {p ∈ℝ^d |λ_k(p) = λ_k+1(p) }, 1 ≤ k ≤ d
where λ_1(p) ≥…≥λ_d(p) are the eigenvalues of Σ_α(p) ∈^d× d.
Let λ_1(p) ≥…≥λ_d(p) be the eigenvalues of Σ_α(p) ∈^d× d
with associated eigenvectors e_1(p),…, e_d(p).
Let 𝒮_α, k be the set of singular points (Eq. (<ref>)).
Then the principal subbundle on ^d ∖𝒮_α, k is defined as
ℰ^k, α = {(p,v) | p∈ℝ^d ∖𝒮_α, k, v∈span{e_1(p),…, e_k(p)}}⊂ T (ℝ^d ∖𝒮_α, k).
We consider it an assumption on the data, and the chosen parameters, that λ_k(p) ≠λ_k+1(p) at all points where we want to evaluate the principal subbundle. In our computations we have not encountered points where the assumption was violated.
Cf. the proof of Proposition <ref> (below), if λ_k(p) > λ_k+1(p) at some p ∈^d, then this property holds on an open set around p.
Note that the principal subbundle only depends on the eigenspaces, not the choice of eigenvectors. The latter are not uniquely determined, they depend on a choice of sign and, in the case of repeated eigenvalues, a rotation within a subspace. In order to define a sub-Riemannian structure from this subbundle it needs to be smooth, which is satisfied cf. Proposition <ref> below. A closely related result, Lemma <ref> below, states that if an eigenvalue λ' at p∈^d has multiplicity 1, then there exists a smooth vector field on an open subset O ⊂^d around p which is an eigenvector for Σ_α(x) at each x ∈ O. We call this vector field an eigenvector field.
Let e' be an eigenvector of Σ_α(p) at p ∈^d with eigenvalue λ' of multiplicity 1. Then there exists an open subset O(p) ⊂^d around p and smooth maps e : O(p) →^d and λ : O(p) →_≥ 0 satisfying e(p) = e', λ(p) = λ', ‖ e(x) ‖ = 1 and Σ_α(x) e(x)= λ(x) e(x) for all x ∈ O(p).
This result follows directly from <cit.>, Theorem 2.3, since Σ_α is a smooth map. From this result on eigenvectors, one can conclude that the eigenspaces are smooth at p if either the eigenvalues λ_1(p), …, λ_k+1(p) are distinct, or λ_k(p), …, λ_d(p) are distinct at p ∈^d. However, we can in fact show smoothness of the subbundle under the milder, indeed minimal, condition that λ_k(p) > λ_k+1(p) (Proposition <ref>). Appendix <ref> contains the proof of this and all other results in the paper.
The principal subbundle, defined on ℝ^d∖𝒮_α, k, is smooth.
Figure <ref> illustrates the principal subbundle (blue arrows) induced by point clouds in ^2 and ^3, including the effect of centering the second moment at the local mean.
We are interested in studying curves whose velocity vectors are constrained to lie in the principal subbundle (i.e. eigenspaces of local PCA's). This can be done using sub-Riemannian geometry, which we introduce next.
§ SUB-RIEMANNIAN GEOMETRY ON ^D
We now introduce basic notions of sub-Riemannian geometry on ^d. We focus on the special case that we need, where the sub-Riemannian metric is a restriction of the standard Euclidean inner product. This viewpoint is not presented in sources that we know of, so we devote some space to it. For more comprehensive introductions see e.g. <cit.> or <cit.>. We strive to make the presentation accessible to someone with only a slight knowledge of differential geometry.
§.§ Horizontal curves and the sub-Riemannian distance
In the special case that we consider, a sub-Riemannian structure on ^d is fully determined by a rank k subbundle 𝒟⊂ T^d. The subbundle can be represented as a smoothly varying orthogonal projection matrix,
g^⋆ : ℝ^d →ℝ^d× d : p ↦ F(p)F^T(p),
where F : ℝ^d →ℝ^d× k is a smooth map s.t. F(p) is a rank k matrix whose columns form an orthonormal basis for 𝒟_p at any p∈ℝ^d. The map g^⋆ is called the cometric. If g^⋆(p) has full rank d at every p∈ℝ^d, then the map p↦ g^⋆(p)^-1 is called a Riemannian metric. We discuss relations between Riemannian and sub-Riemannian geometries below.
A basic intuition behind sub-Riemannian geometry is that, at each point p∈^d, 𝒟_p contains the allowed velocity vectors of a curve passing through p. If a curve γ : [0,1]→^d satisfies
d/dtγ(t) =: γ̇(t) ∈𝒟_γ(t)
for almost all t ∈ [0,1] it is called horizontal. This class of curves induces a distance metric on ^d, the Carnot-Carathéodory metric,
d^𝒟(p,q) = inf{ L(γ) |[ γ:[0,1] →^d is horizontal; γ(0) =p, γ(1) = q ]}∈_≥ 0∪{∞} ,
for any p,q ∈ℝ^d, where L(γ) := ∫_0^1 ‖γ̇(t) ‖ dt is the curve length functional. An important property of a sub-Riemannian geometry is whether any two points p,q can be connected by a horizontal curve, or, equivalently, whether d(p,q) is finite for all p,q∈ℝ^d. A sufficient condition for this is that 𝒟 is bracket-generating (cf. the Chow-Rashevski theorem, <cit.>, <cit.>). This means that, for all p∈ℝ^d, Lie_p(𝒟) equals ℝ^d, where Lie_p(𝒟) consists of the span at p of all 𝒟-valued vector fields and all of their iterated Lie brackets (see e.g. <cit.>). In this case, d^𝒟 induces the standard topology on ℝ^d.
§.§ Sub-Riemannian geodesics
We now turn to horizontal curves that are 'locally length-minimizing', i.e. any local perturbation of the curve increases its length. For our purposes, the most important class of such curves is called normal sub-Riemannian geodesics.
Normal geodesics are solutions to a system of equations on the cotangent bundle T^⋆^d, which, in our setting, can be identified with ^d×^d. A curve γ : [0, T] →^d is a normal geodesic if and only if it is the projection to ^d of a curve in T^⋆^d, ψ : [0,1] → T^⋆^d, that satisfies the sub-Riemannian Hamiltonian equations. Let H denote the sub-Riemannian Hamiltonian,
H : T^⋆^d →ℝ_≥ 0 : (p,η) ↦1/2η^T g_p^⋆η.
We will write H_p if we consider it as a function on T^⋆_p ^d only. The Hamiltonian equations are then given by
ṗ = ∂ H/∂η(p,η) = g_p^⋆η,
η̇ = -∂ H/∂ p(p,η).
A solution ψ(t) := (p(t), η(t)) with initial value (p_0, η_0) is called a normal extremal. The associated normal geodesic is the curve γ_p_0^η_0(t) := π(p(t), η(t)) = p(t), i.e. the projection of ψ to the first component ℝ^d. Notice that the horizontality of γ is apparent from the fact that g^⋆_p projects η to 𝒟_p in (<ref>). In the Riemannian case the Hamiltonian equations are equivalent to a system of ODE's on the tangent bundle called the geodesic equations. This parameterizes geodesics by their initial tangent vector instead of, as in the sub-Riemannian case, the initial cotangent vector. We end this section with a few facts about solutions to Hamilton's equations that we will need later on. Firstly, the Hamiltonian is conserved along solutions,
i.e. H(p_t,η_t) = H(p_0,η_0) for all t ∈ [0,T] (see e.g. <cit.>, Section 4.2.1). This implies
that a normal geodesic γ is a constant speed curve, since
‖γ̇(t) ‖ = ‖ g^⋆_p_tη_t‖ = √(2H(p_t,η_t)).
This further implies that
γ^η_0_p_0 has unit speed if η_0∈ H_p_0^-1(1/2), and therefore that its length is given by the duration of integration T.
Lastly, we will need the fact that the Hamiltonian equations are time-homogenous in the sense that, for any η_0 ∈ H^-1(1/2) and α > 0, γ^αη_0_p_0(t) = γ^η_0_p_0(α t) (<cit.>, Section 8.6).
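For readers who prefer code, the sketch below integrates the Hamiltonian equations above for a user-supplied smooth cometric g^⋆ with a fixed-step RK4 scheme; the integrator, step size and finite-difference gradient of H are our choices and are not meant to reproduce the implementation used later in the paper.

import numpy as np

def hamiltonian_step(p, eta, cometric, dt, eps=1e-6):
    # One RK4 step of pdot = g*(p) eta, etadot = -dH/dp, with H(p,eta) = 0.5 eta^T g*(p) eta.
    def rhs(p, eta):
        pdot = cometric(p) @ eta
        grad = np.zeros_like(p)
        for i in range(p.size):                 # central finite differences for dH/dp
            dp = np.zeros_like(p)
            dp[i] = eps
            Hp = 0.5 * eta @ cometric(p + dp) @ eta
            Hm = 0.5 * eta @ cometric(p - dp) @ eta
            grad[i] = (Hp - Hm) / (2 * eps)
        return pdot, -grad
    k1 = rhs(p, eta)
    k2 = rhs(p + 0.5*dt*k1[0], eta + 0.5*dt*k1[1])
    k3 = rhs(p + 0.5*dt*k2[0], eta + 0.5*dt*k2[1])
    k4 = rhs(p + dt*k3[0], eta + dt*k3[1])
    p_new = p + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    eta_new = eta + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return p_new, eta_new

def normal_geodesic(p0, eta0, cometric, T=1.0, n_steps=100):
    # The normal geodesic t -> gamma_{p0}^{eta0}(t); the last point is exp_{p0}(T*eta0).
    p, eta = np.asarray(p0, float), np.asarray(eta0, float)
    path = [p.copy()]
    for _ in range(n_steps):
        p, eta = hamiltonian_step(p, eta, cometric, T / n_steps)
        path.append(p.copy())
    return np.array(path)

# Sanity check: with the identity cometric (Euclidean case) geodesics are straight lines
path = normal_geodesic([0.0, 0.0], [1.0, 0.5], lambda p: np.eye(2))
print(path[-1])   # approximately [1.0, 0.5]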
§.§ The sub-Riemannian exp and log
The sub-Riemannian exponential map at p ∈ℝ^d maps a cotangent η∈ T_p^⋆^d ≅^d to the position at time 1 of the normal geodesic initialized by (p, η), i.e.
exp^𝒟_p : T^⋆_p ℝ^d →ℝ^d : η↦exp^𝒟_p(η) := γ_p^η(1).
The exponential map will also be denoted simply by exp. The time-homogeneity of the Hamiltonian equations mentioned in the previous section has two important consequences. Firstly, for α>0, it holds that exp_p(αη) = γ_p^η(α), so scaling η amounts to moving along a single normal geodesic; secondly, γ can be assumed to be unit speed parameterized, and therefore the length of the normal geodesic α↦exp_p(αη), α∈ [0,1], is given by √(2H(η)). In the case where this normal geodesic is a global, not just local, length minimizer between its endpoints p and y := exp_p(η), we get the formula
d^(p,y) = √(2H(p, η)).
§.§.§ Optimizing for the log
To compute the sub-Riemannian distance between two points, eq. (<ref>) suggests that one should invert the exponential map. If the exponential map at p is a diffeomorphism (thus invertible) around 0 ∈ T^⋆_p^d, its inverse is called the logarithmic map, defined by
log^𝒟_p : ℝ^d ⊃ U → O ⊂ T^⋆_p ℝ^d satisfying γ_p^{log^𝒟_p(y)}(1) = y
for some open sets U and O with p∈ U. However, such an open set U on which exp_p is a diffeomorphism only exists if rank(𝒟) = d (see <cit.> Prop. 8.40), in which case the geometry is Riemannian. A simple way to see this is that exp_p(H_p^-1(0)) = {p}, where H_p^-1(0) = 𝒟_p^⊥. In the sub-Riemannian case of rank(𝒟) < d we propose an approximate log map given as a solution to the following optimization problem, for p,y∈ℝ^d,
log_p(y) ∈argmin_η∈𝔸 ‖exp_p(η) - y ‖^2 + H(p, η),
where 𝔸 = T_p^⋆ℝ^d. This problem searches for the shortest normal geodesic between p and y. For reasons that will be explained in Section <ref>, we will also be interested in the case of 𝔸 = 𝒟_p^⋆⊂ T_p^⋆ℝ^d, the metric dual of 𝒟_p (Equation <ref> below). Under certain assumptions on 𝒟, notably bracket-generatingness, the image set exp_p(T^⋆_pℝ^d) is dense in ℝ^d even when rank(𝒟) < d <cit.>, implying that the error in (<ref>) can be made arbitrarily small. The problem of finding shortest horizontal curves between points is studied in non-holonomic control theory (see e.g. <cit.>). In our current implementations, however, we find (local) solutions via a minimization algorithm based on BFGS <cit.> and automatic differentiation of the exponential map, which is possible using e.g. the Python library Jax <cit.>.
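A minimal sketch of this minimization is given below; exp_map and hamiltonian are user-supplied callables (for instance the principal-subbundle versions of Section <ref>), and the Euclidean example at the end only checks the mechanics: because of the Hamiltonian penalty its analytic minimizer is 2(y - p)/3 rather than y - p.

import numpy as np
from scipy.optimize import minimize

def sr_log(p, y, exp_map, hamiltonian, eta0=None):
    # Approximate log_p(y): covector of the shortest normal geodesic found from p towards y.
    if eta0 is None:
        eta0 = np.zeros_like(p)
    def objective(eta):
        return np.sum((exp_map(p, eta) - y)**2) + hamiltonian(p, eta)
    res = minimize(objective, eta0, method="BFGS")
    return res.x

# Mechanics check on the full-rank Euclidean case: exp_p(eta) = p + eta, H = 0.5*|eta|^2.
# The analytic minimizer of |p + eta - y|^2 + 0.5*|eta|^2 is eta = 2(y - p)/3.
p, y = np.zeros(3), np.array([1.0, 2.0, 0.5])
eta = sr_log(p, y, lambda p, e: p + e, lambda p, e: 0.5 * e @ e)
print(eta, "  distance estimate sqrt(2H):", np.linalg.norm(eta))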
§.§ The subbundle induces a foliation
If a bracket-generating subbundle 𝒟 (i.e. Lie(𝒟) = Tℝ^d) represents one extreme for subbundles on ℝ^d, then its opposite is that of a constant rank integrable subbundle; that is, a constant rank subbundle satisfying Lie(𝒟) = 𝒟. An important property of integrable subbundles is that they possess integral manifolds, which are immersed submanifolds L ⊂ℝ^d such that T_p L = 𝒟_p for all points p∈ L. Given a constant rank integrable subbundle 𝒟, the global Frobenius Theorem tells us that ℝ^d is foliated, or partitioned, by the collection of all maximal integral manifolds of 𝒟 - each integral manifold is called a leaf and has dimension equal to the rank of 𝒟 (see Lee, Chapter 19 for full details on integrable subbundles, there called involutive distributions, and the Frobenius Theorem). The geometry induced by 𝒟 on a leaf L ⊂ℝ^d is Riemannian since 𝒟_p is the full tangent space at each point p ∈ L, implying that all curves on L are horizontal; therefore the sub-Riemannian geodesic equations are identical to the Riemannian geodesic equations. If a subbundle 𝒟 is neither bracket-generating (Lie(𝒟) = Tℝ^d) nor integrable (Lie(𝒟) = 𝒟), then the subbundle Lie(𝒟) ⊂ Tℝ^d is integrable and foliates ℝ^d by its integral manifolds, each of dimension rank(Lie(𝒟)). The induced geometry on each integral manifold is sub-Riemannian (not all curves are horizontal).
In relation to problem A, mentioned in the introduction, the previous discussion implies that the induced distance metric is finite, d^𝒟(p,q) < ∞, for all points p,q in the same leaf, whereas it is infinite for points belonging to different leaves - a horizontal curve is constrained to move within a single leaf.
In relation to problem B, we are interested in generating a
k-dimensional submanifold of ^d from a rank k subbundle whose integrability
or bracket generation is a priori unknown. In Proposition 3.1 below we show how
this can be done via sub-Riemannian geometry. The generated submanifold is tangent to 𝒟 in 'radial' directions, but not in all directions, as will be explained
below.
§.§ The exponential image of the dual subbundle
The content of the previous sections implies the following. If 𝒟 is integrable, then there exists an open set U⊂𝒟^⋆_p s.t. M := exp_p(U) is a k-dimensional embedded submanifold of ℝ^d whose tangent space at every point q∈ M equals 𝒟_q. In this case, exp_p is a diffeomorphism from U to this submanifold. On the other hand, if 𝒟 is not integrable, there exists no submanifold that is tangent to 𝒟, in particular exp_p(U) does not satisfy this. However, in the following we show that exp_p(U) is still a k-dimensional embedded submanifold.
Let
𝒟^⋆_p := {⟨ v, ·⟩| v∈𝒟_p}⊂ T^⋆_pℝ^d
be the dual space of 𝒟_p w.r.t. the standard inner product ⟨ v, u⟩ := v^Tu. This simply means that 𝒟^⋆_p consists of the tangent vectors (column vectors) in 𝒟_p considered as covectors (row vectors). Thus, 𝒟^⋆_p is a k-dimensional subspace of T^⋆_pℝ^d which can be identified with 𝒟_p⊂ Tℝ^d.
Let μ∈^d be arbitrary.
There exists an open subset C_μ⊂𝒟_μ^⋆ containing 0 such that exp_μ^𝒟 restricted to C_μ is a diffeomorphism onto its image. That is,
M_μ^𝒟 := exp_μ^𝒟(C_μ) ⊂ℝ^d
is a smooth
k-dimensional embedded submanifold of ℝ^d containing μ.
It holds that T_p(M^𝒟_μ) = 𝒟_p at p=μ, but at a general p ∈ M^𝒟_μ these spaces are different if 𝒟 is not integrable. They need not even be 'close', as can be seen in e.g. the Heisenberg group where exp_0^𝒟(C_0) is the xy-plane, to which the Heisenberg subbundle is almost orthogonal at certain points p. But M^𝒟_μ is 'radially horizontal', in the sense that it is the union of normal geodesics from μ each of which is horizontal w.r.t. 𝒟. In particular, if we assume that C_μ is convex and let ∂C_μ⊂𝒟^⋆_μ denote its boundary, then
exp_μ^𝒟(C_μ) = {γ_μ^η(t) |η∈∂C_μ, t∈ [0,1]},
where each geodesic t ↦γ_μ^η(t) is tangent to 𝒟.
Note that, since the exponential map restricted to C_μ is a diffeomorphism, the log-optimization problem (<ref>) with 𝔸 = 𝒟_p^⋆ has a unique solution for p = μ and any y ∈ M^𝒟_μ.
§ SUB-RIEMANNIAN GEOMETRY OF THE PRINCIPAL SUBBUNDLE
In this section, we present a sub-Riemannian (SR) structure on ^d based on local PCA's, namely, the SR structure determined by the principal subbundle.
Moving horizontally with respect to the principal subbundle means to move within a k-dimensional subspace of maximum local variation at each
step. Therefore, geodesics that are horizontal w.r.t. this structure follow the point cloud, and the associated exp and log maps can be used for representing the
data. The image of the dual subbundle under the exponential map, described in Proposition
<ref> above, will be called a principal submanifold when the principal subbundle is used. Such a submanifold approximates the data for well-chosen hyperparameters. This is described in Section <ref> where we also give an algorithm to compute it. Furthermore, we discuss the use of the log optimization problem (<ref>) for giving a representation of the observations in ^k (Section <ref>) and for computing distances between observations (Section <ref>).
§.§ Properties of the sub-Riemannian structure
The sub-Riemannian structure that we will use to model the data is the one determined by the principal subbundle ℰ^k, α, also denoted simply by ℰ.
Proposition <ref> about smoothness of the subbundle implies smoothness of the cometric g^⋆. For any p∈ℝ^d ∖𝒮_α,k the cometric can be represented as g_p^⋆ = F(p)F(p)^T ∈ℝ^d× d, where F(p) = [e_1(p), …, e_k(p)] is a matrix whose columns are the first k eigenvectors of the weighted second moment Σ_α(p) (Definition <ref>).
We know that ℰ is of constant rank k, but we do not know if Lie(ℰ) is of constant rank, let alone if ℰ is bracket-generating (i.e. rank(Lie(ℰ)) = d).
Under the manifold hypothesis, in the limit of zero noise and the number of observations going to infinity, the convergence result of <cit.> (Theorem B.1) suggests that the subbundle is everywhere tangent to a submanifold and thus integrable.
§.§ Computing geodesics
We compute geodesics w.r.t. the chosen sub-Riemannian structure by numerically integrating the sub-Riemannian Hamiltonian equations (<ref>), see Appendix <ref> for notes on the implementation. In <cit.>, Theorem 2.4, formulas are given for the derivatives of eigenvector fields. This enables computation of derivatives of the Hamiltonian,
H(p,η) = 1/2η^T g_p^⋆η
= 1/2η^T F(p)F(p)^T η
= 1/2η^T [e_1(p), …, e_k(p)] [e_1(p), …, e_k(p)]^T η,
via automatic differentiation libraries such as Jax <cit.>. The formulas in <cit.> hold under the assumption that the first k+1 eigenvalues, λ_1(p), …, λ_k+1(p), are distinct (cf. Lemma <ref>). Note that our basic assumption on the observations is that they are well approximated locally by a k-dimensional linear space, implying that the first k eigenvalues are relatively close, possibly equal. Two comments on this: 1. Using the results in <cit.> (see also Proposition <ref> and its proof), it is possible to compute derivatives of the Hamiltonian under the milder assumption of only λ_k(p) and λ_k+1(p) being distinct - however, in practice we have not had the need to pursue this. 2. Since the differences between λ_1,…,λ_k are likely to be relatively small, the ordering and rotation of the eigenvectors is effectively random. However, this does not affect the Hamiltonian equations, since the Hamiltonian depends only on the cometric, a projection matrix, which is invariant to rotations and permutations of the basis F(p) within _p.
Figure <ref> illustrates sub-Riemannian geodesics with respect to the metric induced by two different point clouds. The surfaces (principal submanifolds) presented in figures <ref> and <ref> are likewise composed of many such geodesics, cf. the next section.
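To make the automatic-differentiation route concrete, the following JAX sketch builds the Hamiltonian of the principal subbundle directly from the local second moment and integrates the Hamiltonian equations with a plain explicit Euler scheme. The integrator and step size are our choices; as discussed above, the gradient through the eigendecomposition is only well defined when λ_k(p) > λ_k+1(p).

import jax
import jax.numpy as jnp

def make_hamiltonian(X, alpha, k):
    # H(p, eta) = 0.5 * eta^T F(p) F(p)^T eta, with F(p) the top-k eigenvectors of Sigma_alpha(p).
    def H(p, eta):
        w = jnp.exp(-0.5 * jnp.sum((X - p)**2, axis=1) / alpha**2)
        w = w / jnp.sum(w)                       # weights recomputed at p (cheaper variant)
        m = w @ X                                # local mean
        Sigma = (X - m).T @ (w[:, None] * (X - m))
        _, vecs = jnp.linalg.eigh(Sigma)         # ascending eigenvalues
        F = vecs[:, -k:]                         # top-k eigenvectors span the fibre at p
        return 0.5 * jnp.sum((F.T @ eta)**2)
    return H

def sr_exp(H, p0, eta0, n_steps=100):
    # exp_{p0}(eta0): explicit Euler integration of pdot = dH/deta, etadot = -dH/dp.
    dHdp = jax.grad(H, argnums=0)
    dHdeta = jax.grad(H, argnums=1)
    p, eta, dt = p0, eta0, 1.0 / n_steps
    for _ in range(n_steps):
        p, eta = p + dt * dHdeta(p, eta), eta - dt * dHdp(p, eta)
    return p

Replacing the Euler step by a higher-order scheme such as the RK4 step sketched in Section <ref> improves accuracy at little extra cost.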
§.§ Principal submanifolds (Problem B)
As the first use of principal subbundles, we define the principal submanifold from a base point μ∈^d ∖𝒮_α, k, given a set of observations in ^d. This choice of data representation implicitly assumes that the data is locally well-described by a submanifold, i.e. the 'manifold hypothesis'.
Let {x_1, …, x_N}⊂^d be a set of observations. Let μ∈^d ∖𝒮_α, k be a chosen base point, let α be the kernel range and let k ∈{1, …, d-1} be the rank of the principal subbundle, ℰ = ℰ^k, α⊂ T^d. Let ℰ^⋆_μ be the dual subbundle at μ, and B_r ⊂ℰ^⋆_μ a k-dimensional open ball of radius r containing 0. The principal submanifold of radius r is given by
M^k_μ(r) := exp^ℰ_μ(B_r) ⊂ℝ^d,
We will assume that r is sufficiently small for M^k_μ(r) to actually be a submanifold, cf. Proposition <ref>.
If we write simply M^k_μ, we will assume that r takes the largest such value.
Algorithm <ref> describes how to compute a point set
representation of a principal submanifold, up to arbitrary resolution. For hyperparameters, μ, k, α
(the base point, dimension and range, respectively),
the principal submanifold, M^k_μ, is an estimate of the true underlying submanifold, M, locally around μ. As described in Section <ref>, M^k_μ cannot be expected to be exactly tangent to ℰ since ℰ might not be integrable. However, since ℰ^k, α approximates the tangent spaces of the true submanifold, our expectation is that the subbundle is 'close' to being integrable and therefore that the difference between ℰ_p and T_p(M^k_μ) is small for p∈ M^k_μ. The approximation M^k_μ≈ M comes with the following guarantee: if μ∈ M and the principal subbundle contains the true tangent spaces to M around μ, then the principal submanifold is an open subset of the true submanifold M. In particular, the ball B_r ⊂ℰ^⋆_μ⊂ T^⋆_μ M ≅ℝ^k is a (normal) coordinate chart for M. Figure <ref> illustrates the effect of noise on the geodesics, and therefore on the principal submanifold, for points distributed around the unit sphere. In the noiseless case, Figure <ref> a), the computed geodesic paths are identical to the exact Riemannian geodesics on the sphere, up to numerical error, and the resulting principal submanifold is thus identical to the sphere (the mean norm of each generated point is 0.9992 with standard deviation 0.0014). In Figure <ref> b) isotropic Gaussian noise in ℝ^3 with marginal standard deviation σ = 0.1 has been added to the observations on the sphere. In this case the geodesics still evolve very close to the sphere (the mean norm of each generated point is 1.0299 with standard deviation 0.0162), but they start to cross after some integration steps, so that the manifold property of M^k_μ(r) seems to hold for a smaller value of the radius r compared to the noiseless case.
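A point-set version of this construction can be sketched as follows; this is our own minimal variant, not a reproduction of Algorithm <ref>, and it reuses the local_pca, make_hamiltonian and sr_exp helpers sketched earlier. Since each covector η = ρ F(μ)u with ‖u‖ = 1 satisfies √(2H(μ,η)) = ρ, the grid below covers the ball B_r.

import numpy as np
import jax.numpy as jnp

def principal_submanifold_points(X, mu, alpha, k, r, n_dirs=64, n_radii=10, seed=0):
    # Push a polar grid on the ball B_r in E*_mu through the sub-Riemannian exp.
    H = make_hamiltonian(jnp.asarray(X), alpha, k)
    _, _, F = local_pca(mu, X, alpha, k)               # columns span E_mu (= E*_mu here)
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_dirs, k))
    U /= np.linalg.norm(U, axis=1, keepdims=True)      # unit directions in R^k
    pts = [np.asarray(mu, float)]
    for u in U:
        for rho in np.linspace(r / n_radii, r, n_radii):
            eta = rho * (F @ u)                        # covector in E*_mu of norm rho
            pts.append(np.asarray(sr_exp(H, jnp.asarray(mu), jnp.asarray(eta))))
    return np.array(pts)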
§.§.§ Relation to principal flows
We end this subsection with a discussion on the relation between a principal submanifold for k = 1 and the principal flow, described in <cit.>. For k = 1, integrating the Hamiltonian equations (<ref>) yields the flow of the first eigenvector field e_1 starting from μ. This is called the principal flow in <cit.>, but the methods differ in important ways. Firstly, the principal flow at p is based on a second moment which is centered around p, not at the local mean around p. The span of the first eigenvector of such an uncentered second moment will be 'orthogonal' to the point cloud when evaluated at points outside of it. This causes the principal flow to stray away from the observations if it reaches such a point. As opposed to this, the first eigenvector of the centered second moment stays tangential to the point cloud when evaluated outside of it, as illustrated by the pink curve in Figure <ref> a). This behaviour arguably makes it more stable, see simulation results in section <ref> and Figure <ref>. Secondly, to handle the fact that eigenvectors are determined only up to their sign, the principal flow is computed by solving a variational problem and integrating an associated system of ODE's. This system of ODE's has to be integrated for a range of candidate values of a Lagrange multiplier, in the end choosing the value for which the corresponding curve minimizes an energy functional. As opposed to this, we formulate the problem as a Hamiltonian system of ODE's which is invariant to the sign of the vector field (only the corresponding rank 1 subbundle matters), removing the need for the variational formulation and the ODE integration for multiple values of a Lagrange multiplier. It is this reformulation of principal flows as solutions to a set of geodesic (Hamiltonian) equations that also allows us to generalize the concept to higher dimensions.
§.§.§ Projection to M_μ^k
An observation x_i ∈^d can be projected to M_μ^k by
π_M_μ^k(x_i) = exp^ℰ_μ(log_μ(x_i))
where log_μ(x_i) is a solution to (<ref>) with search space 𝔸 = ℰ^⋆_μ. Alternatively, given a discrete representation M̂_μ^k of M_μ^k, computed using Algorithm <ref>, one can use the discrete projection π_M̂_μ^k(x_i) := argmin_p ∈M̂_μ^k‖ x_i - p ‖, which can be solved numerically as a Euclidean 1-nearest neighbours problem.
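The discrete projection is a plain Euclidean nearest-neighbour query over such a point set; a minimal sketch (names are ours):

import numpy as np

def project_discrete(x, M_points):
    # 1-nearest neighbour of x in the point-set representation of M^k_mu
    i = int(np.argmin(np.sum((M_points - x)**2, axis=1)))
    return M_points[i], i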
§.§ Representation of observations in ^k (Problem C)
The ball B_r ⊂ℰ^⋆_μ≅ℝ^k forms a coordinate chart for the principal submanifold, i.e. any point p∈ M^k_μ(r) can be represented as p̅ := exp^-1_μ(p) ∈ℝ^k. It behaves like a so-called normal chart, in the sense that the SR distance between the base point μ and p∈ M^k_μ is preserved, d^ℰ(μ, p) = ‖p̅‖, while the distances between arbitrary points p, q ∈ M_μ^k are distorted in a way that depends on the curvature of M^ℰ_μ. If {x_1, …, x_N} are observations distributed around M^k_μ, then the projections π_M_μ^k(x_i) ∈ M^k_μ, i=1… N, can be represented in this chart by solving the log problem (<ref>) with 𝔸 = ℰ^⋆_μ, yielding lower dimensional representations x̅_i := log_μ(π_M_μ^k(x_i))∈ℝ^k, i =1..N. Computing this is less complex than it looks; in fact, solving the projection problem (either the continuous or the discrete version, c.f. Section <ref>) already involves solving the log-problem, so computing a projection also yields the representation in ℝ^k. See Figure <ref> and Section <ref> describing a 2D representation of the S-surface embedded in ℝ^100.
§.§ Computing the SR distance between points (Problem A)
As discussed in Section <ref>, we can combine Equations (<ref>) and (<ref>) to approximate the SR distance between two points x,y ∈^d ∖𝒮_α, k by
d^ℰ(x, y) ≈√(2 H(log_x(y))),
with log search space 𝔸=T_x^⋆ℝ^d. As mentioned, we cannot expect to find the exact SR distance, i.e. the length of the globally shortest curve joining x and y, even in the case of a bracket-generating subbundle for which d^ℰ is in fact finite for all x,y. When the points are observations, i.e. x,y ∈{x_i}_i =1, …, N, this might not be desirable either since the error in the log minimization problem (<ref>) can be
interpreted as an effect of random noise.
See Section <ref> for a numerical evaluation of estimated distances d^ based on a dataset in ^50.
Figure <ref> illustrates a computation of log_x(y) based on a dataset distributed around the S-surface. The base point, x, is the blue dot and the target point, y, is the pink dot. The red curve is the geodesic t ↦exp^ℰ_x(t·log_x(y)), t∈ [0,1], the length of which constitutes our estimate of the distance between x and y. As expected, the endpoint exp^ℰ_x(log_x(y)) doesn't match y exactly. On Figure <ref>, the color gradient and concentric circles on the face illustrate the SR distance to the base point on the nose.
§.§ Hyperparameters
The kernel range α and the dimension k are hyperparameters that are common to many methods and there is a significant body of literature about how to select them. See Appendix <ref> for our comments and references. Regarding the base point μ∈^d of a principal submanifold, we suggest to use a local mean around a well-chosen observation x_0. Which particular x_0 will be application specific, but a general purpose option is a within-sample Fréchet mean,
μ̂∈argmin_μ∈{x_i}_i=1..N1/N∑_i=1^N d(μ, x_i),
where d is either the Euclidean distance or d^ℰ of the principal subbundle.
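A brute-force sketch of this base-point heuristic (O(N^2) distance evaluations; the distance is pluggable, so the sub-Riemannian d^ℰ can be substituted for the Euclidean default):

import numpy as np

def within_sample_frechet_mean(X, dist=None):
    # The observation minimising the mean distance to all other observations.
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    costs = [np.mean([dist(mu, x) for x in X]) for mu in X]
    return X[int(np.argmin(costs))]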
§ GENERALIZATION TO OBSERVATIONS ON A RIEMANNIAN MANIFOLD
In this section, we generalize the framework of principal subbundles to the setting where the observations are points on an a priori known Riemannian manifold. A numerical application of the method to such data is presented in Section <ref>. These two sections assume a deeper knowledge of differential geometry than elsewhere, but they can be skipped without loss of continuity by the reader who wish to focus on the case of Euclidean valued data. It turns out that the formulation of principal subbundles for Euclidean valued data, given above, is based only on operations that generalize naturally to the setting of manifold valued data, as we show below.
§.§ Context: geometric statistics
We now assume that {x_i}_i=1… N are points on an a priori known smooth manifold 𝒩 of dimension d < ∞,
equipped with a known Riemannian metric h. This is a generalization of the theory presented above, where 𝒩 was ^d and h was the Euclidean metric. Our aim is now to find a lower dimensional geometric structure (e.g. a submanifold) within this given manifold 𝒩.
The field of statistics and machine learning for manifold-valued data is called geometric statistics <cit.>. An intuitive example of such data is observations on a surface in ^3, such as the sphere. More abstract examples are shapes represented as sets of landmarks, e.g. in Kendall's shape space (see <cit.> and <cit.> for an application) or an LDDMM landmark manifold <cit.>. Other examples are provided in the field of directional statistics <cit.> and image processing via the manifold of SPD matrices (e.g. <cit.>, Chapter 3).
Within the field of geometric statistics, several methods have been proposed to find a lower dimensional submanifold M⊂𝒩 approximating the observations in 𝒩. Important methods are Principal Geodesic Analysis (PGA) <cit.>, Principal Nested Spheres <cit.>) and Barycentric Subspace Analysis <cit.>. A basic method is tangent PCA, which consists of mapping the observations to a tangent space at a chosen base point μ∈𝒩 via the Riemannian logarithm and performing Euclidean PCA in this linear representation. This method is not sensitive to the curvature of neither 𝒩 nor of the dataset. Tangent PCA can be seen as a linear approximation of PGA (as discussed in <cit.>), a method which is more sensitive to the curvature of 𝒩 but still not sensitive to the curvature of the dataset, as we now describe. Let μ∈𝒩 be a well-chosen base point, e.g. the Fréchet mean. The collection of geodesics initialized by tangent vectors in a k-dimensional subspace Δ_k ⊂ T_μ𝒩 form a k-dimensional submanifold of 𝒩, given as exp^h_μ(Δ_k), where exp^h is the Riemannian exponential map of (𝒩,h). Starting from k=1 and adding subsequent dimensions one at a time, the optimal subspace Δ_k at each step is defined to be the minimizer of the geodesic distance from exp^h_μ(Δ_k) ⊂𝒩 to the observations. This optimization is computationally intensive, to the point of being infeasible for even fairly simple manifolds and datasets - no publicly available implementation of PGA exists to this date (for work in this direction, see e.g. <cit.>). Furthermore, geodesics of the ambient space (𝒩,h), which forms the approximating submanifold exp^h_μ(Δ_k), is a relatively inflexible family of curves - they are the generalization of straight lines to a manifold.
A principal submanifold constructed from a principal subbundle on 𝒩 can be seen as a locally data-adaptive combination of tangent PCA and PGA. We compute local tangent PCA's to construct the principal subbundle ℰ^k,α of the tangent bundle T𝒩. This determines a data-dependent sub-Riemannian metric and thus sub-Riemannian geodesics on 𝒩, with which we can approximate the data. That is, compared to PGA, the geodesics forming the principal submanifold are not those of the ambient Riemannian manifold (𝒩, h), but those of an estimated sub-Riemannian structure on 𝒩. Our approximating submanifold is exp^ℰ_μ(Δ_k)⊂𝒩, similar to PGA, except that the exponential is now the sub-Riemannian exponential determined by the principal subbundle and Δ_k is the metric dual of ℰ_μ, the principal subbundle subspace at μ. Note that by doing local PCA's (i.e. solving many simple, local least-squared-error problems) we remove the need for the expensive 'global' optimization over the subspace Δ_k.
§.§ Sub-Riemannian structures on a general smooth manifold
This section introduces sub-Riemannian geometry on a smooth manifold 𝒩
of dimension d,
not necessarily ^d. A rank k sub-Riemannian structure on 𝒩 is determined by a rank k subbundle 𝒟 of T𝒩 and a metric tensor g on 𝒟. We will assume that the sub-Riemannian metric tensor g is the restriction h|_𝒟 of a given Riemannian metric tensor h on T𝒩 to 𝒟, i.e. g_x(u,v) = h_x(u,v) for all x ∈𝒩 and u,v ∈𝒟_x. The pair (𝒟, g) is equivalent to a rank k cometric tensor g^⋆ on T^⋆𝒩. The triple (𝒩, g, 𝒟), or equivalently the pair (𝒩, g^⋆), is called a sub-Riemannian manifold. The version of sub-Riemannian geometry we described and used in the previous sections corresponds to 𝒩 = ^d with the ambient Riemannian metric h being the Euclidean metric.
In this general setting, a curve γ : [0, T] →𝒩 is still called horizontal if its velocities satisfy γ̇_t ∈𝒟_γ_t⊂ T_γ_t𝒩 for all t ∈ [0,T]. And this again induces the Carnot-Carathéodory distance metric d^𝒟 (equation <ref>) on 𝒩. The discussion in Section <ref> about integrability and foliations carries over directly; the subbundle partitions 𝒩 into a foliation of submanifolds of dimension k, and the distance metric d^𝒟(x,y) is finite only between points on the same leaf. The Hamiltonian equations, exp and log are also defined exactly as in Section <ref>, and the relationship between the sub-Riemannian distance and the Hamiltonian (Eq. (<ref>)) still holds. One difference from the previous Euclidean setting, however, is that the cometric cannot be expressed as a projection matrix, as we did in Equation (<ref>).
Therefore it is more convenient to represent the Hamiltonian in the following equivalent way (see <cit.>, Proposition 4.22 for a derivation),
H : T^⋆𝒩→_≥ 0 : H(x,η) = 1/2∑_i=1^k (η(f_i(x)))^2,
where {f_i}_i=1..k is an orthonormal frame for 𝒟 w.r.t. g and η(f_i(x)) denotes the cotangent η∈ T^⋆_x 𝒩 evaluated at the tangent f_i(x) ∈ T_x 𝒩. The derivatives of the Hamiltonian that enter into the Hamiltonian equations can be expanded in a way that is suitable for implementation (see Equation (4.38) in <cit.>).
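For concreteness, a minimal sketch of evaluating this Hamiltonian is given below; it assumes a callable frame_at returning the orthonormal frame of the subbundle at a point (e.g. the principal subbundle eigenvectors), and the coordinates of η are taken w.r.t. the dual basis. The names are illustrative.

```python
import numpy as np

def hamiltonian(x, eta, frame_at):
    """H(x, eta) = 1/2 * sum_i eta(f_i(x))^2 for an orthonormal frame f_1..f_k.

    x        : (d,) point coordinates.
    eta      : (d,) covector coordinates (w.r.t. the dual basis).
    frame_at : callable x -> (k, d) array whose rows are the frame vectors.
    """
    F = frame_at(x)        # (k, d), orthonormal w.r.t. the metric g
    pairings = F @ eta     # eta evaluated on each frame vector
    return 0.5 * np.sum(pairings ** 2)
```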
To construct a k-dimensional submanifold from a k-dimensional non-integrable subbundle we still need a result such as Proposition <ref>, which luckily holds in this general setting - cf. the proof in Appendix <ref>.
The result carries over verbatim, with the dual subbundle now being the dual w.r.t. our (general) Riemannian metric h on 𝒩, i.e.
𝒟^⋆_x := {h_x(v, ·) | v∈𝒟_x}⊂ T_x^⋆𝒩.
§.§ Principal subbundles on a Riemannian manifold
We now generalize local PCA to the setting of observations on a Riemannian manifold. In this setting, local PCA is exchanged for local tangent PCA, by which we mean the extraction of eigenvectors from the following second moment.
Let {x_1,…,x_N} be observations on a Riemannian manifold (𝒩, h). Let K_α : ℝ_≥ 0→ℝ_≥ 0 be a smooth, decaying kernel function with range parameter α > 0.
At a point p∈𝒩, we denote by log^h_p(x_i)
the Riemannian log of the observation point x_i w.r.t. metric h.
The weighted tangent second moment is defined by
Σ_α(p) =
∑_i=1^N w_i(p) (log^h_p(x_i) ⊗log^h_p(x_i)),
with normalized weight functions
w_i : 𝒩→_≥ 0 : p↦ w_i(p) = K_α(‖log^h_p(x_i) ‖_p)/∑_j=1^N K_α(‖log^h_p(x_j) ‖_p).
Recall that ‖log^h_p(x_i) ‖_p= d^h(p, x_i), since the length of the shortest geodesic from p to x_i is precisely the length of the vector in T_p 𝒩 that exponentiates to x_i.
For any v,u ∈ T_p 𝒩, the tensor product v ⊗ u can be identified with a linear map on T_p 𝒩 (an endomorphism), whose coordinate representation is a d × d matrix, see Lemma <ref>. There is some vagueness about the exact form of this coordinate representation in the geometric statistics literature, so we give a detailed proof in Appendix <ref>.
lemmalemTensorCoordinates
Let (𝒩,h) be a Riemannian manifold, and u,v∈ T_p𝒩. Given a choice of basis for T_p 𝒩, the tensor v ⊗ u ∈ T_p𝒩⊗ T_p𝒩 can be expressed in coordinates as
vu^T h_p ∈^d× d,
where u,v ∈^d× 1 are the coordinate vectors of u and v, and h_p ∈^d× d is the matrix of the Riemannian metric, all represented w.r.t. the chosen basis.
In various sources, the term h_p in Eq. (<ref>) is omitted without explanation. We stress that this is only correct if the chosen coordinate representation of the metric is the identity matrix, e.g. if the chart is a normal chart - which is not necessarily the case in numerical computations. Sometimes, this is ensured by changing the basis to an orthonormal one, found by e.g. Cholesky decomposition of the cometric, before computing vu^T.
This is, however, much more expensive than simply using the general, basis independent, expression (<ref>).
Thus, when computing the tangent second moment matrix (e.g. when computing tangent PCA), the covariance matrix w.r.t. some arbitrary basis a should be computed as
[Σ_α(p)]_a =
∑_i=1^N w_i(p) [log^h_p(x_i)]_a ( [log^h_p(x_i)]_a )^T [h_p]_a.
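A small NumPy sketch of this coordinate formula, assuming the logarithms log^h_p(x_i) have already been computed in the chosen basis; names and array shapes are our own conventions.

```python
import numpy as np

def weighted_tangent_second_moment(logs, w, h_p):
    """[Sigma_alpha(p)]_a = sum_i w_i * v_i v_i^T h_p, with v_i = [log_p(x_i)]_a.

    logs : (N, d) array; i-th row is log_p(x_i) in basis a.
    w    : (N,) normalized kernel weights.
    h_p  : (d, d) metric matrix in basis a (identity for an orthonormal basis).
    """
    # sum_i w_i v_i v_i^T, then right-multiply by the metric matrix
    return (logs * w[:, None]).T @ logs @ h_p
```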
As in the case of Euclidean valued data, we want the principal subbundle of T𝒩 to be based on local PCA's centered around local means. For that purpose, the principal subbundle subspace at point p will be based on the eigendecomposition of the weighted second moment at the weighted mean m(p) defined below:
Let {x_1,…,x_N}⊂𝒩 be observations on a Riemannian manifold (𝒩, h), let the normalized weight functions w_i
be defined as in (<ref>), and let exp^h_p be the Riemannian exponential map at p w.r.t. metric h. The weighted tangent mean map is defined by
m : 𝒩→𝒩 : p ↦ m(p) = exp^h_p(
∑_i=1^N w_i(p) log^h_p(x_i)).
The eigenvectors of Σ_α(m(p)) belong to the tangent space at m(p), not the tangent space at p.
Thus, the extracted eigenvectors need to be mapped back to the tangent space at p, which we do by parallel transport, as described in the definition below.
The principal subbundle on (𝒩, h) can only be defined at points p such that neither p nor m(p) lies in the cut locus of any observation or of each other, since we need to compute the corresponding logarithms. We therefore define the set of singular points as follows,
S^'_α, k = {p ∈𝒩| p, m(p) ∈⋃_q ∈{x_1, …, x_N, p}Cut(q) or λ_k(m(p)) = λ_k+1(m(p))},
where λ_i(m(p)) is the i'th eigenvalue of Σ_α(m(p)) of Definition <ref>.
Let λ_1(q) ≥…≥λ_d(q) be the eigenvalues of Σ_α(q), at q ∈𝒩, with associated eigenvectors e_1(q),…, e_d(q).
Let Π_x^y(v) denote parallel transport of v ∈ T_x 𝒩 to T_y 𝒩 along the length-minimizing geodesic between x and y. Then the principal subbundle ℰ^k, α⊂ T 𝒩 is defined as
ℰ^k, α = {(p,v) | p ∈𝒩∖𝒮^'_α, k,
v∈span{Π_m(p)^p e_1(m(p)),…, Π_m(p)^p e_k(m(p))}}
If (𝒩, h) is Euclidean space, the above definition reduces to the Euclidean Definition <ref> since (log^h_q(x_i) ⊗log^h_q(x_i)) = (x_i - q)(x_i - q)^T and Π_q^p is the identity map for q ∈^d.
The above construction of the subbundle subspace at p can be approximated by using the Euclidean definition in the tangent space at p, i.e. by letting ℰ^k, α_p at p∈𝒩 be the span of eigenvectors of Σ_α(0) computed from vectors {log^h_p(x_i)}_i=1… N⊂ T_p 𝒩≅^d, where Σ_α is the Euclidean second moment from Definition <ref>. In this way, only N log's have to be computed, instead of 2N (see Algorithm <ref>), and the parallel transport operation is omitted. Note that the experiments in Section <ref> use Definition <ref>, not the described approximation.
Algorithm <ref> describes how to compute the principal subbundle from data on a Riemannian manifold (𝒩,h).
As in the Euclidean case, we prove that the principal subbundle on (𝒩, h) is smooth at all points where it is defined.
propositionpropSmoothPsManif
The principal subbundle, defined on 𝒩∖ S^'_α, k, is smooth.
§.§ Computing with a principal subbundle on a Riemannian manifold
Given a dataset {x_1,…,x_N}⊂𝒩, the associated principal subbundle determines a sub-Riemannian structure on 𝒩, namely (𝒩, h|_, ). Using this structure, we can integrate the associated sub-Riemannian Hamiltonian equations in the same way as described in section <ref>, except that we use the expression (<ref>) for the Hamiltonian. This gives us sub-Riemannian exponential and logarithmic maps on 𝒩, so that problems A, B and C can be solved on a general Riemannian manifold, in exactly the same way as in the Euclidean case, described in sections <ref>-<ref>.
A principal submanifold is computed in the same way as in the Euclidean case (Algorithm <ref>). It assumes that
we have a representation of the manifold in a chart. See the pseudocode for our implementation on the sphere in the Appendix.
Due to the centering step, computing the subbundle at a point p ∈𝒩 requires solving the parallel transport equation and computing 2N log maps: N logs between the observations and the point p (lines 1-3), and N logs between the observations and the local mean around p (lines 5-7). See Remark <ref> for an approximation requiring only N log computations and no parallel transport. The run time of the algorithm thus depends heavily on the run time of the log map, or an approximation thereof, on the given Riemannian manifold. Examples of manifolds with computationally cheap log maps are hyperspheres, Kendall shape space, Grassmann manifolds, and SPD matrices. See the Python library Geomstats <cit.> for implementations of various manifolds, including efficient log maps.
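To make the construction concrete, here is a hedged sketch of computing the principal subbundle subspace at a single point on a Riemannian manifold, assuming user-supplied callables for the Riemannian log, exp, parallel transport and metric matrix (for instance wrapping a library such as Geomstats); it follows the definition above with a Gaussian kernel, and all names are illustrative.

```python
import numpy as np

def subbundle_at(p, X, k, alpha, log, exp, transport, metric_matrix):
    """Principal subbundle subspace at p for observations X on a manifold.

    p                 : (d,) point coordinates.
    X                 : (N, d) observations.
    log(p, x)         : Riemannian log of x at p, returns a (d,) tangent vector.
    exp(p, v)         : Riemannian exp at p.
    transport(x, y, v): parallel transport of v from T_x to T_y along the geodesic.
    metric_matrix(p)  : (d, d) matrix of h_p in the chosen basis.
    Returns a (k, d) array of frame vectors spanning the subbundle at p.
    """
    # kernel weights from the distances ||log_p(x_i)||_p
    logs_p = np.stack([log(p, x) for x in X])
    h_p = metric_matrix(p)
    d2 = np.einsum('ij,jk,ik->i', logs_p, h_p, logs_p)
    w = np.exp(-d2 / (2 * alpha ** 2)); w /= w.sum()

    # weighted tangent mean m(p), then weighted second moment at m(p)
    m = exp(p, w @ logs_p)
    logs_m = np.stack([log(m, x) for x in X])
    h_m = metric_matrix(m)
    d2m = np.einsum('ij,jk,ik->i', logs_m, h_m, logs_m)
    wm = np.exp(-d2m / (2 * alpha ** 2)); wm /= wm.sum()
    Sigma = (logs_m * wm[:, None]).T @ logs_m @ h_m

    # top-k eigenvectors (real by construction), transported back to T_p
    vals, vecs = np.linalg.eig(Sigma)
    order = np.argsort(-vals.real)[:k]
    return np.stack([transport(m, p, vecs[:, i].real) for i in order])
```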
§ APPLICATIONS
We now demonstrate how principal subbundles provide solutions to problems A, B, C, mentioned in the introduction. In particular, we reconstruct 2D submanifolds embedded in ^3 and ^100, respectively, and give a 2D tangent space representation of the latter. We furthermore evaluate a sub-Riemannian distance metric on ^50 learned from observations distributed around a 4-dimensional sphere embedded in ^50. In subsection <ref> we compute a 1D principal submanifold approximating data on the sphere (a Riemannian manifold).
§.§ Surface reconstruction in ^3 (problem B)
We reconstruct a 2D
surface, the 'head sculpture', based on a point cloud contained in the surface reconstruction benchmark dataset from <cit.>. According to the classification in <cit.>, the surface is of complexity level 2 out of 3, and noise of level 2 out of 3 has been added to the point cloud; see <cit.> for details. Note that the evaluations in the benchmarking paper were made after a preliminary denoising step, whereas our reconstruction was done on the raw point cloud. This is to illustrate the potential use of principal submanifolds for denoising. The hyperparameters we use for the principal subbundle are α = 0.001 and k = 2. See Appendix <ref> for a reconstruction of the face using observations distorted by noise level 3 out of 3.
Figure <ref> shows two principal submanifolds reconstructing the head sculpture locally: one is based around the tip of the nose (radius r=0.3) and one at the top left side of the head (r=0.25). Both base points are computed as the kernel-weighted mean around a chosen observation. The numerical parameters in Algorithm <ref>, determining the resolution, were L = 2500 (the number of geodesics) and Δ = 0.001 (the integration stepsize).
A principal submanifold corresponds to a chart on the surface; in particular, a normal chart. It is a basic fact of differential geometry that a complicated surface such as the head sculpture cannot be covered by a single such chart. One therefore needs to reconstruct the surface based on multiple principal submanifolds corresponding to different base points; however, principal submanifolds based at different points might not overlap in a smooth way due to noise. To construct a smooth surface covering the whole area, we thus need a scheme for combining different principal submanifolds M^ℰ_μ_1,M^ℰ_μ_2, …. Many such schemes are conceivable. In Appendix <ref>, we propose one that combines submanifolds by weighting points according to their sub-Riemannian distance to a set of nearest base points. The discrepancy between submanifolds in the areas of overlap depends on the level of noise. In the experiment shown in Figure <ref> we did not find it necessary to use a weighting scheme - see Appendix <ref> for a close-up illustration of the overlap.
§.§ Unfolding the S-surface in ^100 (problem C)
In this experiment, we demonstrate the use of principal subbundles to construct a representation of ^d-valued data in ^k, k < d. Let y_i = ((y_i)_1, (y_i)_2, (y_i)_3)^T ∈^3, i=1..3000,
be points on the S-surface, scaled such that its height, width and depth are 1. We embed each point in ^d, d = 100, by adding zeros, ỹ_i = ((y_i)_1, (y_i)_2, (y_i)_3, 0, …, 0)^T. The observations are then generated by adding Gaussian noise, x_i ∼ N(ỹ_i, σ^2 I_d) ∈^d for σ = 0.025.
The upper part of Figure <ref> shows the observations {x_i}_i=1..N and an approximating principal submanifold, projected to ^3 for the purpose of visualization. The base point of the principal submanifold is the local mean around the within-sample Fréchet mean w.r.t. Euclidean distance, μ = (0.47, 0.47, 0.49). The lower part of Figure <ref> shows the log representation of the observations in _μ^⋆≅ T_μM_α^k. The kernel range is α =0.01 and the rank is k = 2.
§.§ Learning a distance metric on ^50 (problem A)
We sample N=10000
points, {y_i}_i=1..N, uniformly on the k-dimensional unit sphere embedded in ^d, for k = 4,
d=50.
For each of these points y_i ∈^d we generate an observation x_i ∈^d by adding d-dimensional Gaussian noise, x_i ∼ N(y_i, σ I_d), where σ=0.01.
We generate 20 such data sets with associated principal subbundles ℰ_j, j=1..20.
For each data set we compute the SR distance d^ℰ_j(p,q), j=1… 20, where p=(1,0,…,0)∈^d and q= (- √(1/2), - √(1/2), 0, …, 0)∈^d. We find the mean, μ_0, and standard deviation, σ_0, of these 20 computed distances to be μ_0 = 1/20∑_j=1^20 d^ℰ_j(p,q) = 3/4π + 0.023, σ_0 = 0.025. This result shows that the learned distances are close to the true distance, d^𝕊^4(p,q) = 3/4π, on the 4-dimensional sphere.
§.§ Curve approximation on the sphere
In this experiment we randomly generate 20 datasets, each with N = 100 points distributed around a random curve on the sphere, 𝒮^2. The random curves are generated as follows. A fourth-order polynomial
f : → : t ↦ (t - a_1)(t - a_2)(t - a_3)(t - a_4)
is generated by sampling roots a_1, a_2 from a uniform distribution on (-1,0), and roots a_3, a_4 from a uniform distribution on (0,1). Using two such intervals yields polynomials with more complex curvature. The graph of the polynomial, P = {(t, f(t)) | t ∈ [-1,1] }, is considered a subset of T_p_0𝕊^2 and mapped to 𝒮^2 by the Riemannian exponential, exp_p_0, where p_0 = (0, 0, 1) is the north pole (in extrinsic coordinates).
Let {t_i}_i=1..N⊂ [-1,1] be 100 evenly spaced points. Let z_i = exp_p_0((t_i, f(t_i))), i=1… N, be points on the curve on 𝕊^2. The noisy observations are generated as x_i = exp_z_i(v_i), where v_i∼N(0, I_2 ·σ), a 2D isotropic Gaussian with marginal variance σ, assuming a representation of T_z_i𝕊^2 in an orthonormal basis. In our experiments we used σ = 5· 10^-4. Note that the resulting observations on 𝕊^2 are non-uniformly sampled along the curve (making the problem more difficult). See Figure <ref> for an example of such a randomly generated dataset.
For each randomly generated dataset we estimate a base point as the within-sample Fréchet mean w.r.t. the geodesic distance on the sphere. We use as kernel function a Gaussian density with standard deviation α = 0.045. This value is hand picked since our aim is to compare the performance of different methods disregarding uncertainty due to estimation of hyperparameters. Using this kernel function, we compute 3 curve approximations of the data set. Firstly, we compute the principal submanifold using Algorithm <ref>. Secondly, we compute the Principal submanifold without the centering and parallel transport step, i.e. the Principal flow <cit.>. Thirdly, we compute as baseline model the first principal geodesic from tangent PCA. For each approximation we compute the sum of squared errors (SSE), where the errors are measured by the length of the geodesic joining observation x_i and its geodesic projection to the given curve. Figure <ref> shows an example data set and its 3 curve approximations. Figure <ref> shows boxplots summarizing the 20 SSE's computed for each approximation method.
The SSE's and visual inspection of the corresponding plots show that the centered version of the principal submanifold is significantly more stable than the uncentered version (the principal flow). The uncentered version tends to stray away from the data when it reaches positions slightly outside of the point cloud. This is as expected, cf. our discussion in Section <ref>. The principal geodesic has the highest SSE, as expected for this type of data, which is distributed around a curve with relatively high curvature.
§ DISCUSSION AND FURTHER WORK
We have introduced the idea of modelling a data set {x_1,…,x_N}⊂^d by a tangent subbundle consisting of affine subspaces of ^d, and the sub-Riemannian geometry that it induces. We have demonstrated that geodesics w.r.t. this sub-Riemannian structure can be used to solve a number of important problems in statistics and machine learning, such as: reconstruction of submanifolds approximating the observations, finding lower dimensional representations and computing geometry-aware distances. Furthermore, we have shown that the framework generalizes to datasets on a given Riemannian manifold.
It can be considered a drawback of the framework that the point cloud must be relatively well connected, in the sense of not having large 'holes' or disconnected parts, relative to the kernel range. However, we conjecture that this can be somewhat alleviated by introducing a position-dependent range parameter.
§ ACKNOWLEDGEMENTS
M.A., J.B. and X.P. are supported by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement G-Statistics No. 786854). S.S. is partly supported by Novo Nordisk Foundation grant NNF18OC0052000 as well as Villum Foundation research grant 40582 and UCPH Data+ Strategy 2023 funds for interdisciplinary research. E.G. is supported by project GeoProCo from the Trond Mohn Foundation - Grant TMS2021STG02.
§ PROOFS
§.§ Smoothness of the principal subbundle
We show smoothness first on ^d and then on a Riemannian manifold (𝒩, h). The proof of the latter utilizes the former result in a chart, as well as smoothness results for the involved maps, which are only non-trivial in the manifold case.
*
Let p ∈^d ∖𝒮_α,k be arbitrary. We will show that there exists a local frame of smooth vector fields spanning the subspace ℰ^α,k_p' at every point p' on an open set 𝒰 around p. By Lemma 10.32 in <cit.>, this is equivalent to the subbundle being smooth on ^d∖𝒮_α,k.
The eigenvalues of Σ_α(p) at p are
λ_1(p) ≥…≥λ_k(p) > λ_k+1(p) ≥…≥λ_d(p),
where only λ_k and λ_k+1 are assumed to be different. Since Σ_α : ^d →^d× d is a smooth map, Theorem 3.1 of <cit.> implies that there exists an open set ℬ(p) ⊂^d around p and d continuous functions λ̅_i(·) : ℬ(p) → satisfying that λ̅_i(p') is an eigenvalue of Σ_α(p') for all p' ∈ℬ and λ̅_i(p) = λ_i(p), i = 1… d.
Since each λ̅_i is continuous, there exists an open subset 𝒰⊂ℬ on which the ordering λ̅_1(p') ≥…≥λ̅_d(p') holds for all p' ∈𝒰, and where λ̅_i(p') = λ̅_j(p') is only possible for i,j s.t. λ̅_i(p) = λ̅_j(p). In particular λ̅_i(p') < λ̅_k+1(p') for all i < k + 1 and p' ∈𝒰.
Theorem 3.2 of <cit.> now says that there exists a frame of analytic vector fields p ↦{X_1(p), …, X_k(p)} such that, for all p'∈𝒰,
span{X_1(p'), …, X_k(p')} = V_λ̅_1(p'), …, λ̅_k(p')(Σ_α(p'))
where V_λ̅_1(p'), …, λ̅_k(p')(Σ_α(p')) denotes the eigenspace of Σ_α(p') corresponding to eigenvalues λ̅_1(p'), …, λ̅_k(p'), which is exactly the principal subbundle subspace ℰ_p'^α, k.
To show that the principal subbundle on a Riemannian manifold is smooth, we need a result on smoothness of a certain map involving parallel transport.
Let the map f : 𝒩→𝒩 and the vector field O on 𝒩 be smooth. Let Π_x^y : T_x 𝒩→ T_y 𝒩 denote parallel transport along the (assumed unique) length-minimizing geodesic from x to y. Then the vector field
p↦Π_f(p)^pO(p) ∈ T_p 𝒩
is smooth for every p ∉Cut(f(p)).
For x,y ∈𝒩, the parallel transported vector Π_x^y W ∈ T_y 𝒩 of W ∈ T_x 𝒩 along a curve γ : (0,1) →𝒩 is the value at time 1 of a vector field V along γ satisfying the linear initial value problem (an ODE)
V̇^k(t) = -V^j(t)γ̇^i(t)Γ^k_ij(γ(t))
V(0) = W,
where Γ^k_ij, i,j,k ∈{1,…,d} are the Christoffel symbols determined by the metric h. See <cit.>, Section 4, for details.
If γ is a geodesic with initial velocity Q ∈ T_x 𝒩 then it is a solution to the geodesic equations (equations (<ref>) and (<ref>), below). In this case, we can write the parallel transport equation and the geodesic equations as a single, coupled, ODE:
V̇^k(t) = -V^j(t)γ̇^i(t)Γ^k_ij(γ(t))
γ̇^k(t) = U^k(t)
U̇^k(t) = -U^i(t)U^j(t)Γ^k_ij(γ(t))
U(0) = Q
V(0) = W
γ(0) = x.
Note that the equation for V is coupled with the equations for γ and U, but not vice versa, so that, in practice, the whole path γ can be computed first, and then subsequently V.
This is an initial value problem for a smooth ODE, and the fundamental theorem for ODEs states that solutions exist and depend smoothly on the initial conditions Q, W, x. This shows smoothness of the parallel transport operator in the case where γ((0,1)) is contained in a single chart. For the more general case, we refer to the technique used in the proof of Proposition 4.32 in <cit.> for showing that solutions found on individual charts overlap smoothly.
The map (<ref>) takes a point p ∈𝒩 to a vector field at time 1 satisfying equations (<ref>)-(<ref>). For each p, the initial conditions are
x = f(p)
Q = log^h_f(p)(p)
W = O(p)
all of which depend smoothly on p, if p ∉Cut(f(p)). Since the solution to the ODE depends smoothly on the initial conditions, and since the initial conditions depends smoothly on p, the vector field (<ref>) is smooth.
*
As in the Euclidean case, we want to prove the existence of a smooth frame around every point p ∈𝒩∖𝒮^'_α, k spanning the subbundle locally around p. We will make use of the corresponding result for 𝒩 = ^d, in a chart. In order to do this, we need to make sure that all of the involved maps are smooth as functions of p.
The tangent mean map m : 𝒩→𝒩 and the tensor field p ↦Σ_α(p) ∈ T_p 𝒩⊗ T_p 𝒩 are smooth if each logarithm log^h_p(x_i), i=1… N, is smooth as a function of the base point p ∈𝒩. This is ensured by the cut locus conditions in 𝒮^'_α,k.
Assuming smoothness of Σ_α, we now consider charts (U, φ) on 𝒩 and (O, ϕ) on T𝒩⊗ T𝒩, with U⊂^d, φ : U →φ(U) ⊂𝒩, respectively O⊂^d× d, ϕ : O →ϕ(O) ⊂ T𝒩⊗ T𝒩 (identifying each T_p𝒩⊗ T_p𝒩 with the space of endomorphisms on T_p𝒩, cf. Section <ref>), around the point p∈𝒩 and the tensor Σ_α(m(p)) ∈ T𝒩⊗ T𝒩, respectively. In these charts,
f := ϕ^-1∘Σ_α∘ m ∘φ
is a smooth map from ^d to ^d × d. Eigendecomposition of the matrix f(p'), p' ∈ U, is independent of the basis and thus of the choice of charts. As shown in the proof of Proposition <ref>, there exists a smooth frame p' ↦{X_1(p'), …, X_k(p')}, X_i(p')∈^d, defined on some open subset 𝒰⊂^d around φ^-1(p) s.t.
span{X_1(p'), …, X_k(p')} = V_k(f(p')), ∀ p' ∈𝒰,
where the right hand side is the eigenspace of f(p') corresponding to the largest k eigenvalues. We have thus shown the existence of a smooth frame on φ(U) ⊂𝒩 spanning the corresponding eigenspaces of Σ_α∘ m at every point of φ(U).
The last thing we need to take account of is the parallel transport map. Since parallel transport is an isometry, it holds that
span{Π_p'^y X_1(p'), …, Π_p'^y X_k(p')} = span{Π_p'^y F_1(p'), …, Π_p'^y F_k(p')}⊂ T_y𝒩,
where {F_i}_i=1..k is any other frame spanning the same subspace as {X_i}_i=1..k at p'. Thus, the parallel transported frame X spans the same subspace as the parallel transported eigenvectors {e_i}_i=1… k at p' (the X_i's are not necessarily eigenvectors, as explained in <cit.>). By Lemma <ref>, the map p ↦Π_m(p)^p V(p) is smooth for a smooth vector field V. We have thus shown that the principal subbundle at p is spanned by a smooth frame around p.
§.§ Proof of the sub-Riemannian exponential being a local diffeomorphism on the dual subbundle
We prove the result for a sub-Riemannian structure on a manifold 𝒩. The reader may substitute 𝒩 = ^d if they wish.
Let p∈𝒩 be arbitrary.
There exists an open subset C_p ⊂𝒟_p^⋆ containing 0 such that exp_p^𝒟|_C_p is a diffeomorphism onto its image. That is,
M_p^𝒟 := exp_p^𝒟(C_p) ⊂𝒩
is a smooth k-dimensional embedded submanifold of 𝒩 containing p.
We will show that exp_p^𝒟 is a local immersion by showing that d_0 exp_p^𝒟 is injective (<cit.>, Proposition 4.1). For any η∈ T_0 𝒟_p^⋆≅𝒟_p^⋆ it holds that
d_0 (exp_p^𝒟)(η) = d/ds|_s=0exp_p^𝒟(0 + s η)
= d/ds|_s=0γ_p^η(s)
= g^⋆(p) η,
where the second equality uses the fact that the sub-Riemannian exponential satisfies
exp_p^𝒟(s η) = γ_p^η(s), see corollary 8.36 in
<cit.>. Viewed as a map g_p^⋆: 𝒟_p^⋆→𝒟_p ⊂ T_p𝒩 (i.e. as the sub-Riemannian sharp map), g_p^⋆ is injective on 𝒟_p^⋆ by construction of 𝒟_p^⋆. Thus exp_p^𝒟 is an immersion. This implies the existence of a set C_p ⊂𝒟_p^⋆ containing 0 s.t.
exp_p^𝒟|_C_p is an embedding (<cit.>, Proposition 4.25), which implies that M_p^𝒟 := exp_p^𝒟(C_p) is an embedded k-dimensional submanifold of 𝒩.
p ∈ M_p^𝒟 since exp_p^𝒟(0) = p, by definition.
§.§ Expressing the second moment in coordinates
For some v,u ∈ T_p 𝒩, the expression v ⊗ u can be identified with an endomorphism on T_p 𝒩. Its coordinate representation is thus a d × d matrix. There seems to be some confusion about this in the geometric statistics literature, so we give details below. We first restate the lemma.
*
The tensor v ⊗ u is an element of the tensor product space T_p 𝒩⊗ T_p 𝒩. After choosing a Riemannian metric, there is a canonical isomorphism between T_p 𝒩 and its dual space, T^⋆_p 𝒩, given by the Riemannian flat map,
♭ : T_p 𝒩→ T^⋆_p 𝒩 : u ↦ h_p(u,·) =: u^♭.
Thus
T_p 𝒩⊗ T_p 𝒩≅ T_p 𝒩⊗ T^⋆_p 𝒩,
where elements of the latter space are denoted (1,1) tensors. Furthermore, there is a canonical isomorphism, independent of a Riemannian metric,
T_p 𝒩⊗ T^⋆_p 𝒩≅End(T_p 𝒩),
where End(T_p 𝒩) is the space of endomorphisms on T_p 𝒩. This isomorphism is given by the map Φ which takes an endomorphism A to the (1,1) tensor Φ(A) that acts on w ∈ T_p 𝒩 and η∈ T^⋆_p 𝒩 by Φ(A)(w,η) = η(Aw). The linear map corresponding to a (1,1) tensor of the form v ⊗ u^⋆, v∈ T_p 𝒩, u^⋆∈ T^⋆_p𝒩, is w ↦Φ^-1(v ⊗ u^⋆)(w)
= v · u^⋆(w), i.e. a scaling of v by u^⋆(w) ∈.
After choosing a basis for T_p 𝒩, the tangent vectors v, w can be represented as column vectors v, w ∈^d × 1. The flat map can be represented by the matrix h_p, which is the matrix representation of the Riemannian metric at p. After identifying covectors with row vectors (i.e. coordinate representations of linear maps from T_p 𝒩 to ), u^♭ can be represented as the row vector u^♭ = (h_p u)^T ∈^1 × d. This acts on w by u^♭(w) = (h_p u)^T w. Thus, w.r.t. some chosen basis, the matrix representation of our desired endomorphism is given by
Φ^-1(v ⊗ u^♭)= v (h_p u)^T = vu^T h_p.
§.§.§ Verifying independence of the coordinate system
Let Q be the change-of-basis matrix from basis a of T_p 𝒩 to basis b. Then Q^⋆ = (Q^T)^-1 is the corresponding change-of-basis matrix from basis a^⋆ to b^⋆ for T^⋆_p 𝒩, where these bases are dual to a, b. Thus, the change of basis of tangent vector v from a to b is computed as v_b = Q_ab v_a. The flat map is a linear map from T_p 𝒩 to T^⋆_p 𝒩, so if (h_p)_a is its representation w.r.t. bases a and a^⋆, then its representation w.r.t. bases b and b^⋆ is computed as
(h_p)_b = Q^⋆(h_p)_a Q^-1 = (Q^T)^-1(h_p)_a Q^-1.
We verify that the change of basis of the individual elements u,v,h_p matches the change of basis of
the matrix (as a linear map) (<ref>):
v_b u_b^T (h_p)_b = Q v_a (Q u_a)^T (Q^T)^-1(h_p)_a Q^-1
= Q v_a u_a^T (h_p)_a Q^-1.
As opposed to this, the expression v_b u_b^T does not transform properly under basis change: v_b u_b^T = Q v_a (Q u_a)^T = Q v_a u_a^T Q^T is only equal to Q v_a u_a^T Q^-1 if Q^T = Q^-1, i.e. if the basis change matrix is orthogonal, meaning that it only rotates the basis.
§ NOTES ON IMPLEMENTATION
At each step of the integration of a geodesic, eigenvectors need to be computed at the current position p. This involves evaluating the kernel K_α(| x_i - p |) for all i = 1..N. For large datasets, we suggest doing this using libraries specialized in such kernel operations, such as KeOps, as well as automatically filtering out points far away from p whose weight will be close to 0 anyway. We have not had the need to implement these optimizations in order to run the examples of Section <ref>.
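A minimal sketch of this filtering idea in plain NumPy (a KeOps-based variant is not shown); the cutoff multiplier is an assumption of ours, not a value used in our experiments.

```python
import numpy as np

def local_weights(p, X, alpha, cutoff=4.0):
    """Gaussian kernel weights around p, skipping points far beyond the kernel range.

    Points further than cutoff * alpha from p get weight exactly 0, which is a
    cheap approximation since their Gaussian weight is negligible anyway.
    """
    d = np.linalg.norm(X - p, axis=1)
    w = np.zeros(len(X))
    near = d < cutoff * alpha
    w[near] = np.exp(-d[near] ** 2 / (2 * alpha ** 2))
    s = w.sum()
    return w / s if s > 0 else w
```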
The integration of the L geodesics in the algorithm for the principal submanifold can be parallelized; the computation of each one is independent from the rest. Again, we have not had the need to do this for running our experiments.
§.§ Choice of integration scheme
The integration of Hamilton's equations can be done using a symplectic integration scheme, which aims at keeping the Hamiltonian constant. A constant Hamiltonian is equivalent to constant speed, cf. Eq. (<ref>). This is desired because the computation of curve length and distance via Eq. (<ref>) assumes constant speed. We compared ordinary Euler integration to semi-implicit Euler (see e.g. <cit.>), a first-order symplectic integrator, and found the Hamiltonian to be better preserved using ordinary Euler integration in our experiments.
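For reference, a sketch of ordinary (explicit) Euler integration of Hamilton's equations, with the Hamiltonian value tracked along the path so that drift can be monitored; the gradient callables dH_dx and dH_deta are assumed to be supplied (e.g. by automatic differentiation), and the names are ours.

```python
import numpy as np

def integrate_hamiltonian(x0, eta0, dH_dx, dH_deta, H, n_steps, dt):
    """Explicit Euler integration of Hamilton's equations
        x' = dH/deta,   eta' = -dH/dx,
    returning the path and the Hamiltonian values along it (to monitor drift).
    """
    x, eta = x0.copy(), eta0.copy()
    path, energies = [x.copy()], [H(x, eta)]
    for _ in range(n_steps):
        x_new = x + dt * dH_deta(x, eta)
        eta_new = eta - dt * dH_dx(x, eta)
        x, eta = x_new, eta_new
        path.append(x.copy())
        energies.append(H(x, eta))
    return np.array(path), np.array(energies)
```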
§ CHOOSING THE KERNEL RANGE Α AND BUNDLE RANK K
Firstly, note that these parameters can be considered a modelling choice, expressing the scale at which we want to analyze the data - what scale of variation to take into account. However, one can aim to find the 'lowest level of variation that is not due to random noise'. Secondly, note that the 'optimal' value of one hyperparameter depends on the value of the other. Since the rank k takes a finite number of values k ∈{1, …, d-1}, we suggest starting by estimating this. See <cit.> for a survey and benchmarking of different methods. Given an estimated k, we suggest selecting a range parameter for which the separation between eigenvalues λ_k and λ_k+1 is clearest on average. The optimal kernel range depends on the level of noise and the rate of change of the affine subspace ℰ_p as a function of p, which, in the case of the manifold hypothesis, is an expression of the curvature of the underlying manifold. A fast-varying ℰ calls for a smaller α, while high levels of noise as well as a lower number of observations call for a larger α.
§ ALGORITHM FOR COMBINING PRINCIPAL SUBMANIFOLDS FOR 2D SURFACE RECONSTRUCTION
In this section, we present an algorithm for combining principal submanifolds {M^k_μ_j(r_j)}_j=1..l based at different base points μ_j, j=1… l. In this case, k=2 and we'll write M_μ_j instead of M^2_μ_j. Given a point x∈^3, the algorithm first projects x to a set of nearest principal submanifolds, and then represents x as a weighted average of these projections, weighted by the SR distance between a projection and its corresponding base point. The point x can e.g. be an observation, x ∈{x_i}_i=1..N, or a point in a principal submanifold, x∈ M_μ_j. The algorithm can then be run for each point x in {x_i}_i=1..N or in M_μ_j, j = 1..l.
The point sets representing principal submanifolds M_μ_j(r_j), j=1… l, are generated by Algorithm 1. For each point p ∈ M_μ_j(r_j), we assume that the corresponding initial cotangent η(p) ∈_μ_j^⋆ has been stored.
Apart from the hyperparameters of the principal subbundle and submanifolds, the algorithm needs a 'threshold parameter' ϵ > 0. x will not be projected to principal submanifold M_μ_j if the distance between x and its projection x̂_j to M_μ_j is greater than ϵ. Thus, the size of ϵ should be comparable to an estimate of the noise-level in the point cloud.
The algorithm is the following.
* Project to each submanifold: project x to each M_μ_j(r_j), j =1..l, w.r.t. Euclidean distance, i.e. find the closest point in M_μ_j(r_j) w.r.t. Euclidean distance. Denote this projection of x to M_μ_j(r_j) by x̂_j. Denote the corresponding initial cotangent by η(x̂_j) and the SR distance by d_j := d^ℰ(μ_j, x̂_j) = ‖η(x̂_j) ‖.
* Filter out projections: let B := {j ∈{1, …, l} : ‖ x - x̂_j‖ < ϵ} consist of the indices of the base points satisfying that the projection of x to M_μ_j is sufficiently close to x.
* Rescale distances: set d̃_j := d_j / s_j(d_j), where s_j is a continuous, decaying bijection with domain and image given by s_j : [0, r_j] → [0,1]. We suggest using the affine function satisfying these constraints.
* Compute weighted average: the weighted representation of x is now computed as
x̂_ = 1/∑_j ∈ B w_j∑_j ∈ B w_j x̂_j,
where (unnormalized) weights w_j are given by
w_j(x) = e^-(d̃_j - d̃_j^⋆)^2/(2 σ), j=1… |B|,
and j^⋆ := argmin_{j ∈ B} d_j is the index of the principal submanifold that is closest w.r.t. SR distance. The standard deviation σ in w_j controls how fast the weights should go to zero. A general-purpose choice is σ = max_{j ∈{1, .., l}} r_j. A code sketch of steps 1-4 is given below.
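A sketch of steps 1-4, assuming each submanifold is stored as a point set together with the SR distances ‖η(p)‖ of its points to the base point; the closest-submanifold term is taken w.r.t. the rescaled distances for simplicity, and all names are illustrative rather than taken from our implementation.

```python
import numpy as np

def combine_submanifolds(x, submanifolds, sr_dists, radii, eps, sigma):
    """Weighted representation of x from its projections onto nearby submanifolds.

    submanifolds : list of (L_j, 3) arrays of points of M_{mu_j}(r_j).
    sr_dists     : list of (L_j,) arrays, SR distance ||eta(p)|| from mu_j to each point.
    radii        : list of scalars r_j; eps, sigma as in the algorithm above.
    """
    hats, d_tilde = [], []
    for M_j, dists_j, r_j in zip(submanifolds, sr_dists, radii):
        i = int(np.argmin(np.linalg.norm(M_j - x, axis=1)))   # step 1: projection
        if np.linalg.norm(M_j[i] - x) < eps:                  # step 2: filter
            s = max(1.0 - dists_j[i] / r_j, 1e-8)             # step 3: affine rescaling s_j
            hats.append(M_j[i])
            d_tilde.append(dists_j[i] / s)
    if not hats:
        return x                                              # no submanifold close enough
    d_tilde = np.array(d_tilde)
    w = np.exp(-(d_tilde - d_tilde.min()) ** 2 / (2 * sigma)) # step 4: Gaussian weights
    return (w[:, None] * np.array(hats)).sum(axis=0) / w.sum()
```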
§ SUPPLEMENTARY FIGURES
§.§ Illustration of overlapping submanifolds
Figure <ref> is a supplement to figure <ref>, zooming in on the region of overlap between the two principal submanifolds.
§.§ Reconstruction of head sculpture surface under noise level 3 out of 3
Figure <ref> illustrates the reconstruction of the face of the 'head sculpture' (from the benchmark dataset described in <cit.>), with noise level 3 out of 3. The parameters are the same as for the experiment described in section <ref> except for a slightly larger kernel range.
§.§ Illustration of the log map on a 4-dimensional sphere in ^50
Figure <ref> shows a single computed geodesic, found by solving the log problem log_p(q), for p, q and observations as described in Section <ref>. The distance d^ℰ(p,q) is estimated as the length of the computed geodesic. The blue points are observations on the 4-dimensional sphere embedded in ^50, projected to ^3.
| http://arxiv.org/abs/2307.01430v1 | 20230704014734 | Continual Learning in Open-vocabulary Classification with Complementary Memory Systems | ["Zhen Zhu", "Weijie Lyu", "Yao Xiao", "Derek Hoiem"] | cs.CV | ["cs.CV"] |
Continual Learning in Open-vocabulary Classification with Complementary Memory Systems
Zhen Zhu, Weijie Lyu, Yao Xiao, Derek Hoiem
July 4, 2023
=======================================================================================
We introduce a method for flexible continual learning in open-vocabulary image classification, drawing inspiration from the complementary learning systems observed in human cognition. We propose a “tree probe” method, an adaptation of lazy learning principles, which enables fast learning from new examples with competitive accuracy to batch-trained linear models. Further, we propose a method to combine predictions from a CLIP zero-shot model and the exemplar-based model, using the zero-shot estimated probability that a sample's class is within any of the exemplar classes. We test in data incremental, class incremental, and task incremental settings, as well as the ability to perform flexible inference on varying subsets of zero-shot and learned categories. Our proposed method achieves a good balance of learning speed, target task effectiveness, and zero-shot effectiveness. Code will be available at https://github.com/jessemelpolio/TreeProbe.
§ INTRODUCTION
We would like image classification models that competently perform any arbitrary classification tasks and improve with each new example.
By learning to match images to corresponding text, open-vocabulary image classifiers such as CLIP <cit.> can perform arbitrary “zero-shot” tasks, assigning each image to the category that best matches from among a set of options. Performance for a target task can be improved, for example, by training a linear classifier (or “linear probe”) with the model's image features and new image/label pairs (“exemplars”). But it is not clear how to maintain the flexibility of the original model while learning from new examples.
We are inspired by the flexibility of human learning and inference. Humans consolidate vast experiences and observations but can also learn on the fly. For example, a child on a walk may initially identify a blue jay and robin and, after being shown a cardinal, identify one a few minutes later. Flexibility in human learning is enabled by complementary learning systems <cit.>: some slowly consolidate many experiences to enable fast inference without conscious effort, while others file individual observations and episodes for anytime retrieval and use. How can we make computer learning systems that likewise benefit from consolidated memory systems and exemplar-based memory to continually learn with zero-shot inference ability?
We investigate in the context of open-vocabulary image classification, using CLIP as the consolidated model. One challenge is to create exemplar-based memory systems that are performant in both accuracy and learning time (Sec. <ref>). A linear probe can achieve good accuracy, but learning from a new example requires 𝒪(n) time, where n represents the total number of exemplars. K-nearest neighbor can learn in 𝒪(1) time but tends to be less accurate than linear probe. Based on local linear models from the lazy learning literature <cit.>, we propose a “tree probe” method that hierarchically clusters examples and trains linear models for each cluster. The time to learn from a new example is 𝒪(log n), and the accuracy is close to linear probe.
A second challenge is to predict using both CLIP and exemplar-based models (Sec. <ref>). The exemplar-based model tends to perform well when an image's label is “exemplar-covered”, i.e. the exemplar set contains at least one instance with the same label, while the consolidated model can potentially predict any label. At test time, we may not know whether the label of a given test image is exemplar-covered.
Our idea is to use CLIP to estimate the probability that an image's label is exemplar-covered and use that probability to weight the predictions of the two models.
Our experiments (Sec. <ref>) test flexible continual learning in the forms of data-incremental, class-incremental, and task-incremental learning, as well as flexible inference with categorization tasks that involve some, all, or none of the exemplar-covered labels. Our proposed methods to predict from exemplars and combine exemplar-based and consolidated models are surprisingly effective. In summary, the reader may benefit from the following paper contributions:
* Tree-probe exemplar model: Our locally linear models using a hierarchical clustering can be considered a member of the long-studied lazy learning approaches <cit.>, but we are not aware of this specific method being proposed or used. Tree-probe has training time that is logarithmic in the number of training samples and achieves better accuracy than nearest neighbor approaches. This can be attractive for interactive search, active learning, and other applications where annotated examples are received in a trickle and fast learning is required.
* Exemplar and consolidated model combination with embeddings: Our approach, to use the consolidated model to estimate applicability of the exemplar model and to combine model predictions in the label embedding space, enables effective continual open-vocabulary learning and performs significantly better than alternatives we tested.
* Flexible learning/inference experiments: Our experimental setup evaluates both the ability to continually learn from new samples and to flexibly apply those learnings to various category sets. This may provide a useful test framework for researchers seeking to further improve open-vocabulary continual learning.
§ RELATED WORKS
§.§ Instance-based learning (IBL)
Instance-based learning (IBL) <cit.> is a family of learning algorithms that construct a decision boundary using a memory of training instances, allowing for efficient and flexible adaptation to new data points. This learning paradigm relies on the principle of local approximation, where predictions are made based on the stored instances that are most similar to the query. One of the most well-known IBL methods is the k-Nearest Neighbors (KNN) algorithm, which has been extensively studied for its simplicity and effectiveness in various domains, including classification and regression tasks <cit.>.
Closely related to IBL is lazy learning, in which training data is organized for prediction at inference time instead of training time. This approach can be more practical than batch or “eager” learning in applications like online recommendation systems, when the training data is constantly evolving. Lazy learning approaches can include KNN or locally linear regression <cit.> or classification models <cit.>, where models are trained based on neighbors to the query.
We investigate KNN, a form of locally linear classifiers, and globally linear classifiers, exploring their trade-offs for training and inference time and accuracy on a growing set of exemplars, as well as how to combine their predictions with a static image-language foundation model.
§.§ Open-vocabulary classification
Open-vocabulary classification aims to categorize objects without being constrained by predefined categories or labels. The development of CLIP <cit.> has made this task achievable by leveraging its informative feature spaces. CLIP demonstrates the ability to output similar image embeddings for images belonging to the same category. Its effectiveness across various applications has been widely verified <cit.>. We utilize CLIP as our zero-shot model, complementing information lost by the memory model and maintaining open-vocabulary classification performance.
§.§ Continual learning
Continual learning strives to enable models to acquire new knowledge over time while preserving previously learned knowledge <cit.>. Approaches to continual learning can be broadly categorized into regularization <cit.>, parameter isolation <cit.>, and rehearsal methods <cit.>. Regularization techniques generally impose constraints on the learning process to alleviate forgetting. Parameter isolation methods maintain learning stability by fixing subsets of parameters <cit.> or extending the model by adding new parameters <cit.>. Rehearsal methods involve storing and replaying past data samples during training <cit.>. Our method can be considered a variant of rehearsal methods; however, instead of storing the actual images, we store image embeddings to represent past data, which helps save storage space while retaining essential information.
While prior works explore how to adapt to increasing numbers of examples, classes, or tasks, or to domain shifts, ours is the first to our knowledge to address continual learning in the context of open-vocabulary image classification, extending the capabilities of a model capable of zero-shot prediction.
§ METHOD
Our approach is inspired by complementary learning systems (CLS) <cit.>. CLS theory suggests that the human brain comprises two complementary subsystems: a rapid learning system and a slow learning system, which together facilitate efficient memory storage, retrieval, and generalization. The rapid learning system forms new memories and associations quickly, allowing for the storage of unique and episodic information. The slow learning system is responsible for the gradual extraction of general knowledge and regularities across experiences. The interplay between these two complementary systems enables the brain to achieve a balance between rapid encoding of novel information and gradual generalization of knowledge.
Analogous to CLS, our model comprises two modules: a CLIP-based base image encoder as the slow learning system, and a memory model storing extractions of images and annotations for rapid learning. We expect both modules to generate individual predictions and then fuse the outputs. In the following, we detail these modules and their fusion operations, aimed at enhancing performance for learned classes while maintaining zero-shot classification performance.
§.§ Open-vocabulary image classification
Image classification tasks in machine learning can be broadly bifurcated into closed-set and open-vocabulary scenarios. In closed-set classification, the model is trained on a finite set of known classes, and the goal is to classify new instances into one of these predefined categories. This paradigm, however, is incapable of recognizing or accommodating classes outside of the original training set. On the other hand, open-vocabulary classification allows for a more dynamic setting in which the set of classes can be defined at inference time. This flexibility introduces new challenges but enables the same model to more easily extend or blend its learned concepts and categories. Open-vocabulary classification is typically enabled by learning a mapping from a text label to a classification vector, e.g. using a language model as in CLIP <cit.>.
At inference time, both closed-set and open-vocabulary classification involve choosing the most probable class y_i from a candidate class set Y given an input image I. We can postulate a function f that maps I to an embedding vector e_I, represented as e_I= f(I).
Additionally, a per-class weight vector w_i is required to map e_I to a corresponding class logit l_i via an inner product, resulting in l_i= w_i· e_I.
These weight vectors can either be learned or contextually provided. A softmax function is subsequently used to convert logits into probabilities: p(y=y_i | I) = exp(l_i)/∑_j=1^nexp(l_j), i = 1, …, n where n is the number of classes.
The model finally yields the label ŷ with the highest probability: ŷ = argmax_y_i p(y=y_i | I).
§.§ Slow learning system: zero-shot model
Artificial neural networks, as exemplified by foundational models such as CLIP <cit.>, epitomize slow learning systems. Such networks learn iteratively, adjusting weights over multiple training epochs, aggregating over training data and developing increasingly abstract representations with increasing network depth.
Our approach leverages the CLIP image encoder as a slow learning mechanism due to its training on a massive image-text dataset and its ability for open-vocabulary classification by comparing image encodings to text encodings. We maintain fixed CLIP encoders throughout our project, since fine-tuning degrades the model's breadth of applicability.
Let f_img denote the image encoder and f_txt the text encoder. Upon receiving an input image I and a collection of textual labels T = {t_1, t_2, …, t_n}, the CLIP model maps I and T to their respective image and text embeddings, e_I and e_t_i, represented as e_I = f_img(I), e_t_i = f_txt(t_i).
Here, e_t_i serves a role analogous to the weight vector w_i from the previously defined classification model. The model computes logits for each class as cosine similarity between the image and text label, weighted by temperature τ (=100 in CLIP): s(e_I, e_t_i) = τ·e_I · e_t_i/|e_I||e_t_i|, i = 1, …, n.
Thus, cosine similarity substitutes for the inner product operation prevalent in closed-set models, and the model applies a softmax function
to transform logits into probabilities and selects the label ŷ with the highest probability, as in Sec. <ref>.
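As an illustration of this zero-shot pipeline, the following sketch uses the publicly released OpenAI CLIP package with the ViT-B/32 checkpoint; the image path and the label set are placeholders.

```python
import torch
import clip  # OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git)
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["blue jay", "robin", "cardinal"]          # class set defined at inference time
text = clip.tokenize([f"a photo of a {c}" for c in labels]).to(device)
image = preprocess(Image.open("bird.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    e_I = model.encode_image(image)
    e_T = model.encode_text(text)
    e_I = e_I / e_I.norm(dim=-1, keepdim=True)
    e_T = e_T / e_T.norm(dim=-1, keepdim=True)
    logits = 100.0 * e_I @ e_T.T                    # temperature tau = 100, as in CLIP
    probs = logits.softmax(dim=-1)

print(labels[probs.argmax().item()])
```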
§.§ Rapid learning system: exemplar-based memory model
For the rapid learning system, given one or more exemplars (image-label pairs), our goal is to maximize classification performance with minimal training and acceptable inference time.
We consider two approaches: instance-based and model-based prediction. Instance-based prediction leverages the exemplar set directly by retrieving and comparing samples. Model-based prediction seeks to capture the underlying structure of the data through a parameterized model.
Our memory module M stores encoded image embeddings along with their text labels T. Each entry in the memory can be denoted by M_j={e_I_j, t_j} where j represents the entry index, e_I_j represents the image embedding of I_j, and t_j is the corresponding label.
KNN:
Given e_I, the KNN memory module finds its most similar k entries in the memory through cosine similarities between e_I and all e_I_j in the memory. Let 𝒩_k(e_I) be the set of indices of the k highest cosine similarity scores to e_I. KNN classification for e_I can be performed by majority voting () from the values:
ŷ = argmax_y∑_j ∈𝒩_k(e_I)1(t_j = y).
Here, 1(·) is an indicator function. The probability of e_I being label y_i is
p_e(y=y_i|e_I) = ∑_j ∈𝒩_k(e_I)1(t_j = y_i)/k, and the label with maximum probability ŷ is predicted.
An alternative prediction approach is to produce a memory embedding, denoted as 𝐯_mem. For the retrieved instances, we encode the text labels into text embeddings using the CLIP text encoder. We could average the retrieved text embeddings ():
𝐯_mem = 1/k∑_j ∈𝒩_k(e_I) f_txt(t_j).
Related to attention in transformers, we could also use a weighted average based on similarity ():
𝐯_mem = ∑_j ∈𝒩_k(e_I)β_j· f_txt(t_j), where β_j = exp(s(e_I, e_I_j))/∑_j' ∈𝒩_k(e_I)exp(s(e_I, e_I_j')).
KNN takes virtually no time to train (𝒪(1)), and reasonably fast retrieval is possible with optimized libraries and parallel computing. However, accuracy tends to be lower than model-based methods.
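A sketch of the three KNN-based prediction variants (majority vote, average text embedding, and similarity-weighted average), assuming the stored image embeddings and label text embeddings are pre-normalized; the temperature scaling of the similarity is omitted for brevity, and the names are ours.

```python
import numpy as np

def knn_predict(e_I, mem_emb, mem_labels, mem_text_emb, k=6, mode="weighted"):
    """Exemplar prediction from the k nearest stored image embeddings.

    e_I          : (d,) normalized query image embedding.
    mem_emb      : (n, d) normalized stored image embeddings.
    mem_labels   : (n,) stored label indices.
    mem_text_emb : (n, d) CLIP text embeddings of the stored labels.
    mode         : "majority" vote over labels, "average" text embedding,
                   or "weighted" similarity-weighted average text embedding.
    """
    sims = mem_emb @ e_I                              # cosine similarity (pre-normalized)
    idx = np.argsort(-sims)[:k]
    if mode == "majority":
        vals, counts = np.unique(mem_labels[idx], return_counts=True)
        return vals[np.argmax(counts)]                # predicted label index
    if mode == "average":
        return mem_text_emb[idx].mean(axis=0)         # memory embedding v_mem
    beta = np.exp(sims[idx]); beta /= beta.sum()      # attention-like weights
    return beta @ mem_text_emb[idx]
```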
Linear probe: Model prediction offers an alternative approach, learning a fixed set of parameters that generalize patterns and learn the underlying structure of the data to maximize classification performance.
Whenever new exemplars are added, the linear probe () method is to extract image embeddings and train linear classifiers on all accumulated exemplars. This relates to GDumb <cit.>, a simple experience replay method that retrains models from scratch each time new data is received, but, to enable rapid learning, we do not retrain or fine-tune the encoder.
As in CLIP <cit.>, we use the LogisticRegression class from the sklearn library as the linear classifier. The output probability can be written as:
p_e(y = y_i | e_I; θ) = exp(θ_y_i^T e_I)/∑_y_j∈𝐂exp(θ_y_j^T e_I),
where θ_y_i represents the learned model parameters for label y_i. The memory embedding is the text embedding of the label ŷ giving the maximum probability: 𝐯_mem=f_txt(t_ŷ).
Compared to KNN, the model is much slower to train, 𝒪(n) for n training samples assuming a constant number of epochs, which may be prohibitive when the model needs to be updated quickly based on few examples. However, inference is faster than KNN, and classification accuracy tends to be higher.
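A minimal sketch of the linear probe retraining step; the LogisticRegression settings follow the values stated in our implementation details (regularization strength 0.316, maximum 5K iterations), while the function name is ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_linear_probe(mem_emb, mem_labels):
    """Retrain the linear probe on all accumulated exemplar embeddings.

    mem_emb    : (n, d) array of stored image embeddings.
    mem_labels : (n,) array of label indices.
    """
    clf = LogisticRegression(C=0.316, max_iter=5000)
    clf.fit(mem_emb, mem_labels)
    return clf

# probabilities for a query embedding e_I of shape (d,):
# p = clf.predict_proba(e_I.reshape(1, -1))[0]
```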
Tree probe: In a continual learning setting, we would ideally have fast training time of KNN with the relatively good accuracy of . Dusting off the older literature, we take inspiration from the instance-based and lazy learning <cit.>, particularly locally linear models <cit.>. These methods classify a test sample by finding its k-nearest neighbors and applying a linear classifier trained on the neighbors. This achieves 𝒪(1) training time but may be impractical for inference, since a new classifier may need to be trained for each test sample.
Instead, we propose an approximation, building a clustering tree from the training data and training a linear classifier in each leaf node. Starting from a root node, we search for the nearest leaf node for a new data point and insert it if the node has not reached the predefined capacity ψ (=10000 in our experiments). If a leaf node reaches capacity, it splits into two child nodes and becomes a non-leaf node. The attached data points are distributed to their children using KMeans clustering with two clusters. In experiments, when receiving new data, samples are added into the cluster tree one by one.
Only classifiers in affected leaf node(s) need to be retrained. When fixing the number of linear model training epochs and KMeans iterations, the complexity to incorporate a new exemplar in training is 𝒪(log n+ψ); the training time stays limited even when the total number of exemplars is very large.
The simplest inference method would be to assign a test sample to a leaf node in the cluster tree and classify it within the corresponding linear model, but this may lead to non-smooth predictions for samples near the cluster boundaries. In our experiments, we use an ensemble of the classifiers from leaf nodes corresponding to the k nearest neighbors for prediction. Experimentally, we find this gives a modest boost to performance with a slight increase in inference time.
For each classifier c, θ^c_y_i represents the learned parameters for class y_i from the corresponding class set 𝐂_c. The output probability is calculated by averaging the output probabilities of each classifier in 𝒞_k(e_I).
The memory embedding can be computed as the text embedding of the most likely label or the average text embedding for the most likely label of each classifier in the retrieval set, but we obtain the best performance using a similarity-weighted average:
𝐯_mem = ∑_c ∈𝒞_k(e_I)β_c· f_txt(argmax_y_i ∈𝐂_c p(y = y_i | e_I; c)), where β_c = exp(s(e_I, e_I_c))/∑_c' ∈𝒞_k(e_I)exp(s(e_I, e_I_c')).
This method, denoted tree probe, achieves similar accuracy to the linear probe in our continual learning experiments, but with much faster training time.
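The following is a simplified sketch of the tree probe data structure: a cluster tree whose leaves hold at most `capacity` exemplars and one logistic-regression classifier each. Inference here uses only the single leaf containing the query, whereas our full method ensembles the classifiers of the k nearest leaves; class and method names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class TreeProbe:
    """Cluster tree over exemplar embeddings with one linear classifier per leaf."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.X, self.y = [], []       # exemplars stored in this node (leaf only)
        self.children = None          # (left, right) after splitting
        self.centers = None           # (2, d) KMeans centers routing to the children
        self.clf = None

    def add(self, x, label):
        if self.children is not None:                 # route to the nearest child
            i = int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))
            self.children[i].add(x, label)
            return
        self.X.append(x); self.y.append(label)
        if len(self.X) > self.capacity:               # leaf reached capacity: split in two
            km = KMeans(n_clusters=2, n_init=10).fit(np.array(self.X))
            self.children = (TreeProbe(self.capacity), TreeProbe(self.capacity))
            self.centers = km.cluster_centers_
            for xi, yi, ci in zip(self.X, self.y, km.labels_):
                child = self.children[int(ci)]
                child.X.append(xi); child.y.append(yi)
            for child in self.children:
                child._refit()
            self.X, self.y, self.clf = [], [], None
        else:
            self._refit()                             # only the affected leaf is retrained

    def _refit(self):
        if len(set(self.y)) > 1:
            self.clf = LogisticRegression(C=0.316, max_iter=5000).fit(np.array(self.X), self.y)

    def predict_proba(self, x):
        """Class probabilities from the leaf containing x (single-leaf inference)."""
        if self.children is not None:
            i = int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))
            return self.children[i].predict_proba(x)
        if self.clf is None:                          # degenerate leaf: one class or empty
            return {self.y[0]: 1.0} if self.y else {}
        p = self.clf.predict_proba(x.reshape(1, -1))[0]
        return dict(zip(self.clf.classes_, p))
```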
§.§ Fusing predictions of the two models
We want to integrate predictions from the CLIP model and the exemplar-based memory model to retain both good open-vocabulary classification performance and high accuracy on exemplar classes. This can be tricky, especially when a subset of classes in the task are covered by the exemplar classes.
One way is to obtain an output embedding vector 𝐯_out by averaging the zero-shot embedding e_I and memory embedding 𝐯_mem:
𝐯_out = α𝐯_mem + (1- α) e_I.
Then we follow the open-vocabulary classification pipeline introduced in Sec. <ref> to obtain the final label. In design, α = 0.5 is natural since it reflects equal confidences on zero-shot prediction and memory prediction. This simple fuse operation is called average vector ().
From the probabilistic perspective, given the target class set 𝐂_t, alternatively we can average probabilities from p_e and p_z. Similarly, the final probability is:
p(y=y_i|I)=α p_e(y=y_i|I) + (1-α)p_z(y=y_i|I).
Here α is also 0.5 and we term this fuse operation average probability.
The average-vector and average-probability approaches presume equivalent influence of the exemplar and zero-shot models for all samples. This presumption does not hold when a test sample's label falls within the exemplar's domain, where the exemplar model is typically more accurate, or outside of this domain, where the zero-shot model tends to outperform.
Addressing this issue, we devise an adaptive weighting mechanism, named Adaptive Instance Marginalization (), that estimates the likelihood of a test sample's label being in the exemplar set and balances the predictions from both models accordingly. The target label set is divided into exemplar y ∈𝐂_e and non-exemplar y∉𝐂_e subsets. The likelihoods p(y∈𝐂_e|I) and p(y∉𝐂_e|I) are obtained by summing the probabilities over these subsets, with the zero-shot prediction providing a confidence metric for label set classification. Namely, p(y∈𝐂_e|I) ∝∑_i ∈𝐂_e p_z(y=y_i|x) and p(y∉𝐂_e|I) ∝∑_i ∉𝐂_e p_z(y=y_i|x)= 1-p(y_i∈𝐂_e|I).
To incorporate this, we revise Eq. <ref> and Eq. <ref>, replacing α with p(y∈𝐂_e|I), resulting in two variants of fuse operations, namely AIM-Emb and AIM-Prob. This adaptive mechanism effectively capitalizes on the strengths of both prediction approaches, improving overall performance.
Since zero-shot predictions also have reasonable accuracies over exemplar classes, we also incorporate them with memory predictions for AIM-Prob:
p(y=y_i|I)=p(y∈𝐂_e|I) p_z(y=y_i|I)p_e(y=y_i|I)/∑_j∈𝐂_e p_z(y=y_j|I) p_e(y=y_j|I) + p(y ∉𝐂_e|I) p_z(y=y_i|I).
This induces a slight performance improvement.
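A sketch of the AIM-Prob fusion above, operating on precomputed probability vectors; the boolean mask marking exemplar-covered labels is an input we assume is available, and names are ours.

```python
import numpy as np

def aim_prob_fuse(p_z, p_e, exemplar_mask):
    """Adaptive Instance Marginalization over probabilities (AIM-Prob).

    p_z           : (n,) zero-shot probabilities over the target label set.
    p_e           : (n,) exemplar-model probabilities (zero outside exemplar classes).
    exemplar_mask : (n,) boolean, True where the label is exemplar-covered.
    """
    p_in = p_z[exemplar_mask].sum()                  # p(y in C_e | I), from the zero-shot model
    fused = (1.0 - p_in) * p_z                       # zero-shot term, weighted by p(y not in C_e | I)
    joint = p_z * p_e * exemplar_mask                # combine both models on exemplar classes
    if joint.sum() > 0:
        fused = fused + p_in * joint / joint.sum()
    return fused
```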
§ EXPERIMENTAL SETUP
§.§ Tasks
In this paper, we evaluate our system through classification tasks, divided into target and zero-shot tasks based on their exposure to the memory model. We utilize general tasks such as ImageNet <cit.>, SUN397 <cit.>, CIFAR100/10 <cit.>, Caltech101 <cit.>, and fine-grained tasks like EuroSAT <cit.>, OxfordIIITPets <cit.>, SVHN <cit.>, DTD <cit.>, Flower102 <cit.>, FGVCAircraft <cit.>, StanfordCars <cit.>, Food101 <cit.>, Resisc45 <cit.>, UCF101 <cit.>.
Hyper-parameter searches are conducted on target tasks CIFAR10, SVHN, and Resisc45, and a zero-shot task Caltech101. For main results, to mitigate hyperparameter selection bias, we adopt different tasks: CIFAR100, SUN397, FGVCAircraft, EuroSAT, OxfordIIITPets, StanfordCars, Food101 and Flowers102 as target tasks; and ImageNet, UCF101, and DTD as zero-shot tasks. This selection, comprising both general and fine-grained tasks, more thoroughly evaluates our model's zero-shot performance. More details including prompt templates and zero-shot performances on all tasks are provided in the supplementary materials.
§.§ Evaluation scenarios
We consider several continual learning scenarios for receiving data: 1) Data incremental: A fraction of the training data, randomly sampled without enforcing class balance, is added in each stage; 2) Class incremental: All training data for a randomly sampled subset of classes are added in each stage; 3) Task incremental: All data for a single task, i.e. a dataset of examples assigned to a set of target labels, are added in each stage.
Data incremental learning includes seven stages, each comprising 2%, 4%, 8%, 16%, 32%, 64%, and 100% of task data respectively. Class incremental learning divides a task into five stages, each containing 20% of classes. In task incremental learning, each task is considered a stage.
In data and class incremental experiments, models are built separately for each target task. A target task is fully evaluated if there is at least one training sample for that task, even if there are no training samples for some classes. In task incremental, one model is built spanning all of the accumulated labels in each stage. In all cases, results are reported as the average accuracy of target tasks and of a held-out zero-shot task at each stage, all normalized by baseline zero-shot accuracy.
In the task incremental setting, after all training data for the target tasks is received, we also evaluate each method's performance in several inference scenarios: (T) target task; (Z) zero-shot task; (U) union of target task labels, where in each task the union of all task labels is considered as valid predictions; (U-Z) same as U, but a random sample of 100 labels from the zero-shot task is also added, and the average of target and zero-shot task performance is reported; (M) mix of target task labels, where five random splits of the union of target labels are created and the average performance across splits is reported with 100 test samples per class; (M-Z) same as M, but adding a random sample of 100 labels from the zero-shot task to each split.
§.§ Implementation details
We conduct our experiments on a setup featuring an RTX 3090 GPU and an AMD Ryzen 9 5950X CPU, using PyTorch as our primary framework. We adhere to the CLIP code example, setting the sklearn LogisticRegression regularization strength to 0.316 and the maximum number of iterations to 5K. Our tree probe's node capacity is set at 10K. For efficient and high-speed retrievals from large-scale exemplar sets, we leverage FAISS <cit.>, specifically using the IndexFlatIP class for its precision and performance. Model performances are gauged via Top-1 accuracy, with the officially released ViT-B/32 CLIP checkpoint serving as our memory or zero-shot model. For KNN, we set k=12 for MV-KNN and k=6 for other variants, based on hyperparameter tuning on the validation tasks. Additional details are provided in the supplementary materials.
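The retrieval and probing components described above can be sketched as follows; the arrays are random stand-ins for stored exemplar embeddings and labels, while the FAISS index class and the LogisticRegression settings are the ones named in this section:

import numpy as np
import faiss
from sklearn.linear_model import LogisticRegression

d = 512                                              # CLIP ViT-B/32 embedding size
rng = np.random.default_rng(0)                       # dummy exemplars for illustration
embs = rng.normal(size=(1000, d)).astype(np.float32)
embs /= np.linalg.norm(embs, axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)

index = faiss.IndexFlatIP(d)                         # exact inner-product retrieval
index.add(embs)

probe = LogisticRegression(C=0.316, max_iter=5000)   # settings from the CLIP code example
probe.fit(embs, labels)                              # a linear probe over stored exemplars

sims, ids = index.search(embs[:2], 6)                # retrieve the k = 6 nearest exemplars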
§ EXPERIMENTAL RESULTS
In Sec. <ref>, we compare three forms of exemplar-based memory models.
We then present results of ablation experiments (Sec. <ref>), evaluating the methods of nearest neighbor blending and model fusion. Next, we evaluate with larger zero-shot backbones (Sec. <ref>), and finally present results on long-tailed classification (Sec. <ref>).
§.§ Main Results
We compare the performance of the three forms of exemplar-based memory models on target tasks, zero-shot tasks, mixes, and unions (Figs. <ref>). We compare using only the memory models, in which zero-shot performance suffers, against fusing the zero-shot and memory models' predictions with AIM. We consider the zero-shot model as a baseline. We normalize the performance of each approach by this zero-shot baseline to show the influence of learning, since it is the starting point for all approaches.
We are not aware of existing works that address continual learning in an open-vocabulary image classification setting, but WiSE-FT <cit.> can be easily modified to address this problem. Initializing from a pre-trained model (image encoder and text embeddings for linear weights), such as CLIP, WiSE-FT fine-tunes the image encoder and/or linear model on the target data. Then, the new weights are element-wise weighted-averaged with the original weights, with a blending parameter α (= 0.5 by default). This method was proposed and evaluated as a way to maintain robustness to domain shift.
To extend to continual learning, when new data is received, the linear model(s) for the corresponding task(s) are tuned, and we replace the text embeddings for trained classes with the average (α=0.5) of the original embedding and the trained weights. This is similar to our method and shares the disadvantage of a high time cost for incorporating new data.
From Fig. <ref>(a-c), the AIM-fused memory models outperform the other methods across all stages. In the early stages of the data and class incremental scenarios, memory-only methods struggle due to limited data. However, with more data, these methods improve and eventually surpass the zero-shot baseline. Yet, they perform subpar on zero-shot tasks, as their reliance on memory samples provides limited information for non-exemplar classes, as depicted in Fig. <ref>(d). With the aid of AIM, all three memory prediction methods improve on zero-shot tasks, although one of them still lags slightly.
The results in Fig. <ref>(e) show varied performance levels across tasks and models. In the U, T, and M tasks, memory-only models surpass the CLIP zero-shot model, highlighting their efficacy on learned categories. However, their Z-task performance diminishes significantly, indicating their limitations on unseen categories. With AIM, the models' Z-task performance improves remarkably without impacting their U, T, and M performance. This underscores AIM's role in enhancing the handling of unseen categories while maintaining competence on learned tasks. WiSE-FT excels in T and M but underperforms in Z and the Z-involved tasks, revealing its strengths and weaknesses. The linear probe outperforms the tree probe in U and T, with the other tasks being comparable. However, the tree probe is significantly more efficient, taking only about 40 minutes to pass all stages in the task incremental setting, while the linear probe requires over 8 hours. Thus, when armed with AIM, the tree probe provides the best balance between performance and efficiency, making it our most recommended approach.
§.§ Ablation experiments
For ablation experiments, we mainly use the task incremental learning scenario, since its domain shift is larger than in the other scenarios and it is therefore more challenging. More experimental results, including the selection of k, are provided in the supplementary materials.
Comparison of variants of KNN prediction In Sec. <ref>, we describe three variants of KNN prediction, namely MV-KNN, AVG-KNN and WAVG-KNN. Since the tree probe also aggregates results from the k nearest classifiers, we experiment with the same variants for it. We report accuracy on both target and zero-shot tasks, averaged across all stages. From Fig. <ref>(f), MV-KNN is the best variant for KNN predictions and WAVG-KNN is the best for tree-probe predictions.
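For reference, one plausible reading of the two extreme variants is sketched below (the precise definitions are given in the earlier section): mv_knn majority-votes over the labels of the k retrieved exemplars, while wavg_knn weights their text embeddings by retrieval similarity.

import numpy as np
from collections import Counter

def mv_knn(labels_k):
    # Majority vote over the labels of the k retrieved exemplars.
    return Counter(labels_k).most_common(1)[0][0]

def wavg_knn(sims_k, labels_k, text_embed):
    # Similarity-weighted average of the text embeddings of the retrieved labels.
    w = np.exp(sims_k) / np.exp(sims_k).sum()
    v = sum(wi * text_embed[l] for wi, l in zip(w, labels_k))
    return v / np.linalg.norm(v)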
Comparison of variants of fuse operations As delineated in Sec. <ref>, we propose four fusion operations: Avg-Prob, Avg-emb, AIM-Prob, and AIM-Emb. Experimental results, averaged across stages and depicted in Fig. <ref>(g), validate our choice of AIM-Emb as the optimal fusion operation. This selection is based on the superior performance of AIM variants over Avg ones, particularly noticeable in both target and zero-shot tasks. Despite AIM-Emb showing marginally lower target performance compared to AIM-Prob, it confers distinct advantages in zero-shot tasks. We thus employ AIM for blending predictions from the zero-shot model and the memory model, averaging the embeddings to achieve overall enhanced performance.
§.§ Scaling to better zero-shot models
We default to using the CLIP ViT-B/32 model as our zero-shot model, but our approach is adaptable to any open-vocabulary foundation model. It can be effortlessly upgraded to superior models by simply swapping the zero-shot model. To demonstrate this, we experiment with the more advanced pretrained CLIP model ViT-L/14@336px, which has approximately 4x the capacity of ViT-B/32, and ViT-H/14[Checkpoint from <https://github.com/mlfoundations/open_clip>], which is double the size of ViT-L/14@336px. As zero-shot performance on ImageNet improves with larger capacities, we present our findings in Tab. <ref>. By shifting to more advanced backbones, both the zero-shot baseline and our method improve, and our method consistently outperforms the zero-shot baseline, demonstrating how our approach benefits from the enhanced capacity of zero-shot models.
§.§ Evaluation on long-tailed classification
As shown in RAC <cit.>, their framework has superior performance on long-tailed classification, demonstrating the retrieval procedure provides benefits to tail classes. Likewise, in addition to the continual learning settings, we explore the possibilities of extending our method for long-tailed classification.
Specifically, we note the zero-shot model's proficiency in distinguishing tail classes, while the memory model exhibits superior accuracy for head classes. Following this, we categorize classes into tail (𝐂_tail) and head (𝐂_head) based on their sample sizes, designating the bottom two-thirds as tail. Then, in line with the AIM approach, we amalgamate the zero-shot (e_I) and memory (𝐯_mem) embeddings, using the probability of a sample's affiliation to either the head or tail class, to produce the output embedding: 𝐯_out = p(y∈𝐂_head|I) 𝐯_mem + p(y∈𝐂_tail|I) e_I.
Here, p(y∈𝐂_head|I) ∝∑_i ∈𝐂_head p_z(y=y_i|x) and p(y∈𝐂_tail|I) = 1 - p(y∈𝐂_head|I).
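This head/tail weighting is a one-line change to the AIM fusion; a minimal sketch, with head_idx assumed to index the head classes:

import numpy as np

def aim_longtail(e_img, v_mem, p_z, head_idx):
    # head_idx: indices of the head classes (top third of classes by sample count)
    p_head = p_z[head_idx].sum()                      # p(y in C_head | I)
    v_out = p_head * v_mem + (1.0 - p_head) * e_img   # memory for head, zero-shot for tail
    return v_out / np.linalg.norm(v_out)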
Accuracy on the Places365LT dataset <cit.> is presented in Tab. <ref>. These results demonstrate that all three memory prediction approaches, when enhanced with AIM, outperform their respective versions without AIM.
Compared to recent works specifically designed for long-tailed recognition, our method performs similarly to PaCo <cit.>, while RAC achieves moderately higher accuracy.
§ CONCLUSION AND LIMITATIONS
Our work has several limitations. First, we do not consider memory constraints. Since each exemplar requires storing up to 4KB for the image and text encodings (using the base CLIP model without any compression or limiting precision), roughly one million exemplars can be stored per 4GB of memory.
Second, we do not consider how to improve the consolidated zero-shot model using the exemplars.
Finally, we do not consider structured prediction problems like semantic segmentation or visual question answering, in which the notion of exemplar may need to be redefined.
In this work, we present an efficient and performant “tree probe” method for open-vocabulary continual learning. We also devise a strategy to combine zero-shot and memory models, which is also shown useful for long-tailed classification. Our flexible system, thoroughly tested on challenging scenarios, readily expands capabilities while leveraging advanced zero-shot models with minimal effort. We hope this work represents a step towards flexible, efficient, and powerful strategies in open-vocabulary continual learning.
unsrtnat
§ SUPPLEMENTARY MATERIALS
§.§ Algorithmic descriptions of the tree probe
In Algorithm <ref> and Algorithm <ref>, we use algorithmic notation to describe the training and inference processes of the tree probe. Definitions of the involved functions are provided below:
* NearestLeaf(x_i, T): Returns the nearest leaf node to the data point x_i in tree T.
* Capacity(l): Returns the current number of data points in leaf node l.
* InsertData(x_i, l): Inserts data point x_i into leaf node l and returns the updated node.
* SplitNode(l, x_i): Splits leaf node l into two child nodes when it reaches capacity, distributes data points using KMeans clustering, and adds new data point x_i to the appropriate child node.
* TrainClassifier(l): Trains a linear classifier on the data points in leaf node l.
* FindNearestNodes(x_j, T, k): Finds the k leaf nodes in tree T nearest to the test sample x_j.
* Classify(x_j, l): Classifies test sample x_j using the linear classifier in leaf node l, returning the output probabilities for each class.
* ComputeEmbedding(x_j, ℒ): Computes the memory embedding for test sample x_j by applying a similarity-weighted average of the text embeddings of the most likely class labels from the classifiers in the set of nodes ℒ.
Note that we re-purpose several denotations from the main paper in Algorithm <ref> and Algorithm <ref> for better clarity.
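To make the primitives above concrete, the following is a rough Python sketch of the tree probe's insert/split/train loop; the node capacity and LogisticRegression settings follow the implementation details above, while the class and function names are illustrative (inference then finds the k nearest leaves and aggregates their classifiers as described earlier):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class Node:
    def __init__(self, centroid=None):
        self.centroid = centroid          # used to route insertions and queries
        self.children = []                # empty list => leaf node
        self.X, self.y = [], []           # exemplar embeddings and labels held by a leaf
        self.clf = None                   # linear probe trained on this leaf

def nearest_leaf(x, node):
    # NearestLeaf: walk down the tree, following the closest child centroid.
    while node.children:
        node = min(node.children, key=lambda c: np.linalg.norm(c.centroid - x))
    return node

def insert(root, x, y, capacity=10000):
    # InsertData + SplitNode + TrainClassifier for a single exemplar (x, y).
    leaf = nearest_leaf(x, root)
    leaf.X.append(x); leaf.y.append(y)
    if len(leaf.X) > capacity:            # SplitNode: 2-means split of an over-full leaf
        km = KMeans(n_clusters=2, n_init=10).fit(np.stack(leaf.X))
        for c in range(2):
            child = Node(centroid=km.cluster_centers_[c])
            idx = np.where(km.labels_ == c)[0]
            child.X = [leaf.X[i] for i in idx]
            child.y = [leaf.y[i] for i in idx]
            leaf.children.append(child)
        leaf.X, leaf.y = [], []
    for node in (leaf.children or [leaf]):
        if len(set(node.y)) > 1:          # TrainClassifier: refit only the touched leaves
            node.clf = LogisticRegression(C=0.316, max_iter=5000).fit(np.stack(node.X), node.y)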
§.§ Additional implementation details
More main comparison details
When evaluating different methods in data and class incremental learning scenarios, we ensure fairness by randomly selecting an identical portion of data/class for all methods, achieved by setting the same seed. The performance of each stage is averaged across all target tasks. In task-incremental learning, each stage is embodied by a distinct task, with the assumption that training data accumulates across all stages in all scenarios, similar to <cit.>. This assumption is based on the fact that real-world applications are often more limited by computational and time budgets than by storage. Furthermore, we enhance storage efficiency by saving samples as condensed feature vectors, a significant improvement over some earlier works. The results for KNN
(Mem. Only) in Fig. 2 in the main paper
are referred to as MAVG-KNN.
More scaling zero-shot model details
For the experiments in Sec. <ref>, we follow the ablation setup and use the task incremental learning setting to measure the average target and zero-shot accuracies of the different backbones across all stages.
More flexible inference details
* T performance averages results over all target tasks, considering only task-specific labels as potential outputs.
* Z is evaluated similarly to T but operated on zero-shot tasks.
* For U, an union of all target tasks is created for evaluation, with classification options consisting of all task labels combined.
* U-Z averages the performance over U and Z, with classification options including a union of all task labels and 100 zero-shot labels.
* M evaluates performance over 5 splits of the union task and reports the average score.
* Similar to U-Z, M-Z adds 100 random zero-shot task labels to each split, expanding classification options to include each split's union and an additional 100 zero-shot labels.
§.§ Intuitive demonstration of evaluation scenarios
§.§ Descriptions of tasks
We perform experiments on a variety of commonly-used visual datasets to demonstrate the generalization capabilities of our method. These datasets encompass a broad range of image categories and application scenarios, including both fine-grained and generalized datasets. We present a brief introduction to all used tasks in this paper in the following.
§.§.§ General tasks
ImageNet
ImageNet <cit.> contains 1,281,167 training images, 50,000 validation images and 100,000 test images. The categories represent a wide variety of objects, animals, scenes, and even abstract concepts. This dataset has served as a fundamental dataset to evaluate performances of classification models, or as a pretraining dataset.
CIFAR100
The CIFAR100 dataset <cit.> consists of object images and is a subset of the 80 million tiny images dataset. It contains 60,000 32×32 color images from 100 object categories, with 600 images per category. The dataset has 100 fine-grained classes, grouped into 20 coarse-grained classes.
CIFAR10
The CIFAR10 dataset <cit.> is a subset of the 80 million tiny images dataset, just like CIFAR100. However, it is significantly smaller, containing 60,000 32×32 color images from 10 distinct categories, with 6,000 images per category. The categories include objects such as cars, birds, cats, and trucks. The dataset has a balanced distribution of images across categories and is frequently used as a benchmark in machine learning research for image classification tasks.
Caltech101
The Caltech101 dataset <cit.> is a widely used benchmark for object recognition. It features images of objects from 101 categories, ranging from 40 to 800 images per category. Caltech-101 is a more general dataset compared to FGVCAircraft, StanfordCars, and Flowers102.
SUN397
The SUN397 dataset <cit.> consists of scene images, containing 108,754 images across 397 scene categories, with each category having between 100 and 500 images. This dataset is commonly used for scene understanding tasks.
§.§.§ Fine-grained tasks
FGVCAircraft
The FGVCAircraft dataset <cit.> serves as a benchmark for fine-grained visual categorization of aircraft. It contains 10,200 images from 102 distinct categories. Each category includes approximately 100 images, annotated with the aircraft model, variant, and manufacturer.
DTD
The Describable Textures Dataset (DTD) <cit.> consists of 5,640 images across 47 texture categories, with each category featuring 120 real-world texture images such as fabrics, rocks, and surfaces. The dataset poses a challenge for texture classification due to subtle differences between textures within the same category and large variations in texture appearance caused by scale, orientation, and lighting.
StanfordCars
The StanfordCars dataset <cit.> is a benchmark dataset containing 16,185 images from 196 different car classes, divided into a 50-50 training and testing split. The classes correspond to specific car makes, models, and years, such as the 2012 Tesla Model S or 2012 BMW M3 coupe.
Flowers102
The 102 Category Flower Dataset <cit.> is a compilation of flower images. It includes 8,189 images across 102 flower categories, with each category containing between 40 and 258 images. The dataset's images vary in size and aspect ratio, captured using different cameras, lighting conditions, and backgrounds.
OxfordIIITPets
The OxfordIIITPets dataset <cit.> is a collection of pet images, featuring 7,349 images from 37 different cat and dog breeds. Each breed has between 100 and 200 images. The dataset is challenging because the appearance of the same breed can vary significantly, and different breeds may have similar-looking features.
SVHN
The Street View House Numbers (SVHN) <cit.> dataset is a collection of house number images. It includes 732,000 images of house numbers in various settings, such as street views, storefronts, and building facades. The dataset is frequently used for digit recognition tasks.
EuroSAT
The EuroSAT dataset <cit.> is a remote sensing image dataset comprising Sentinel-2 satellite data. It contains 27,000 images that cover 13 spectral bands and consist of 10 different land use and land cover categories, including forests, urban areas, and water bodies. This dataset is commonly employed for remote sensing and land cover classification tasks.
Resisc45
The Remote Sensing Image Scene Classification (Resisc45) dataset <cit.> is a comprehensive dataset for remote sensing image scene classification. It comprises 31,500 images, with 700 images from 45 different scene categories. The categories encompass a broad range of natural and man-made scenes such as airports, beaches, forests, and residential areas. The images are collected from Google Earth and are in RGB format with a size of 256×256 pixels. This dataset poses a challenge due to large within-class variations and between-class similarities.
UCF101
The UCF101 dataset <cit.> is a commonly used benchmark for action recognition. It consists of 13,320 videos from 101 action categories, with each category containing at least 100 videos. The actions include a wide range of human activities such as basketball shooting, horse riding, and juggling. The dataset is unique in its focus on complex, naturalistic action sequences, with videos varying in length from a few seconds to a minute.
§.§.§ Long-tailed task
Places365LT
Places365LT <cit.> a synthetic long-tail derivative of Places2 dataset <cit.>. The image resolution is 256×256. It contains 365 scene classes with at least 5 samples each. The classes are not uniformly distributed, forming a long-tailed distribution. It contains some label noises, making classification even harder on this dataset.
§.§ Prompt templates for tasks
CLIP <cit.> suggests utilizing a sentence template (e.g., "A photo of a {label}.") as input to the text encoder instead of a plain text label, since its training data consists primarily of full sentences describing images. Consistent with the focus of this paper, we employ a simple prompt template for each task. Most of these templates are based on CLIP's recommendations[<https://github.com/openai/CLIP/blob/main/data/prompts.md>] and are summarized in Tab. <ref>.
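For illustration, the zero-shot classification weights for a task are obtained by pushing the templated label names through the text encoder; a sketch using the openai/CLIP package, with placeholder label names:

import torch
import clip

model, preprocess = clip.load("ViT-B/32")
labels = ["forest", "river", "residential area"]           # placeholder class names
prompts = [f"A photo of a {label}." for label in labels]   # task-specific template
with torch.no_grad():
    w = model.encode_text(clip.tokenize(prompts))
    w = w / w.norm(dim=-1, keepdim=True)                   # one zero-shot weight per class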
§.§ Zero-shot performances on different tasks
Tab. <ref> shows the zero-shot performance of our implementation on the different tasks. The main difference from the officially reported zero-shot performance comes from the prompt-ensembling trick mentioned in CLIP <cit.>.
§.§ Additional ablation experiments
k selection
The KNN and tree-probe prediction approaches rely on retrieving the k nearest exemplars or classifiers to make predictions, so the choice of k influences performance. To select the best k, we experiment with the memory-only approaches, varying k in steps of 3. As indicated in Fig. <ref>, MV-KNN achieves its best result at k=12 while MAVG-KNN performs best at k=6. The optimal k is larger for MV-KNN since a small k may exclude the correct answer from the retrievals, whereas a large k includes more mismatches among the nearest neighbors, especially when class sample counts are imbalanced. From the plot, we can clearly see that MAVG-KNN is consistently better than AVG-KNN and MV-KNN across different values of k, making it our default option when using KNN as the prediction approach in the rapid learning system.
Effect of ensemble classifiers in inference
Referencing Sec. <ref>,
we observe that ensembling predictions from the multiple classifiers associated with the k retrievals enhances performance. Fig. <ref> presents these results under the task incremental learning setting for the eight target tasks. Our final model outperforms its variant without the ensemble classification function in both target and zero-shot performance, confirming the efficacy of this technique.
§.§ Detailed time analysis
Tab. <ref> compares the time complexity and actual running times of the three prediction approaches in the rapid learning system: KNN, the linear probe, and the tree probe. The actual times are measured in the task incremental learning scenario, with k set to 6.
KNN has constant time complexity for both training and inference, with actual times of 9.8 and 416.1 seconds respectively. The linear probe has linear training time complexity and constant inference time complexity; its training time is considerably long at 30971.4 seconds (∼8.6 hours), and its inference time is 449.1 seconds. Our most recommended algorithm, the tree probe, has logarithmic training time complexity and inference time complexity linear in the number of retrieved classifiers (k). In practice, it exhibits a significantly shorter training time than the linear probe at 2082.0 seconds (∼0.6 hour), with a slight increase in inference time. The table illustrates that the tree probe strikes a balance between accuracy and efficiency, being more accurate than KNN and more efficient than the linear probe. Note that the numbers may vary depending on software and hardware; they were collected on the same PC used to run all experiments.
|
http://arxiv.org/abs/2307.02610v1
|
20230705190843
|
The Importance of Knowing the Arrival Order in Combinatorial Bayesian Settings
|
[
"Tomer Ezra",
"Tamar Garbuz"
] |
cs.GT
|
[
"cs.GT",
"cs.DS"
] |
The Importance of Knowing the Arrival Order in Combinatorial Bayesian Settings
Tomer Ezra and Tamar Garbuz
========================================================
We study the measure of order-competitive ratio introduced by Ezra et al. <cit.> for online algorithms in Bayesian combinatorial settings.
In our setting, a decision-maker observes a sequence of elements that are associated with stochastic rewards that are drawn from known priors, but revealed one by one in an online fashion.
The decision-maker needs to decide upon the arrival of each element whether to select it or discard it (according to some feasibility constraint), and receives the associated rewards of the selected elements.
The order-competitive ratio is defined as the worst-case ratio (over all distribution sequences) between the performance of the best order-unaware and order-aware algorithms, and quantifies the loss incurred due to the lack of knowledge of the arrival order.
Ezra et al. <cit.> showed how to design algorithms that achieve better approximations with respect to the new benchmark (order-competitive ratio) in the single-choice setting, which raises the natural question of whether the same can be achieved in combinatorial settings. In particular, whether it is possible to achieve a constant approximation with respect to the best online algorithm for downward-closed feasibility constraints, whether ω(1/n)-approximation is achievable for general (non-downward-closed) feasibility constraints, or whether a convergence rate to 1 of o(1/√(k)) is achievable for the multi-unit setting.
We show, by devising novel constructions that may be of independent interest, that for all three scenarios, the asymptotic lower bounds with respect to the old benchmark also hold with respect to the new benchmark.
§ INTRODUCTION
We revisit the prophet inequality problem in combinatorial settings.
In the prophet inequality setting <cit.> there is a sequence of boxes, each containing a stochastic reward drawn from a known distribution. The rewards are revealed one by one to a decision-maker, who needs to decide whether to take the current reward or continue to the next box. The decision-maker makes each decision immediately and irrevocably, and her goal is to maximize the expected value of the selected reward.
The most common performance measure for the analysis of the decision-maker policy is the competitive-ratio, which is the ratio between the expectation of the selected reward and the expected maximum reward. That is, the decision-maker is evaluated by comparison to a “prophet” who can see into the future and select the maximal reward.
This framework has been extended to combinatorial settings, where the decision-maker is allowed to select a set of boxes (instead of only one) under some predefined feasibility constraints, such as multi-unit <cit.>, matroids <cit.>, matching <cit.>, and downward-closed (or even general) feasibility constraints <cit.>.
A recent line of work studied the (combinatorial) prophet setting when instead of comparing to the best offline optimum (or the “prophet”), they compare against the best online algorithm <cit.>, and showed how to achieve tighter approximations compared to the best online algorithms.
Recently, Ezra et al. <cit.> suggested the benchmark termed “order-competitive ratio” defined as the worst-case ratio (over all distribution sequences) between the expectations of the best order-unaware algorithm and the best order-aware algorithm. Thus, the order-competitive ratio quantifies the loss that is incurred to the algorithm due to an unknown arrival order.
Ezra et al. <cit.> showed that for the single-choice prophet inequality setting, it is possible to achieve 1/ϕ-approximation with respect to the new benchmark (where ϕ is the golden ratio).
In particular, they showed a separation between what adaptive and static algorithms can achieve with respect to the new benchmark, while with respect to the optimum offline, there is no such separation as a static threshold can achieve the tight approximation of 1/2.
The question that motivates this paper is whether one can achieve improved approximations for the new benchmark in combinatorial settings.
In particular, whether it is possible to achieve a constant approximation with respect to the best online algorithm for downward-closed feasibility constraints, whether ω(1/n)-approximation is achievable for general (non-downward-closed) feasibility constraints, or whether a convergence rate to 1 of o(1/√(k)) is achievable for the multi-unit setting.
§.§ Our Contribution, Techniques, and Challenges
We study this question in three natural and generic combinatorial structures: k-uniform matroid (also known as multi-unit), downward-closed, and arbitrary (not downward-closed) feasibility constraints.
The first scenario we consider is downward-closed feasibility constraints. We first revisit the example in <cit.> that is based on the upper bound of <cit.> for a different setting, that shows that no algorithm can achieve an approximation of ω(loglog(n)/log(n)):
[<cit.>]
Consider a set of n=2^2^k elements, that are partitioned into 2^2^k-k parts, each of size 2^k. The reward of each element is 1 with probability 2^-k and 0 otherwise. The feasibility constraint is such that the decision-maker is allowed to select elements from at most one part of the partition. The elements arrive in an arbitrary order. It is easy to verify that the expected value of the prophet is Ω(2^k), since it is a maximum of 2^2^k-k random variables that are distributed according to Bin(2^k,2^-k).
On the other hand, no online algorithm can have an expected reward of more than 2, since once the algorithm decides to select an element (with a value at most 1), then the expectation of the sum of the remaining feasible elements is bounded by 1.
As can be observed in Example <ref>, the instance is constructed in a way that no online algorithm (order aware or unaware) can achieve an expected reward of more than 2, while achieving an expected reward of 1 is trivial.
Thus, it fails to show a gap between what order-aware and order-unaware algorithms can achieve.
This leads us to our first result.
*Result A (Theorem <ref>): No order-unaware algorithm can achieve an approximation of ω(loglog(n)/log(n)) with respect to the best order-aware online algorithm.
To show Result A, we need to develop an entirely different construction than the one used in <cit.>. Their construction is such that once the online algorithm selects an arbitrary element, it eliminates all the flexibility that the algorithm had in choosing elements due to the feasibility constraint.
All attempts that are only based on the construction of the feasibility constraint, are destined to fail since the feasibility constraint will influence both the order-aware and order-unaware algorithms in the same way.
Thus, we construct a pair of a feasibility constraint and a distribution over arrival orders. Our elements are partitioned into k layers, and within each layer, the elements are symmetric (with respect to the feasibility constraint). An algorithm needs to select at most one element of each layer. The difference between the elements within the layers, is the role with respect to the arrival order, which draws half of them to be “good”, and half of them to be “bad”. “Good” elements, are such that the best order-aware algorithm does not lose a lot by choosing them, and “bad” elements, are such that the best order-aware algorithm does lose a lot by choosing them.
An order-aware algorithm can distinguish between “good” and “bad” elements and can always choose the “good” ones, while an order-unaware algorithm cannot distinguish between them, therefore cannot do better than guessing and thus it will guess a “bad” one after a constant number of layers in expectation.
The second scenario that we consider is of arbitrary feasibility constraints. For this problem with respect to the best offline algorithm as a benchmark, <cit.> showed that no online algorithm can achieve a competitive-ratio of ω(1/n). Achieving a competitive-ratio of 1/n can be done trivially by selecting the feasible set with the maximal expectation.
We next revisit the example in <cit.> that shows that no online algorithm can achieve an approximation of
ω(1/n).
[<cit.>]
Consider an instance with n=2k elements, where the collection of feasible sets is {{i,i+k}| i ∈ [k] }. The elements arrive according to the order 1,…,n, and the value of each element in [k] is deterministically 0, while the value of each element in {k+1,…,n} is 1 with probability 1/n, and 0 otherwise.
The prophet receives a value of 1 if one of the elements of the second type has a non-zero value, which happens with a constant probability.
Every online algorithm must select exactly one element among the elements of the first type, which restricts the algorithm to select a specific element of the second type, therefore every online algorithm has an expected value of 1/n.
As can be observed in Example <ref>, the instance is constructed with a fixed order, and the optimal algorithm for this feasibility constraint (even for every arrival order), is to discard all zero-value elements and select all elements with a value of 1 as long as there is a way to complete the chosen set to a feasible set. This algorithm is an order-unaware algorithm, and therefore this construction does not induce a separation between what order-unaware and order-aware algorithms can achieve. This leads us to our second result.
*Result B (Theorem <ref>): No order-unaware algorithm can achieve an approximation of 1+Ω(1)/n with respect to the best order-aware online algorithm.
Our result improves upon the result in <cit.> in two dimensions. First, our result is with respect to the tighter benchmark of the best online algorithm rather than the best offline algorithm. Second, our upper bound matches the lower bound, up to low-order terms (and not just up to a constant).
To show Result B, we create three types of elements: The first type of elements is of elements with a value of 1 with a small probability. Almost all elements are of this type, and the utility of the instance comes from these elements. The feasibility constraint requires to select exactly one of these elements. The elements of the other two types have a deterministic value of 0, and their role is to limit the ability of the algorithm to select elements of the first type.
The feasibility constraint is such that for each subset of elements of type 2, and each element of type 1, there is exactly one subset of elements of type 3 such that their union is feasible. The order of arrival is such that in Phase 1, the elements of type 2 arrive, in Phase 2, most of the elements of type 3, in Phase 3, the elements of type 1 arrive, and in Phase 4, the remaining (few) elements of type 3 arrive.
For exactly one subset X of the elements of type 2, it holds that: for each element e of type 1, there is a subset X_e of elements of type 3 that arrive in Phase 4, such that X∪{e}∪ X_e is a feasible set. For all other choices of X, there are at most a few feasible elements of type 3 that arrive at Phase 4, which restricts the algorithm to choose only among a few elements of type 1.
The only way to “catch” the value of all the elements of type 1, is to correctly guess the unique good subset X of type 2 with this special property. An order-aware algorithm can always guess it correctly as this information can be derived from the arrival order (since it knows the partition of elements of type 3 between Phase 2 and Phase 4), while an order-unaware cannot guess the correct subset with high enough probability, and therefore it loses a factor of 1/n in the approximation.
The third scenario that we consider is of k-capacity feasibility constraints. For this problem with respect to the best offline algorithm as a benchmark, Hajiaghayi et al. <cit.> showed that no online algorithm can achieve a competitive-ratio of 1-o(1/√(k)). Achieving a competitive-ratio of 1-Θ(1/√(k)) with respect to the best-offline is achieved by Alaei <cit.>.
Our last result shows, that one cannot achieve an order-competitive ratio that converges to 1 in a faster rate (up to a constant).
*Result C (Theorem <ref>): No order-unaware algorithm can achieve an approximation of 1-o(1/√(k)) with respect to the best order-aware online algorithm.
To show Theorem C, we construct an instance with three types of elements. The first type is with a deterministic low value, the second type is with a deterministic mid-value, and the third type is randomized, with a probability half of being high, and a probability half of being zero. The order of arrival is such that all the type 2 elements arrive first, and then either all elements with type 1 arrive before all elements of type 3 which is considered the “bad” order, or vice versa which is the “good” order. An algorithm that knows whether it is a good order or a bad order, can adapt the number of elements of type 2 to choose in an optimal way, while an algorithm that does not know the order needs to commit to selecting elements of type 2 before any information regarding the order is revealed. Our analysis then follows by balancing the low, mid, and high values in a way that an order-unaware algorithm that commits to selecting a certain amount of elements of type 2, will be far from the optimal order-aware algorithm for one of the two arrival orders.
§.§ Further Related Work
*Comparing to the best online.
Our work is largely related to a line of research that examines alternatives to the best-offline benchmark, in particular comparing an algorithm's performance to that of the best online algorithm
<cit.>.
For example, Niazadeh et al. <cit.> showed that the original tight prophet inequality bounds, which compare single pricing with the optimum offline, remain tight even when the optimum online is used as the benchmark (both for identical and non-identical distributions).
Another example is that Papademitriou et al. <cit.> studied the online stochastic maximum-weight matching problem under vertex arrivals, and presented a polynomial-time algorithm which approximates the optimal online algorithm within a factor of 0.51, which was later improved by Saberi and Wajc <cit.> to 0.526, and to 1-1/e by Braverman et al. <cit.>.
Kessel et al. <cit.> studied a continuous and infinite time horizon counterpart to the classic prophet inequality, term the stationary prophet inequality problem. They showed how to design pricing-based policies which achieve a tight 1/2-approximation to the optimal offline policy, and a better than (1-1/e)-approximation of the optimal online policy.
*Prophet in combinatorial settings.
Another line of work, initiated by Kennedy <cit.>, and Kertz<cit.>,
extends the single-choice optimal stopping problem to multiple-choice settings. Later work extended it to additional combinatorial settings, including multi-unit <cit.> matroids <cit.>,
polymatroids <cit.>, matching <cit.>, combinatorial auctions <cit.>, and downward-closed (and beyond) feasibility constrains <cit.>.
*Different arrival models.
A related line of work studied different assumptions on the arrival order besides the adversarial order <cit.>.
Examples for such assumptions are random arrival order (also known as the prophet secretary) <cit.>, and free-order settings, where the algorithm may dictate the arrival order <cit.>. Another recent study related to the arrival order has shown that for any arrival order π, the better of π and the reverse order of π achieves a competitive-ratio of at least the inverse of the golden ratio <cit.>.
§ MODEL
An instance of our setting is defined by a triplet ℐ=(ℰ,𝒟,ℱ), where ℰ is the ground set of elements, 𝒟=(𝒟_e)_e∈ℰ associates each element e∈ℰ with a distribution 𝒟_e, and ℱ⊆ 2^ℰ is a feasibility constraint over the set of elements (where ℱ≠∅).
The elements arrive one by one. Upon the arrival of element e, its identity is revealed, and a value v_e is drawn independently from the underlying distribution 𝒟_e.
We call an instance binary if for every element e∈ℰ, the support of 𝒟_e is {0,1}.
A decision-maker, who observes the sequence of elements and their values, needs to decide upon the arrival of each element whether to select it or not, subject to the feasibility constraint ℱ, which asserts that the set chosen at the end of the process (after all elements arrive) must belong to ℱ. Another interpretation of the feasibility constraint is that the decision-maker must select (respectively, discard) element e if all feasible sets that agree with all decisions made before the arrival of element e contain (respectively, do not contain) element e. A feasibility constraint is called downward-closed if for every set S∈ℱ and every subset T⊆ S, it holds that T∈ℱ. For downward-closed feasibility constraints, discarding elements is always feasible. The decision-maker's utility is the sum of the values of the selected elements.
We say that a decision-maker (or algorithm) is order-unaware if she does not know the arrival order of the elements in advance, and needs to make decisions with uncertainty regarding the order of the future arriving elements.
We say that a decision-maker (or algorithm) is order-aware, if she knows the order of arrival of the elements in advance, and can base her decisions on this information.
Given an instance ℐ, an arrival order π of the elements, and an algorithm ALG_ℐ (which may be order-unaware or order-aware), we denote the expected utility of ALG_ℐ under arrival order π by ALG_ℐ(π).
Given an instance ℐ and an arrival order π, we denote the order-aware algorithm with the maximal expected utility by OPT_ℐ,π, i.e., OPT_ℐ,π ∈ argmax_ALG_ℐ ALG_ℐ(π).
We want to quantify the importance of knowing the order in advance, and to do so, we use the measure of order-competitive ratio proposed by <cit.> for the case of choosing a single element (i.e., ℱ = {S⊆ℰ | |S|≤ 1}).
Given an instance ℐ, the order-competitive ratio of an order-unaware algorithm ALG_ℐ, denoted by ρ(ℐ, ALG_ℐ), is
ρ(ℐ, ALG_ℐ) ≜ min_π ALG_ℐ(π)/OPT_ℐ,π(π).
We use [j] to denote the set {1,…,j}. Given two partial orders π^1 = (e^1_1,…,e^1_k_1) and π^2=(e^2_1,…,e^2_k_2) over two disjoint subsets of elements ℰ_1,ℰ_2 ⊆ ℰ, we define the concatenated order π^1 π^2 as
(e^1_1,…,e^1_k_1,e^2_1,…,e^2_k_2).
In this paper, we use the following forms of Chernoff bound:
For a series of n independent Bernoulli random variables X_1,…,X_n, and for X=∑_i=1^n X_i it holds:
* For all 0 ≤δ≤ 1, Pr[|X-E[X]| ≥δ·E[X]] ≤ 2 e^{-δ^2 · E[X]/3}.
* For all δ≥ 0, Pr[X≥ (1+δ)·E[X]] ≤ e^{-δ^2 · E[X]/(2+δ)}.
Lastly, for an instance ℐ=(ℰ,𝒟,ℱ) and an algorithm ALG_ℐ, we denote by ξ(ℐ,ALG_ℐ) the traditional competitive ratio, which is
ξ(ℐ, ALG_ℐ) ≜ min_π ALG_ℐ(π)/E[max_S∈ℱ ∑_e∈S v_e].
It is easy to observe that for every instance ℐ and every algorithm ALG_ℐ,
ξ(ℐ, ALG_ℐ) ≤ ρ(ℐ, ALG_ℐ),
thus, every lower bound on the competitive-ratio also applies to the order-competitive ratio (but not vice versa), and any upper bound on the order-competitive ratio also applies to the competitive-ratio (but not vice versa).
§ DOWNWARD-CLOSED FEASIBILITY CONSTRAINTS
In this section, we show an upper bound on the order-competitive ratio for the family of downward-closed feasibility constraints.
This upper bound also holds with respect to binary instances and matches the best-known upper bound on the competitive-ratio. The current best-known lower bound for the competitive-ratio for downward-closed feasibility constraints of O(1/log^2(n)) was proved by <cit.>, and closing this gap is an open question.
There exists a constant ξ>0 such that for every n> 2 there is a (binary) instance ℐ=(ℰ,𝒟,ℱ) with n=|ℰ| and a downward-closed feasibility constraint ℱ, in which for every order-unaware algorithm (deterministic or randomized) ALG_ℐ, it holds that
ρ(ℐ, ALG_ℐ) ≤ξ·loglog n/log n.
We assume that n=∑_i=1^k k^i for some even k. (Otherwise, we can reduce to the largest n'≤ n that is of this form, by having n-n' redundant elements.) Notice that
k ∈Θ(log n/loglog n),
since for k=log n/2loglog n, it holds that ∑_i=1^k k^i ≤ k^k+1≤ n, while for k=2log n/loglog n, it holds that ∑_i=1^k k^i ≥ k^k≥ n for large enough n.
For every string s of length between 1 and k where each character is in [k], we define an element e_s. We denote by s_j for j ∈ [|s|] the j-th character of the string s, moreover, we denote by s_[j] the prefix of s of the first j characters.
The set of elements is defined to be {e_s | s ∈⋃_i=1^k [k]^i}. Given a string s and a character j (respectively, another string s'), we denote by sj (respectively, ss') the string-concatenation of j (respectively, s') at the end of string s.
We say that an element e_sj for a string s and j∈ [k] is a child of element e_s, and that e_s is the parent of e_sj. (Note that an element can have only one parent, but may have multiple children.)
The values of all elements are drawn i.i.d. from the distribution in which v=1 with probability 1/k, and v=0 otherwise; let 𝒟 denote the corresponding vector of distributions (𝒟_e)_e∈ℰ. The feasibility constraint is ℱ ≜ {S⊆ℰ | for all e_s_1,e_s_2∈ S with |s_1| ≤ |s_2|, the string s_1 is a prefix of s_2} (in other words, only subsets of a single path from the root to one of the leaves are feasible).
The instance is then ℐ=(ℰ,𝒟,ℱ).
It is sufficient to show that for some constant c>0, there is a distribution D over the arrival orders under which the expected utility of every order-unaware algorithm ALG_ℐ is at most c/k times the expected utility of the optimal order-aware algorithm.
I.e.,
∃ c>0 ∀ ALG_ℐ: E_π∼D[ALG_ℐ(π)] ≤ (c/k)·E_π∼D[OPT_ℐ,π(π)].
Equation (<ref>) is sufficient since it shows that for every algorithm ALG_ℐ there exists an order π^* (in the support of D) in which ALG_ℐ(π^*) ≤ (c/k)·OPT_ℐ,π^*(π^*), which together with Equation (<ref>) concludes the proof.
We now define the distribution D over the arrival orders.
We first draw independently for every string s of size between 0 and k-3, a random subset of [k] of size k/2, which we will denote by r_s.
Then, the elements arrive in an arrival order defined by the following recursive formulas. We first define for every string s of size between 0 and k-1 and a parameter i ∈ [k-|s|]:
(s,i) (ss')_s'∈ [k]^i,
and
(s) (s,k-|s|) …(s,1).
We also denote given the random realizations
{r_s}_s, for every string s of size between 0 to k-3 the arrival order
(s) (s1,…,sk) s1(s1)…sj(sj)…sk(sk),
and for s such that |s|=k-2,
(s) (s1,…,sk)(s1)…(sk).
The arrival order is then (ϵ).
For every element e_s, we say that e_s is good, if for every j∈ [min(k-2,|s|)], it holds that s_j ∈ r_s_[j-1], and bad otherwise.
The order of arrival is illustrated in Figure <ref>
We first bound from below the RHS of Equation (<ref>).
For c'=√(e)/(√(e)-1), it holds that
E_π∼D[OPT_ℐ,π(π)] ≥ k/c'.
We prove this claim by showing that for every order π in the support of D, it holds that OPT_ℐ,π(π) ≥ k/c'.
Consider an order-aware algorithm (not necessarily _,π) that selects an element e_s if (1) e_s is feasible, (2) e_s is good, and (3) v_e_s=1 or e_s is the last good element to arrive in the set {s_[|s|-1]j | j ∈ [k]}.
By the description of the algorithm we know we will only select good elements, and we will select exactly one element from each layer (elements of strings with the same length). The algorithm receives a utility of 1 from layer j∈ [k] if one of the good elements that are the children of the element chosen from layer j-1, has a value of 1. (For elements of layer 1, it is sufficient that one of the good elements, has a value of 1.)
Thus, the expected utility of the algorithm is at least the number of layers, times the probability that one of the (at least) k/2 elements has a value of 1. Therefore
OPT_ℐ,π(π) ≥ k· (1-(1-1/k)^{k/2}) ≥ k· (1-1/√(e)) = k/c',
which concludes the proof of the claim.
We next bound from above the LHS of Equation (<ref>).
For every (deterministic or randomized) order-unaware algorithm ALG_ℐ, it holds that
E_π∼D[ALG_ℐ(π)] ≤ 5.
We analyze the performance of _ by partitioning into three types of contributions: (1) good elements, (2) bad elements that are either children of good elements or in the first layer, and (3) bad elements that are children of bad elements.
We first claim that the expected number of elements of type (1) that _ selects is at most 2.
To show this we can first observe that once a bad element is chosen, then good elements cannot be chosen anymore. After a bad element is chosen, the only elements that can be chosen are the offspring of this element (which are also bad by definition) and the ancestors of the element that haven't arrive yet (which all must be bad).
We next observe, that the algorithm can only select good elements in a strictly increasing order (in the length of their corresponding strings).
Moreover, for every element e_s from layer j for j∈ [k-2], that is a child of a good element or is of layer 1, given the information that the algorithm has up to the arrival of element e_s, the probability of being good is exactly 1/2.
This is since being good, by definition requires that (1) e_s is not a child of a bad element (which the algorithm knows upon the arrival of e_s), and (2) s_|s|∈ r_|s|-1, which happens with probability 1/2.
Thus, each time the algorithm tries to select a good element from the first k-2 layers, it can no longer select additional good elements with probability 1/2. If the algorithm reaches layer k-1 without selecting a bad element, the algorithm can select at most two more good elements.
Therefore if the algorithm tries to “gamble" and select ℓ good elements from the first k-2 layers, it selects in expectation at most (ℓ+2)/2^ℓ + ∑_i=1^ℓ (i-1)/2^i ≤ 2 good elements from all k layers[This argument also holds for randomized ℓ.].
Second, _ can choose at most one element of type (2). This is since in every feasible set, there is at most one such element. (For every feasible set, only the element that corresponds to the shortest string among the bad ones can be of this type.)
Last, the expected utility of _ from elements of type (3) is at most 2.
This is true since we can observe that once a bad element e that is a child of a bad element is selected, the algorithm can only select elements that are ancestors of e.
Since there are less than k such elements, and each can contribute a utility of at most 1/k in expectation, the expected utility of elements of this type is less than 2. (Element e contributes 1, and its ancestors contributes less than k·1/k.) This concludes the proof.
The theorem follows by combining Claims <ref> and <ref>, with Equation (<ref>).
§ NON-DOWNWARD CLOSED FEASIBILITY CONSTRAINTS
In this section, we present an upper-bound on the order-competitive ratio of arbitrary (non-downward closed) feasibility constraints. This upper-bound holds even with respect to binary instances. This result is tight since achieving an order-competitive ratio of 1/n can be done trivially, by an algorithm that selects the set of elements with the maximum expected sum of values among all feasible sets. Our result also improves the best-known upper bound of the competitive-ratio shown in <cit.> of O(1/n) to 1/n+o(1/n).
For every constant ξ> 1, there exists n_0 such that for every n≥ n_0, there exists an instance ℐ=(ℰ,𝒟,ℱ) with n elements (i.e., n=|ℰ|), in which for every order-unaware algorithm (deterministic or randomized) ALG_ℐ, it holds that ρ(ℐ, ALG_ℐ) ≤ξ/n.
We prove that theorem by presenting a construction with n elements, for which no order-unaware algorithm can have an order-competitive ratio of more than 1/n+o(1/n).
We assume for simplicity that n= 2^2x for some integer x.
Consider an instance ℐ=(ℰ,𝒟,ℱ) in which ℰ = A ∪ B ∪ C, where A={a_1,…,a_k_1}, B={b_1,…,b_k_2}, and C={c_1,…,c_k_3}, where k_1=4x, k_2=n-√(n)-4x, and k_3=√(n), which sum up to n.
The values of all elements in A ∪ C are deterministically 0.
The values of all elements in B are 1 with probability 1/n^2 and 0 otherwise.
Let U_1,…,U_2^k_1 be subsets of C which satisfy the conditions from the following claim:
There exists n'_0 such that for every n≥ n'_0, there exist sets U_1,…,U_2^k_1 such that:
* For all i∈ [2^k_1], log(n) ≤ |U_i| ≤ 21·log(n).
* For each j∈ [k_3], it holds that |{i | c_j ∈ U_i}| ≤22· 2^k_1·log(n)/k_3.
* For all i_1,i_2 ∈ [2^k_1] such that i_1 ≠ i_2, it holds that |U_i_1∩ U_i_2 | ≤ 10.
We prove existence by the probabilistic method. For simplicity of presentation, let α=10.
Consider a series of random variables X_ij that indicate whether c_j∈ U_i, which are drawn independently according to Ber((α+1)·log(n)/k_3).
Note that for the parameter α and for n≥ 2^16 this probability is guaranteed to be in [0,1]. Let E^1_i be the event that |U_i| < log(n) or |U_i| > (2α+1)·log(n) (which is equivalent to |∑_j X_ij- (α+1)·log(n)|>α·log(n)), let E^2_j be the event that |{i | c_j ∈ U_i}| > 2(α+1)· 2^k_1·log(n)/k_3 (which is equivalent to ∑_i X_ij > 2·(α+1)· 2^k_1·log(n)/k_3), let E^3_i_1,i_2 be the event that |U_i_1∩ U_i_2| > α (which is equivalent to ∑_j X_i_1 j· X_i_2 j >α), and let E be the event that one of the formerly defined events occurs, i.e., E=(⋁_i E^1_i ∨⋁_j E^2_j ∨⋁_i_1≠ i_2 E^3_i_1,i_2).
For every i∈[2^k_1], it holds that
Pr[E^1_i] = Pr[|Bin(k_3,(α+1)·log(n)/k_3) - (α+1)·log(n)| > α·log(n)] ≤ 2/n^3,
where the inequality is by Chernoff bound. For every j∈ [k_3] it holds that
Pr[E^2_j] = Pr[Bin(2^k_1,(α+1)·log(n)/k_3) > 2(α+1)· 2^k_1·log(n)/k_3] ≤ 1/n^3,
where the inequality is by Chernoff Bound. For all i_1,i_2 ∈ [2^k_1], such that i_1≠ i_2 it holds that
Pr[E^3_i_1,i_2] = Pr[Bin(k_3,(α+1)^2·log^2(n)/k_3^2) > α]
≤ (k_3 choose α+1)·((α+1)^2·log^2(n)/k_3^2)^{α+1} ≤ 1/n^5,
where the first inequality holds by the union bound, and the second inequality holds for large enough n (for n>2^1000).
Thus, by the union bound, the probability that one of the events occurs is
Pr[E] ≤ 2^k_1·(2/n^3) + k_3·(1/n^3) + 2^{2k_1}·(1/n^5) < 1.
Thus, there exist realizations of all X_ij in which event E does not occur, which implies the claim.
Next, we name the subsets of A as V_1,…,V_2^k_1, and we define 2^k_1 corresponding functions. For each i∈ [2^k_1], we define an arbitrary injective function f_i : [k_2]→ 2^U_i (such a function exists since |U_i| ≥log(n)≥log(k_2)).
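One concrete choice of such an injection (the proof only needs existence) is binary encoding: map j to the subset of U_i selected by the bits of j, which is injective whenever 2^{|U_i|} ≥ k_2. A small sketch:

def make_injection(U_i):
    # Map an index j to a subset of U_i via the binary representation of j.
    order = sorted(U_i)
    def f(j):
        return {order[b] for b in range(len(order)) if (j >> b) & 1}
    return f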
We now define the feasibility constraint
ℱ ≜ {S | ∃ i,j: S ∩ A = V_i ∧ S∩ B = {b_j} ∧ S ∩ C = f_i(j)}.
For every i, let π_i be the arrival order in which the elements arrive in four phases (within each phase, the order can be arbitrary but during the first phase the order should be consistent for all π_i). Phase 1 is composed of all elements of A. Phase 2 is composed of all elements of C ∖ U_i.
Phase 3 is composed of all elements in B, and Phase 4 is composed of all elements in U_i, i.e.,
π_i = (A, C ∖ U_i, B, U_i).
We next bound from below, for every π_i, the performance of OPT_ℐ,π_i on π_i.
For every π_i, it holds that OPT_ℐ,π_i(π_i) ≥ 1/n - o(1/n).
Consider the order-aware algorithm, that selects in Phase 1 the subset V_i of A. Then in Phase 2 it selects nothing. In Phase 3 it selects the first element b_j of B that its value is 1 (or the last element of Phase 3, if all of them have values of 0). In Phase 4, the algorithm selects the subset f_i(j) of C. This is always a feasible set.
The value of this set is 1, if one of the elements in B has a non-zero value.
The claim then holds since this happens with probability 1-(1-1/n^2)^k_2 =1/n - o(1/n).
In order to bound the performance of a randomized algorithm _, it is sufficient by Yao's principle to define a distribution D_π over arrival orders, and bound the performance of the best deterministic algorithm on the randomized distribution.
Consider the distribution D_π, where the order π∼ D_π is π_i with probability 1/2^k_1 for every i∈ [2^k_1].
We next bound from above the performance of any deterministic algorithm ALG_ℐ.
For every deterministic algorithm ALG_ℐ, it holds that E_π∼D_π[ALG_ℐ(π)] ≤ 1/n^2 + o(1/n^2).
Let _ be an arbitrary deterministic algorithm, then since in Phase 1, the order is constant, _ selects deterministically a set V_i'⊆ A. We next analyze the performance of _ depending on the realized arrival order π_i. Let G_i' = {π_i | U_i ∩ U_i'≠∅∧π_i ≠π_i'}.
For every π_i ∈ G_i', by Claim <ref> it holds that |U_i∩ U_i'| ≤ 10, then by the end of Phase 2, _ selected a subset of U_i'∖ U_i. Since there are at most 10 elements in U_i'∩ U_i that didn't arrive by the end of Phase 2, there are at most
2^10 elements in B that _ can select that lead to a subset of a feasible set. Thus, it holds that _ (π_i)
≤2^10/n^2.
For the order of arrival π_i', it holds that _ (π_i') ≤ 1-(1-1/n^2)^k_2≤1/n.
Otherwise (for every π_i ≠π_i' such that π_i ∉ G_i'), it holds that U_i∩ U_i' =∅, and therefore by the end of Phase 2, there is only one element that _ can select which leads to a subset of a feasible set. Thus, _ (π_i) = 1/n^2.
The set G_i' is at most of size ∑_c_j ∈ U_i' |{ i | c_j ∈ U_i} | ≤ 21 ·log(n) ·22 · 2^k_1·log(n)/k_3 = o(n^2), where the inequality is by Claim <ref>.
Thus, it holds that E_π∼D_π[ALG_ℐ(π)] ≤ (1/2^k_1)·(1/n) + (|G_i'|/2^k_1)·(2^10/n^2) + ((2^k_1-1-|G_i'|)/2^k_1)·(1/n^2) = 1/n^2 + o(1/n^2).
Thus, by combining Claims <ref>, and <ref> with Yao's principle, we get that for every (deterministic or randomized) algorithm _, there exists an arrival order π_i such that
_(π_i)/_,π_i(π_i)≤1/n + o(1/n),
which concludes the proof.
§ K-UNIFORM MATROID
In this section we show that for the k-uniform feasibility constraint there is an instance in which the order-competitive ratio is 1 - Θ(1/√(k)), which approaches 1 at the same rate (up to a constant) as the competitive-ratio (with respect to the prophet benchmark) for this feasibility constraint <cit.>.
There is a constant c>0 such that for every k, there is an instance =(,, = {S ⊆| |S| ≤ k }) in which
for every order-unaware algorithm _ it holds that
ρ(,_) ≤ 1- c/√(k).
Consider an instance =(,,) in which = {a_1,…,a_k,b_1,…,b_k,c_1…,c_2k}.
The value of each element a_i is deterministically 7/4, the value of each element b_i is deterministically 1, and the value of each element c_i is either 0 or 2, each with probability one half.
Consider the following two orders:
* π_1 = (a_1,…,a_k,b_1,…,b_k,c_1,…,c_2k)
* π_2 = (a_1,…,a_k,c_1,…,c_2k,b_1,…,b_k)
We first introduce some notation in order to establish an upper bound on the order-competitive ratio of this instance.
Let X be the random variable of the number of non-zero values of elements c_1,…,c_2k, and let Z = k-X/√(k/2) (thus X=k-√(k/2)· Z).
We now lower bound _,π_1(π_1) and _,π_2(π_2).
It holds that
_,π_1(π_1) ≥ 2k - 0.291 √(k).
Consider an algorithm that selects d ·√(k/2) elements among {a_1,…,a_k} for d=1.152, 0 elements among {b_1,…,b_k}, and all elements in {c_1,…,c_2k} with a value of 2, as long as capacity allows.
It holds that
_,π_1(π_1) ≥ (π_1)
= [d ·√(k/2)·7/4 + min(k-d ·√(k/2),X)· 2 ]
= [d ·√(k/2)·7/4 + min(k-d ·√(k/2),k-√(k/2)· Z )· 2]
= [2k-√(2k)·( max(d , Z )- d ·7/8)]
= 2k- [Z < d] ·√(2k)·d/8 - Pr[Z≥ d ] ·√(2k)·[ Z - d ·7/8| Z ≥ d ]
≳ 2k - 0.291 √(k),
where the approximation holds since for large enough k, by the central limit theorem, Z is approximately distributed like N(0,1), and thus the result holds by the choice of the value of d.
It holds that
_,π_2(π_2) ≥ 2k - 0.224 √(k).
Consider an algorithm that selects d ·√(k/2) elements among {a_1,…,a_k} for d=0.674, all elements in {c_1,…,c_2k} with a value of 2, as long as capacity allows, and all elements among {b_1,…,b_k}, as long as capacity allows.
It holds that
_,π_2(π_2) ≥ (π_2)
= [d ·√(k/2)·7/4 + min(k-d ·√(k/2),X)· 2 + k - d ·√(k/2) - min(k-d ·√(k/2),X) ]
= [d ·√(k/2)·3/4 + min(k-d ·√(k/2),k-√(k/2)· Z ) + k]
= [2k-√(k/2)·( max(d , Z )- d ·3/4)]
= 2k- [Z < d] ·√(k/2)·d/4 - Pr[Z≥ d ] ·√(k/2)·[ Z - d ·3/4| Z ≥ d ]
≳ 2k - 0.224 √(k),
where the approximation holds since for large enough k, by the central limit theorem, Z is approximately distributed like N(0,1), and thus the result holds by the choice of the value of d.
Let _ be an arbitrary order-unaware (possibly randomized) algorithm.
Let Y be the random variable that indicates the number of elements _ selects among {a_1,…,a_k} divided by √(k/2).
Note that since the elements a_1,…,a_k arrive first, Y is independent of X.
Now for d=0.913 and p=[Y > d ] consider two cases: (1) p ≥1/2, and (2) p < 1/2.
In case (1), we bound the performance of _ in the case of arrival order π_2 (see Claim <ref>).
In case (2), we bound the performance of _ in the case of arrival order π_1 (see Claim <ref>).
If p ≥1/2 then
_ (π_2) ≤1/2·_,π_2(π_2) + k - 0.115 √(k).
It holds that
_ (π_2) = [ _ (π_2) | Y > d ] · p + [ _ (π_2) | Y ≤ d ] ·(1 - p )
≤ 1/2·_,π_2(π_2) + 1/2·[ _ (π_2) | Y > d ],
where the inequality holds since the algorithm, conditioned on any value of Y, cannot obtain more than _,π_2(π_2), and since p≥1/2.
We also have that
[ _ (π_2) | Y > d ]
≤ k + [ √(k/2)· Y ·3/4 + min(k-√(k/2)· Y ,X)| Y > d ]
= 2k-√(k/2)·[ max( Y , Z )- Y ·3/4| Y > d ]
≤ 2k-√(k/2)·[ max( d , Z )- d ·3/4]
= 2k - [Z < d] ·√(k/2)·d/4 - Pr[Z≥ d ] ·√(k/2)·[ Z - d ·3/4| Z ≥ d ]
≲ 2k - 0.231 √(k),
where the first inequality is since the value obtained by the algorithm can be bounded in the following way: first the algorithm receives 1 for each selected box, it then receives an additional term of 3/4 for each selected box in {a_1,…,a_k}, and an additional term of 1 for each selected box in {c_1,…,c_2k} with a value of 2.
The first equality holds by rearranging and replacing X by k-√(k/2)· Z. The second inequality holds since the function f(x)= [ max( x , Z )- x ·3/4] is an increasing function in x for x>d. The last inequality holds for large enough k by the central limit theorem.
Combining Equations (<ref>) and (<ref>) concludes the proof.
If p < 1/2 then
_ (π_1) ≤1/2·_,π_1(π_1) + k - 0.150 √(k).
It holds that
_ (π_1) = [ _ (π_1) | Y > d ] · p + [ _ (π_1) | Y ≤ d ] ·(1 - p )
≤ 1/2·_,π_1(π_1) + 1/2·[ _ (π_1) | Y ≤ d ],
where the inequality holds since the algorithm, conditioned on any value of Y, cannot obtain more than _,π_1(π_1), and since p < 1/2.
We are now going to bound [ _ (π_1) | Y ≤ d ]. To do so, we observe that the optimal online algorithm that has already selected √(k/2)· Y elements among {a_1,…,a_k} and knows that the arrival order is π_1 is a deterministic algorithm. Moreover, the optimal algorithm never selects elements among {b_1,…,b_k}. This is because selecting such an element increases the algorithm's value by 1 when Z > Y, but decreases it by 1 when Z ≤ Y; the claim then follows from the fact that, for every non-negative Y, the probability that Z > Y is at most 1/2.
Therefore,
[ _ (π_1) | Y ≤ d ]
≤ [ √(k/2)· Y ·7/4 + min(k-√(k/2)· Y ,X) · 2 | Y ≤ d ]
= 2k-√(2k)·[ max( Y, Z )- Y ·7/8| Y ≤ d ]
≤ 2k-√(2k)·[ max( d , Z )- d ·7/8]
= 2k - [Z < d] ·√(2k)·d/8 - Pr[Z≥ d ] ·√(2k)·[ Z - d ·7/8| Z ≥ d ]
≲ 2k - 0.301 √(k),
where the first equality holds by rearranging and replacing X by k-√(k/2)· Z, the second inequality holds since the function f(x)= [ max( x , Z )- x ·7/8] is a decreasing function in x for x ≤ d, and the last inequality holds for large enough k by the central limit theorem.
Combining Equations (<ref>) and (<ref>) concludes the proof.
The proof then follows by considering the two mentioned cases:
If p ≥1/2 then when considering π_2, we get that
_ (π_2)/_,π_2(π_2) ≤ 1/2·_,π_2(π_2) + k - 0.115 √(k)/_,π_2(π_2)
= 1/2 + k - 0.115 √(k)/_,π_2(π_2)
≤ 1/2 + k - 0.115 √(k)/2k - 0.224 √(k)≤ 1- 0.001/√(k),
where the first inequality is by Claim <ref>, and the second inequality is by Claim <ref>.
If p< 1/2 then when considering π_1, we get that
_ (π_1)/_,π_1(π_1) ≤ 1/2·_,π_1(π_1) + k - 0.150 √(k)/_,π_1(π_1)
= 1/2 + k - 0.150 √(k)/_,π_1(π_1)
≤ 1/2 + k - 0.150 √(k)/2k - 0.291 √(k)≤ 1- 0.002/√(k),
where the first inequality is by Claim <ref>, and the second inequality is by Claim <ref>.
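The numerical constants 0.291, 0.224, 0.231, and 0.301 appearing in the proof above all arise from expressions of the form scale·([max(d, Z)] - α· d) with Z approximately 𝒩(0,1). Since [max(d, Z)] = dΦ(d) + φ(d) for a standard normal Z, the constants can be reproduced with a short Python computation (a sanity check of the arithmetic, not part of the argument):

from math import sqrt
from scipy.stats import norm

def gap(d, alpha, scale):
    # E[max(d, Z)] = d * Phi(d) + phi(d) for Z ~ N(0, 1)
    e_max = d * norm.cdf(d) + norm.pdf(d)
    return scale * (e_max - alpha * d)

print(gap(1.152, 7 / 8, sqrt(2)))      # ~0.291 (lower bound for pi_1, d = 1.152)
print(gap(0.674, 3 / 4, 1 / sqrt(2)))  # ~0.224 (lower bound for pi_2, d = 0.674)
print(gap(0.913, 3 / 4, 1 / sqrt(2)))  # ~0.231 (upper bound under pi_2, d = 0.913)
print(gap(0.913, 7 / 8, sqrt(2)))      # ~0.301 (upper bound under pi_1, d = 0.913)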
§ OPEN PROBLEMS
Our goal in this paper was to ask whether, with respect to the new benchmark of the order-competitive ratio, it is possible to achieve better asymptotic results than with respect to the traditional competitive-ratio.
One natural open question is whether, in settings where the best competitive-ratio is half, it is possible to achieve an order-competitive ratio better than half. Ezra et al. <cit.> showed that this is possible for the single-choice prophet inequality, but for many other feasibility constraints (e.g., matching, matroids, knapsack, etc.), this is still an open question.
Another open question is what the best order-competitive ratio or competitive-ratio is for the family of downward-closed feasibility constraints, and whether the two coincide. The best known lower bound on the competitive-ratio (and also on the order-competitive ratio) is Ω(1/log^2(n)), due to Rubinstein <cit.>.
§ ACKNOWLEDGMENTS
This project is supported by the ERC Advanced Grant 788893 AMDROMA, EC H2020RIA project “SoBigData++” (871042), PNRR MUR project PE0000013-FAIR, and PNRR MUR project IR0000013-SoBigData.it.
|
http://arxiv.org/abs/2307.01150v1
|
20230703164236
|
Reliever: Relieving the Burden of Costly Model Fits for Changepoint Detection
|
[
"Chengde Qian",
"Guanghui Wang",
"Changliang Zou"
] |
stat.ME
|
[
"stat.ME",
"math.ST",
"stat.TH"
] |
Reliever: Relieving the Burden of Costly Model Fits for Changepoint Detection
Chengde Qian^a, Guanghui Wang^b[Corresponding Author: [email protected]] and Changliang Zou^a
^aSchool of Statistics and Data Science, Nankai University
^bSchool of Statistics, East China Normal University
July 3, 2023
We propose a general methodology Reliever for fast and reliable changepoint detection when the model fitting is costly. Instead of fitting a sequence of models for each potential search interval, Reliever employs a substantially reduced number of proxy/relief models that are trained on a predetermined set of intervals. This approach can be seamlessly integrated with state-of-the-art changepoint search algorithms. In the context of high-dimensional regression models with changepoints, we establish that Reliever, when combined with an optimal search scheme, achieves estimators for both the changepoints and corresponding regression coefficients that attain optimal rates of convergence, up to a logarithmic factor. Through extensive numerical studies, we showcase the ability of Reliever to rapidly and accurately detect changes across a diverse range of parametric and nonparametric changepoint models.
§ INTRODUCTION
Changepoint detection refers to the process of identifying changes in statistical properties, such as mean, variance, slope, or distribution, within ordered observations. This technique has gained increasing attention in a broad range of applications including time series analysis, signal processing, finance, neuroscience, and environmental monitoring.
To identify the number and locations of changepoints, a common approach is to conduct a grid search to find the optimal partition that minimizes (or maximizes) a specific criterion. The criterion for each potential partition is typically composed of a sum of losses (or gains, respectively) evaluated for the corresponding segments, along with a penalty term that encourages parsimonious partitions. Grid search algorithms can be broadly classified into two categories: optimal schemes based on dynamic programming <cit.>, which are capable of finding the global minimum, and greedy strategies based on binary segmentation <cit.> or moving windows <cit.>, which iteratively refine the search space to approximate the minimum. Both types of algorithms require evaluating a loss function for a sequence of potential search intervals, denoted as , which represents a set of intervals determined sequentially according to specific algorithms. Table <ref> provides an overview of the number of loss function evaluations required by various grid search algorithms, highlighting their relative efficiency and scalability in terms of the sample size n. These algorithms include the segment neighborhood <cit.>, optimal partitioning <cit.>, pruned exact linear time <cit.>, wild binary segmentation <cit.>, and seeded binary segmentation <cit.>. For an extensive review of different grid search algorithms, please refer to <cit.>.
The evaluation or calculation of the loss function within a potential search interval I∈, denoted as Ł(I;_I), involves fitting a model _I within that interval I. This process is often the primary contributor to computational time, especially for complex changepoint models. Moreover, obtaining model fits along the search path, i.e., {_I}_I∈, usually dominates the computation of {Ł(I;_I)}_I∈. For instance, in high-dimensional linear models with changepoints, utilizing a LASSO-based model fitting procedure <cit.> for a search interval of length n would require O(np) operations per iteration using coordinate descent. This computational cost becomes significant when the number of variables p is large. If the tuning parameter is selected via cross-validation, a single fit becomes even more computationally intensive. Additionally, updating neighboring fits by adding or deleting a few observations is not straightforward in complex models, unlike classical mean change models that utilize the sample mean <cit.>. While problem-specific strategies may exist to expedite the calculations, there is a lack of systematic updating approaches for complex models <cit.>. Consequently, the total computational cost of model fits is multiplied; see Table <ref>.
§.§ Our Contribution
We introduce Reliever, a highly versatile framework designed to speed up changepoint detection while maintaining reliable accuracy in scenarios where model fits are computationally expensive. Our approach can seamlessly integrate with a wide range of changepoint detection methods that involve evaluating a loss function over a sequence of potential search intervals. By leveraging Reliever, we effectively address the computational complexities associated with changepoint detection across diverse models characterized by high dimensionality <cit.>, graphical structures <cit.>, vector autoregressive dynamics <cit.>, network topologies <cit.>, nonparametric frameworks <cit.>, and missing data mechanisms <cit.>. In particular, in the context of high-dimensional linear models with changepoints, which has been a topic of active research interest, we demonstrate that the Reliever method, when coupled with the OP algorithm <cit.>, produces rate-optimal estimators (up to a certain log factor) for both the changepoints and corresponding regression coefficients.
Our approach is simple yet effective. We begin by pre-specifying a set of deterministic intervals, say , with a cardinality of O(n). When evaluating the loss Ł(I;_I) for a potential search interval I∈, a proxy or relief model _R, fitted for a relief interval R∈, arrives to replace the model _I. By employing relief models, the computational complexity of model fitting is reduced to O(na_n) for any grid search algorithm, which represents a significant reduction compared to the original scheme that goes over all search intervals. It is important to note that the actual number of relief intervals visited during the search depends on the specific algorithm, allowing for further complexity reduction. The relief intervals are constructed in a multiscale manner to ensure accurate tracking of the search path and successful recovery of the changepoints. Specifically, for any search interval I∈, there exists a relief interval R∈ such that R⊂ I and both intervals have similar lengths. Through our analysis, we demonstrate that the loss values {Ł(I;_I)} and {Ł(I;_R)} behave similarly, thus yielding satisfactory changepoint estimators.
To provide a glimpse into the benefits of employing Reliever, we examine a high-dimensional linear model with multiple changepoints, as described in Section <ref>. The example comprises n=600 observations and p=100 variables. In Figure <ref>(a), we present the average computation time required for model fits (including loss computation) with and without employing Reliever, as well as the average time for the pure grid search using each algorithm. The results clearly demonstrate that model fitting is the primary contributor to computational time, and the use of Reliever significantly alleviates the computational burden. Figure <ref>(b) displays the average number of model fits required along the search path for each algorithm, while Figure <ref>(c) presents a boxplot of detection errors measured in terms of the Hausdorff distance (see Section <ref> for details). The results illustrate that Reliever achieves comparable detection accuracy while considerably reducing the number of model fits required.
§.§ Related Works
Most greedy grid search algorithms aim to alleviate the computational burden in changepoint detection by narrowing down the search space, which reduces the cardinality of . By doing so, these algorithms indirectly reduce the number of model fits. In contrast, our approach directly addresses the reduction of model fits. This strategy is particularly beneficial when the computational cost of fitting a model is high, as it often dominates the overall loss evaluations.
Our use of deterministic intervals is inspired by the concept of seeded intervals proposed by <cit.>. In their work, the authors suggested replacing random intervals in the WBS algorithm with seeded intervals to achieve near-linear scaling of the number of loss evaluations with the sample size. However, the number of model fits could still be large, as a complete search for the best split within each seeded interval is required. <cit.> further proposed an optimistic search strategy that adaptively determines the best split within an interval instead of performing a complete search and paired this approach with the SeedBS algorithm for detecting multiple changepoints. In contrast, our approach utilizes deterministic intervals (i.e., relief intervals) to replace every search interval for model fitting (while the loss is still evaluated for that search interval). The reduction in the number of such intervals directly leads to computational speed-up. It is important to note that the design of relief intervals differs significantly from that of seeded intervals due to disparate objectives. For a more detailed discussion, please refer to Section <ref>.
Our approach of reducing heavy model fits is also related to the two-step procedures that utilize a preliminary set of changepoint candidates. In the context of high-dimensional linear models with a single changepoint, <cit.> proposed a method that fits two regression models before and after an initial changepoint estimator and then searches for the best split to minimize the training error. The two fitted models are used for data before and after a candidate split. To achieve near-optimal convergence rates of the changepoint update, the initial changepoint estimator needs to be consistent.
For multiple changepoint detection, <cit.> extended this approach by initializing with multiple changepoint candidates and developing a simulated annealing algorithm to allocate available model fits. However, this method assumes that all true changepoints are located near some of the initial candidates.
Similarly, in the context of univariate mean change models, <cit.> proposed a method that uses a sparse subsample to obtain pilot changepoint estimators and then updates these estimators by sampling densely in neighborhoods around them. The pilot estimators need to be consistent in both the number and their locations of changepoints
to obtain optimal changepoint estimators. Distinct from those works, our new proposal does not require consistent initial estimators and has general applicability, serving as a building block for existing changepoint detection algorithms.
§.§ Notations
The L_q norm of a vector ∈^p is denoted as _q = (∑_j=1^p z_j^q)^1/q. The sub-Gaussian norm of a sub-Gaussian random variable X is defined as X_Ψ_2=inf{t>0: {exp(X^2/t^2)}≤ 2}. For X∈ℝ^p, we define X_Ψ_2=sup_ v∈𝕊^p-1 v^⊤ X_Ψ_2, where 𝕊^p-1 represents the unit sphere. Let
𝒯_K(δ_m)={(τ_1,…,τ_K):0≡τ_0<τ_1<⋯<τ_K<τ_K+1≡ n, τ_k-τ_k-1≥δ_m, k=1,…,K+1}
be a set of K ordered integers with a minimal spacing δ_m>0.
§ METHODOLOGY
§.§ The Changepoint Model and Grid Search Algorithms
Suppose we observe _i_i=1^n from a multiple changepoint model
_i ∼_k^∗, τ_k-1^∗ < i ≤τ_k^∗, k=1,…,K^∗+1; i=1,…,n,
where K^∗ and {τ_k^∗} denote the number and locations of changepoints, respectively, with the convention that τ_0^∗ = 0 and τ_K^∗+1^∗ = n. The notations {_k^∗} refer to the underlying models, where _k-1^∗_k^∗. These models can represent either generally unknown distributions of {_i} or specific parametric models such that _k^∗ = __k^∗ for a known model and a sequence of unknown parameters of interest {_k^∗} satisfying _k-1^∗_k^∗. A concrete example of such a model is the linear model with structural breaks <cit.>, where we have paired observations _i=(y_i,_i)_i=1^n with responses y_i ∈ and covariates _i ∈^p, admitting y_i = _i^⊤_k^∗ + ϵ_i, where {_k^∗} and {ϵ_i} are the regression coefficients and random noises, respectively.
We introduce a model fitting procedure that yields a fitted model _I (or _θ_I in the case of parametric scenarios) based on the data {_i:i∈ I} within a specific interval I⊂(0,n], e.g., the LASSO in situations where the linear model involves a large number of covariates. Following the model fitting step, we evaluate the quality of the fit by the loss function Ł(I;_I) (or Ł(I;θ_I) in the parametric case), for the given interval I. Typically, the loss Ł is defined as the negative log-likelihood or least-squares loss for parametric models. A grid search algorithm is then employed to minimize a specific criterion over all possible segmented data sequences. This criterion typically comprises the sum of losses evaluated for each segment, along with a penalty that accounts for the complexity of the segmentation. Specifically, consider a set of candidate changepoints (τ_1,…,τ_K)∈𝒯_K(δ_m) which partitions the data into K+1 segments, and the criterion is generally formed as
∑_k=1^K+1Ł((τ_k-1,τ_k];_k) + γ K,
where γ≥ 0 controls the level of penalization to prevent overestimation. Optimal-kind algorithms, such as the SN, OP, or PELT algorithm mentioned in Section <ref>, aim to find the exact minimizer over the entire search space 𝒯_K(δ_m). This involves evaluating a sequence of losses (along with fitting the corresponding models) for all O(n^2) intervals I⊂(0,n] satisfying I≥δ_m, which are explored sequentially using a dynamic programming scheme. Although the PELT algorithm utilizes a pruning strategy to skip certain intervals and reduce its complexity to O(n), this does not always apply (see Eq. (4) in <cit.>). In contrast, greedy-kind algorithms, such as binary segmentation (BS), WBS, narrowest-over-threshold, or SeedBS algorithm, only consider a subset of these intervals I in a sequential and greedy manner, aiming to reach a local minimizer. To illustrate, consider the BS algorithm. This algorithm begins by solving (<ref>) with K=1, which involves approximately O(n) intervals. The resulting changepoint divides the data sequence into two segments. Next, the algorithm applies the same procedure within each segment to identify new changepoints. This iterative process continues until a segment contains fewer observations than δ_m or a stopping rule is triggered. Overall, the BS algorithm requires evaluating approximately O(nlog n) intervals. The intervals that are sequentially considered in the search path, whether using a global or greedy grid search algorithm, are referred to as search intervals. We can represent a grid search algorithm by := ({Ł(I;_I)}_I∈), where denotes the set of all search intervals.
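To make the role of the loss evaluations concrete, the following is a minimal Python sketch of the OP recursion for the criterion above; the function loss(s, e), which fits a model on the segment (s, e] and returns its loss, is an assumed interface, and no pruning (as in PELT) is attempted:

def optimal_partitioning(n, loss, gamma, delta_m):
    """Minimize the sum of segment losses + gamma * K over all segmentations of
    (0, n] whose segments (s, e] all have length at least delta_m."""
    INF = float("inf")
    F = [INF] * (n + 1)          # F[t]: best objective value for the prefix (0, t]
    F[0] = -gamma                # cancels the penalty charged to the first segment
    back = [0] * (n + 1)
    for t in range(delta_m, n + 1):
        for s in range(0, t - delta_m + 1):
            if F[s] == INF:      # the prefix (0, s] cannot be segmented feasibly
                continue
            cand = F[s] + loss(s, t) + gamma
            if cand < F[t]:
                F[t], back[t] = cand, s
    cps, t = [], n               # recover the changepoints by backtracking
    while t > 0:
        t = back[t]
        if t > 0:
            cps.append(t)
    return sorted(cps), F[n]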
§.§ Relief Intervals
Obtaining all model fits {_I}_I∈ along the search path can be computationally demanding, particularly when dealing with expensive-to-fit models. Our approach is straightforward yet versatile, and it can be used in conjunction with any grid search algorithm = ({Ł(I;_I)}_I∈). We begin by constructing a set of deterministic intervals . During the search process, for each search interval I∈, we employ a proxy or relief model _R fitted using data from an interval R∈ to replace _I when evaluating the loss Ł(I;_I). The intervals R∈ are referred to as relief intervals to distinguish them from search intervals I∈. It is possible for multiple search intervals to correspond to a single relief interval, and not all relief intervals may be visited during the search. The key to this construction lies in satisfying two properties: first, significantly reducing the number of intervals for which a sequence of models needs to be fitted compared to considering all search intervals, and second, ensuring that the corresponding losses exhibit similar behavior to the original losses, allowing for the successful recovery of consistent changepoint estimators.
Let δ_m > 0 represent the minimum length required between two successive candidate changepoints in a grid search algorithm. Let 0 < w ≤ 1 be the wriggle parameter and b > 1 be the growth parameter.
For 0 ≤ k ≤⌊log_b{(1 + w) n/δ_m}⌋, define the kth layer as the collection of n_k intervals of length ℓ_k that are evenly shifted by s_k as _k = (qs_k, qs_k+ℓ_k] + a_k: 0 ≤ q ≤ n_k and their collection = ⋃_k=0^⌊log_b{(1 + w) n/δ_m}⌋_k as the set of relief intervals, where ℓ_k = b^k δ_m/(1+w), s_k = w ℓ_k, n_k= ⌊(n - ℓ_k)/s_k⌋, and a_k = n/2 - (ℓ_k + n_k s_k)/2 is an adjustment factor to center the intervals in _k around n/2.
In Figure <ref>, we provide an illustration of the construction of relief intervals with n = 200, δ_m=50, w = 0.25, and b = 1.25. The rationale behind this construction is to ensure that for any search interval I∈ with I≥δ_m, we can always find a relief interval R∈ such that R⊂ I and R/I is maximized. We define the coverage rate as r = min_I∈: I≥δ_mmax_R∈; R⊂ IR/I.
(i) The number of relief intervals is at most c_w,b n/δ_m, and r≥{(1+w)b}^-1, where c_w,b={(1+w)b}/{w(b-1)}.
(ii) If we set δ_m = Clog n for some constant C>0 and w = b - 1 = δ_m^-1/2, then the number of relief intervals is at most n{1 + (C log n)^-1/2}^2 = O(n) and r≥{1 + (C log n)^-1/2}^-2≈ 1 - 2(C log n)^-1/2.
Proposition <ref> demonstrates that, by selecting appropriate wriggle and growth parameters along with the minimal search distance, the number of relief intervals approaches linearity in the sample size n while achieving a nearly perfect coverage rate. In practical applications, we can set a coverage parameter r∈(0, 1) and let 1+w=b=r^-1/2. The r acts as a tuning parameter that balances computational complexity and estimation accuracy. Table <ref> displays the number of search intervals obtained from a complete search over all intervals with a minimum length of δ_m=30 for n=1200, as well as the number of relief intervals corresponding to different coverage parameters r. In practice, we recommend selecting r∈[0.8,0.9], as it significantly reduces computational time while producing satisfactory performances compared to the original implementation.
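A minimal Python sketch of this construction and of the containment lookup it is designed for is given below; interval endpoints are kept as reals here, whereas in practice they would be rounded to integer indices (an implementation choice not fixed by the definition):

import math

def relief_intervals(n, delta_m, r):
    """Multiscale relief intervals of Definition 1 with 1 + w = b = r**(-1/2).
    Each half-open interval (lo, hi] is returned as a (lo, hi) tuple."""
    b = r ** (-0.5)
    w = b - 1.0
    n_layers = int(math.floor(math.log((1 + w) * n / delta_m, b)))
    intervals = set()
    for k in range(n_layers + 1):
        ell = b ** k * delta_m / (1 + w)       # length of the layer-k intervals
        s = w * ell                            # shift between neighbouring intervals
        n_k = int(math.floor((n - ell) / s))   # number of shifts in layer k
        a = n / 2 - (ell + n_k * s) / 2        # centering adjustment a_k
        for q in range(n_k + 1):
            intervals.add((q * s + a, q * s + ell + a))
    return sorted(intervals)

def best_relief(relief, interval):
    """Longest relief interval contained in the search interval (lo, hi]."""
    lo, hi = interval
    inside = [R for R in relief if R[0] >= lo and R[1] <= hi]
    return max(inside, key=lambda R: R[1] - R[0], default=None)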
The deterministic nature of our relief intervals is inspired by the concept of seeded intervals introduced by <cit.>. They proposed replacing random intervals in the WBS algorithm and its variants with deterministic intervals. Their approach focused on constructing shorter intervals that contain a single changepoint, thereby reducing the occurrence of longer intervals that may contain multiple changepoints. In contrast, our approach is applicable to a wide range of grid search algorithms beyond WBS. We construct deterministic intervals to replace all search intervals that enter the search path, ensuring that each search interval approximately covers a relief interval.
§.§ The Procedure
(a) Require a grid search algorithm =({Ł(I;_I)}) with a minimal search distance δ_m≥ 0 and a model fitting procedure _I for any interval I such that I≥δ_m, and a coverage parameter r ∈ (0, 1);
(b) Create a collection of relief intervals according to Definition <ref> with the wriggle and growth parameters 1+w=b=r^-1/2;
(c) Apply the grid search algorithm with relief models, i.e., =({Ł(I;_R)}) with R = _R ∈, R ⊂ IR.
The Reliever procedure can be utilized in conjunction with both optimal- and greedy-kind grid search algorithms that can be represented as =({Ł(I;_I)}). When employing the Reliever approach, the only difference from the original implementation lies in the use of a relief model _R to evaluate the loss function Ł(I;·). This key characteristic renders Reliever highly versatile. By constructing relief intervals , the number of model fits required in the Reliever procedure can be bounded by O(n), resulting in a significant reduction compared to the original implementation (see Table <ref>).
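A minimal sketch of step (c): any grid search that interacts with the data only through a loss oracle can be run unchanged on the wrapped oracle below. The interfaces fit(s, e) (fit a model on the segment (s, e]) and loss(s, e, model) (evaluate the loss of that model on (s, e]) are assumptions of this sketch, and relief_intervals and best_relief refer to the earlier sketch.

def reliever_loss(fit, loss, relief):
    """Wrap a costly model-fitting routine so that each search interval (s, e]
    is scored with the model fitted on its relief interval."""
    cache = {}                                   # each relief model is fitted at most once
    def wrapped(s, e):
        R = best_relief(relief, (s, e))
        if R is None:                            # interval shorter than any relief interval
            return loss(s, e, fit(s, e))
        if R not in cache:
            cache[R] = fit(int(R[0]), int(R[1]))  # endpoints rounded down to integer indices
        return loss(s, e, cache[R])
    return wrapped

# Example pairing with the OP sketch above:
#   relief = relief_intervals(n, delta_m, r=0.9)
#   cps, obj = optimal_partitioning(n, reliever_loss(fit, loss, relief), gamma, delta_m)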
§ THEORETICAL JUSTIFICATIONS
Despite the applicability of Reliever to various detection algorithms and model settings, establishing a unified theoretical framework for analyzing detection accuracy is challenging without specific assumptions regarding the involved model, fitting algorithm, and grid search algorithm. Here, we first offer an informal justification by examining the variations in loss values resulting from the application of the Reliever technique. Additionally, in Section <ref>, we present rigorous results on changepoint estimation for a concrete example involving high-dimensional linear regression models.
We focus on parametric change detection using loss functions Ł(I;_I)=∑_i ∈ Iℓ(_i, _I), where ℓ(·, ) is a convex function with respect to the parameter ∈Θ⊂^p. We consider the small-p-and-large-n scenario. For the model-fitting module, we utilize the M-estimator, which estimates the parameter as _I = _∈Θ∑_i ∈ Iℓ(_i, ). The corresponding population version is defined as _I^∘ = _∈Θ∑_i ∈ I{ℓ(_i, )}. In the original implementation of a grid search algorithm , the losses Ł(I, _I) = ∑_i ∈ Iℓ(_i, _I) are evaluated. With the Reliever approach, these losses are replaced by Ł(I, _R) = ∑_i ∈ Iℓ(_i, _R), where R ∈ represents a relief interval corresponding to I such that R⊂ I. Theorem <ref> establishes the difference between the losses Ł(I, _I) and Ł(I, _R) uniformly across all intervals I⊂ (0, n].
Given that the conditions outlined in Appendix <ref> are satisfied. With probability at least 1 - n^- C for some constant C>0, the event
0 ≤1/I{Ł(I, _R) - Ł(I, _I)}≤ O(1/I∑_i ∈ I ∖ R∇_ℓ(_i, _R^∘)_2^2 + (1 - r) log n/I + (log n)^2/I^2)
holds uniformly for all intervals R ⊂ I ⊂ (0, n], where ∇_ℓ(·,) denotes the gradient or sub-gradient. Moreover, for the cases where either I = (s, e] contains no changepoint or there is only one changepoint τ∈ I such that min(τ - s, e - τ) = O(log n), this event simplifies to
0 ≤1/I{Ł(I, _R) - Ł(I, _I)}≤ C_1 ((1 - r) log n/I + (log n)^2/I^2),
where C_1 > 0 is a constant.
The conditions in Appendix <ref> bear similarities to those presented in <cit.>, which focused on the asymptotic properties of M-estimators obtained through convex minimization based on independent and identically distributed (i.i.d.) data sequence. These conditions primarily impose requirements on the smoothness and convexity of the loss function ℓ and its expectation. The proof of Theorem <ref> relies on a novel non-asymptotic Bahadur-type representation of _I - _R in the presence of changepoints across all sub-intervals I ⊂ (0, n].
In Theorem <ref>, Eq. (<ref>) indicates that the discrepancy between a Reliever-based loss Ł(I, _R) and its original counterpart Ł(I, _I) vanishes when the data within I are (nearly) homogeneous and log n / I→ 0, which provides a justification for the use of Reliever. However, for heterogeneous I that contains a changepoint located far from the boundaries, this vanishing property is not ensured. Surprisingly, the inequality Ł(I, _R)≥Ł(I, _I) in Eq. (<ref>) becomes valuable in excluding inconsistent changepoint estimators in these cases. Therefore, we can expect that Reliever can effectively track the original search path. To gain some intuition, consider a scenario where there is a single changepoint τ^∗ such that min(τ^∗, n - τ^∗) ≥δ_m or τ^∗∈𝒯_1(δ_m). We specify the grid search algorithm as the first step of the BS procedure and define the changepoint estimator as τ̂_original = _τ∈𝒯_1(δ_m) S^(I)_I(τ), where S^(I)_I(τ)=Ł(I_1, τ, _I_1, τ) + Ł(I_2, τ, _I_2, τ), and for any τ, I_1, τ = (0, τ] and I_2, τ = (τ, n]. The Reliever-based changepoint estimator is denoted as τ̂ = _τ∈𝒯_1(δ_m) S^(R)_I(τ), where S^(R)_I(τ)=Ł(I_1, τ, _R_1, τ) + Ł(I_2, τ, _R_2, τ), and R_j, τ⊂ I_j, τ is the corresponding relief interval for j=1,2. We present the following corollary which establishes the consistency of τ̂.
Assume δ_m = C_m log n for some constant C_m > 0, and the event described in Theorem <ref> holds. If there exists a sufficiently large constant C_2 > 0 such that for any τ∈𝒯_1(δ_m) satisfying |τ - τ^∗| > δ for a constant δ > 0,
S^(I)_I(τ) - S^(I)_I(τ^∗) > C_2 log n
holds, then |τ̂ - τ^∗| ≤δ.
Corollary <ref> is a direct consequence of Theorem <ref>. Assume |τ̂ - τ^∗| > δ. Since Ł(I, _R)≥Ł(I, _I) according to Eq. (<ref>), it follows that S^(R)_I(τ̂)≥ S^(I)_I(τ̂).
S^(R)_I(τ^∗)
≤ S^(I)_I(τ^∗) + 2C_1 {(1 - r) + n log n/τ^∗ (n - τ^∗)}log n.
Considering Eq. (<ref>), we have S^(R)_I(τ̂)-S^(R)_I(τ^∗) > C_2 log n - 2C_1 {(1 - r) + C_m^-1}log n ≥ 0, by selecting C_2 ≥ 2C_1 {(1-r) + C_m^-1}. Therefore, the assumption |τ̂ - τ^∗| > δ leads to a contradiction, consequently establishing the validity of Corollary <ref>. Eq. (<ref>) imposes implicit constraints on the model, ensuring that the original grid search algorithm produces a consistent changepoint estimator, i.e., |τ̂_original - τ^∗| ≤δ. The verification of Eq. (<ref>) or the establishment of a lower bound for S^(I)_I(τ) - S^(I)_I(τ^∗) is a widely accepted technique for justifying the consistency of changepoint estimators <cit.>. Corollary <ref> demonstrates that the consistency proof for the original grid search algorithm can readily be extended to the Reliever estimator.
§.§ High-dimensional Linear Models with Changepoints
To gain a comprehensive understanding of how variations in loss functions impact the accuracy of changepoint detection using the Reliever device, we investigate the problem of detecting multiple changepoints in high-dimensional linear models, which has recently garnered considerable attention <cit.>. In our study, the data consists of independent pairs of response and covariates, denoted as (y_i,_i)∈ℝ×ℝ^p, satisfying
y_i = _i^⊤_k^∗ + ϵ_i, τ_k-1^∗ < i ≤τ_k^∗, k=1,…,K^∗+1; i=1,…,n.
Here, {_k^∗} represent the regression coefficients, and {ϵ_i} denote the random noises. Our objective is to identify the unknown number of changepoints K^∗ and their corresponding locations {τ_k^∗} from the observed data. We take a conventional high-dimensional regime where both n and p diverge, and focus on the case of sparse regression coefficients.
We adopt the OP algorithm for detecting multiple changepoints, as proposed by <cit.>. We utilize the LASSO procedure to estimate the regression coefficients within a given interval I⊂(0,n] with I≥δ_m. The estimated coefficients, denoted as _I, are obtained by solving
_I = _∈^p{Ł(I; ) + λ_I_1},
where Ł(I; ) = ∑_i∈ I (y_i - _i^⊤)^2 represents the loss function for the interval I, and λ_I is a tuning parameter that promotes sparsity in the estimated coefficients. The original implementation of the OP algorithm involves minimizing the criterion
∑_k=1^K+1Ł((τ_k-1,τ_k];_(τ_k-1,τ_k]) + γ K,
over all candidate changepoints (τ_1,…,τ_K)∈𝒯_K(δ_m). Here, γ is an additional tuning parameter that discourages overestimation of the number of changepoints. The specific values of λ_I and γ will be given later in our theoretical analysis. To incorporate the Reliever procedure into the OP algorithm, as outlined in Section <ref>, we construct a collection of relief intervals with a coverage parameter 0<r<1. The criterion to be minimized then becomes
∑_k=1^K+1Ł((τ_k-1,τ_k];_R_k) + γ K, R_k = _R ∈, R ⊂ (τ_k-1,τ_k]R.
The optimization problems (<ref>) and (<ref>) can indeed be regarded as special cases of a more general optimization problem
min_(τ_1,…,τ_K)∈𝒯_K(δ_m){∑_k=1^K+1Ł((τ_k-1,τ_k];((τ_k-1,τ_k])) + γ K}.
Here, (I) can represent any valid estimator of the regression coefficients within an interval I such that I≥δ_m. By setting (I)=_I, we can recover (<ref>). Similarly, if we choose (I)=_R_I with R_I = _R ∈, R ⊂ IR, we obtain the problem (<ref>). The optimization (<ref>) can be addressed using the OP algorithm, which integrates a sequence of parameter estimation and loss evaluation steps along the search path, i.e., {Ł_I≡Ł(I;(I)): I⊂(0, n], I≥δ_m}. The dynamic ordering of the intervals I is determined by the OP algorithm itself.
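For concreteness, one possible choice of (I) and of the segment loss for this setting is sketched below, using scikit-learn's Lasso as a stand-in solver; the scaling λ_I ∝ √(|I| log(p ∨ n)) mirrors the theory that follows, but the constant lam_scale, the solver, and the exact form of the returned loss are assumptions of the sketch rather than the implementation used here.

import numpy as np
from sklearn.linear_model import Lasso

def fit_segment(X, y, s, e, lam_scale=1.0):
    """LASSO fit on the segment (s, e] with lambda_I ~ lam_scale * sqrt(|I| log(p v n))."""
    Xi, yi = X[s:e], y[s:e]
    n_seg, p = Xi.shape
    lam = lam_scale * np.sqrt(n_seg * np.log(max(p, len(y))))
    # sklearn's Lasso minimizes ||y - Xw||^2 / (2 n_samples) + alpha * ||w||_1, so
    # alpha = lam / (2 n_seg) matches the sum-of-squares loss plus lam * ||w||_1.
    return Lasso(alpha=lam / (2 * n_seg), fit_intercept=False).fit(Xi, yi)

def segment_loss(X, y, s, e, model):
    """Sum-of-squares loss of a fitted model on the segment (s, e] (penalty excluded)."""
    resid = y[s:e] - model.predict(X[s:e])
    return float(resid @ resid)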
We first state a deterministic claim regarding the consistency and near rate-optimality of the resulting changepoint estimators, but conditional on an event measuring the goodness of the solution path. To this end, we introduce some notations and conditions. For any interval I⊂(0, n], denote _I^∘ = _∈^p{Ł(I; )}, and define Δ_I = (I^-1∑_i ∈ I_i^∘ - _I^∘_Σ^2)^1/2, where _i^∘ = _i^∘ for i=1,…,n. For k=1,…,K^∗, let Δ_k = _k+1^∗ - _k^∗_Σ be the change magnitude at τ_k^∗, and we extend the definition to Δ_0 = Δ_K^∗ + 1 = ∞.
[Change signals]
There exists a sufficiently large constant C_𝗌𝗇𝗋>0 such that for k = 1,…,K^∗+1, τ_k^∗ - τ_k-1^∗≥ C_𝗌𝗇𝗋 slog(p ∨ n) (Δ_k-1^-2∨ 1 + Δ_k^-2∨ 1).
[Regression coefficients]
(a) Sparsity: _k≤ s < p, where _k = 1≤ j≤ p: β_k,j^∗≠ 0 and β_k,j^∗ is the jth component of _k^∗;
(b) Boundness: β_k,j^∗≤ C_β for some constant C_β>0.
[Covariates and noises]
(a) _i_i=1^n are i.i.d. with a sub-Gaussian distribution, having zero mean and covariance Σ. The Σ satisfies that 0<≤σ_x^2<∞, where =λ_min(Σ) and σ_x^2=λ_max(Σ) are the minimum and maximum eigenvalues of Σ, respectively. Furthermore, Σ^-1/2_i_Ψ_2≤ C_x for some constant C_x>0;
(b) ϵ_i_i=1^n are i.i.d. with a sub-Gaussian distribution, having zero mean, variance σ_ϵ^2, and sub-Gaussian norm C_ϵ.
These conditions are commonly adopted in the literature for multiple changepoint detection in high-dimensional linear models <cit.>. Specifically, Condition <ref> introduces a local multiscale signal-to-noise ratio (SNR) requirement for the spacing between neighboring changepoints, providing greater flexibility compared to the global SNR condition in existing works like <cit.> and <cit.>.
Given that Condition <ref> is satisfied. The solution (τ̂_1,…,τ̂_K̂) of the optimization problem (<ref>) with δ_m = C_m s log(p ∨ n) for a sufficiently large constant C_m>0, and γ = C_γ s log(p ∨ n) for a constant C_γ>0, satisfies that
K̂ = K^∗ and max_1 ≤ k ≤ K^∗min_1 ≤ j ≤K̂1/2Δ_k^2 |τ_k^∗ - τ̂_j| ≤C̃ s log(p ∨ n),
for some constant C > 0, conditional on the event 𝔾=𝔾_1∩𝔾_2∩𝔾_3. Here,
𝔾_1 = {I∈ E_1, Ł_I - ∑_i ∈ Iϵ_i^2 < C_<ref>lem:loc_err_g.1 s log(p ∨ n)},
𝔾_2 = {I∈ E_2, Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I < C_<ref>lem:loc_err_g.2 s log(p ∨ n)},
𝔾_3 = {I∈ E_3, Ł_I - ∑_i ∈ Iϵ_i^2 > (1 - C_<ref>lem:loc_err_g.3) Δ_I^2 I},
with E_1={I: Δ_I = 0}, E_2={I: 0 < Δ_I^2 I≤C s log(p ∨ n), I ∩≤ 1}and E_3={I: Δ_I^2 I≥C s log(p ∨ n)}, and C_<ref>lem:loc_err_g.1, C_<ref>lem:loc_err_g.2 and C_<ref>lem:loc_err_g.3 are positive constants. In addition, the constants C_γ and C only depends on C_𝗌𝗇𝗋, C_m, C_<ref>lem:loc_err_g.1, C_<ref>lem:loc_err_g.2, and C_<ref>lem:loc_err_g.3.
Lemma <ref> is actually a deterministic result. The probabilistic conditions come into play when certifying that the event 𝔾 holds with high probability for both the original implementation of the detection procedure with Ł_I=Ł(I;_I) and the accelerated version achieved through Reliever with Ł_I=Ł(I;_R_I). Lemma <ref> offers new insights into the requirements for the solution path of the OP algorithm to produce consistent and nearly rate-optimal changepoint estimators, which may be of independent interest. Theorem <ref> asserts that the event 𝔾 occurs with high probability when additional Conditions <ref>–<ref> are satisfied.
Given that Conditions <ref>–<ref> are satisfied. Let C_λ and C_γ be positive constants, and 0 < C_m < C_𝗌𝗇𝗋 be sufficiently large constants.
The solution (τ̂_1,…,τ̂_K̂) of either Problem (<ref>) or Problem (<ref>) with δ_m = C_m s log(p ∨ n), λ_I = C_λ C_x σ_x D_I √(Ilog(p ∨ n)), and γ = C_γ s log(p ∨ n), satisfies that
ℙ(K̂ = K^∗, max_1 ≤ k ≤ K^∗min_1 ≤ j ≤K̂Δ_k^2 |τ_k^∗ - τ̂_j| ≤C̃ s log(p ∨ n)) ≥ 1 - (p ∨ n)^-c,
where D_I = √(C_x^2Δ_I^2 + C_ϵ^2). The constants C_γ, C_λ, C̃ and c are independent of (n, p, s, K^∗).
Moreover, under the same event, there exists a constant C > 0 such that for all 1 ≤ k ≤ K^∗ + 1,
_(τ̂_k-1, τ̂_k] - _k^∗_2 ≤ C √(s log(p ∨ n)/τ_k^∗ - τ_k-1^∗).
Theorem <ref> demonstrates that under mild conditions and by appropriately choosing the tuning parameters γ and λ_I, both the original implementation of the OP algorithm (<ref>) and its counterpart (<ref>) consistently estimate the number of changepoints and achieve a state-of-the-art localization rate τ_k^∗ - τ̂_k/n ≤ CΔ_k^-2 s log(p ∨ n)/n with high probability. This localization rate exhibits the phenomenon of superconsistency for changepoint estimation in high-dimensional linear regression with multiple changepoints, extending a well-known result for single changepoint scenarios <cit.>. Importantly, our analysis allows for K^* to depend on n and potentially diverge. When K^*=O(1), the rate aligns with the findings in <cit.> and <cit.>, which employ OP-type algorithms. <cit.> allows for K^* to diverge and derives this rate using a WBS-type algorithm. Additionally, it is noteworthy that the tuning parameter λ_I, which controls the level of penalization for the model within I, not only scales with |I|^1/2 but also depends on the change magnitude Δ_I^2. In fact, determining the rate of λ_I involves examining the uniform bound of a sequence of mean-zero (sub-)gradients, where the variance is, however, influenced by Δ_I^2. When assuming that sup_IΔ_I^2=O(1), as done in previous works <cit.>, this dependence disappears, and thus λ_I specified in those works scales solely with |I|^1/2. Theorem <ref> offers valuable insights into the selection of the nuisance parameter, highlighting its change-adaptive nature. Although the detailed exploration of this aspect is beyond the scope of our paper, it calls for further research and investigation.
Upon initial examination, it may seem that Reliever enjoys a free lunch, as the localization rate appears to be independent of the coverage rate r. However, with a closer inspection of the proof, it becomes apparent that the coverage rate r is absorbed into the localization rate constant C̃ since r is fixed. Specifically, the value of C̃ depends on the constants C_𝗌𝗇𝗋, C_m, C_<ref>lem:loc_err_g.1, C_<ref>lem:loc_err_g.2, and C_<ref>lem:loc_err_g.3, as stated in Lemma <ref>. In fact, by choosing C_𝗌𝗇𝗋 and C_m sufficiently large, we have C̃ = 2 (1 - C_<ref>lem:loc_err_g.3)^-1 (3 C_<ref>lem:loc_err_g.1 + 10 C_<ref>lem:loc_err_g.2). It can be shown that the constants C_<ref>lem:loc_err_g.j for j=1,2,3 increase as r decreases, resulting in an increase in C̃ as r decreases. In other words, smaller values of r lead to worse localization rates. Therefore, the coverage rate r in Reliever provides a trade-off between computational efficiency and localization accuracy, as anticipated. In the regime where r→ 1, one can expect that the difference between Reliever and the original grid search algorithm would diminish. See Corollary <ref> in Supplementary Material for specific values of C_<ref>lem:loc_err_g.j, j=1,2,3.
§ NUMERICAL STUDIES
To demonstrate the advantages of employing the Reliever approach in conjunction with various change detection algorithms, we examine three grid search algorithms: SN, WBS, and SeedBS. We evaluate each algorithm under both a high-dimensional linear model and a nonparametric model. For illustrative purposes, we fix the number of wild intervals M=100 for WBS, and set the decay parameter a = 1/√(2) for SeedBS as recommended in <cit.>. All the results presented in Section <ref> are based on 500 replications.
§.§ High-dimensional Linear Regression Models
In the first scenario, we investigate the linear model (<ref>) with p = 100 and n∈{300,600,900,1200}. The covariates _i are i.i.d. from the standard multivariate Gaussian distribution, and the noises ϵ_i are i.i.d. from the standard Gaussian distribution 𝒩(0,1). We introduce three changepoints τ_k^∗_k=1^3 = ⌊ 0.22 n ⌋, ⌊ 0.55 n ⌋, ⌊ 0.77 n ⌋ into the model. The regression coefficients _k^∗ are generated such that θ_k,j=0 for j=3,…,p, and θ_k,1 and θ_k,2 are uniformly sampled, satisfying the signal-to-noise ratios _1_2 / √({ϵ_1}) = 2 and _k - _k-1_2 / √({ϵ_1}) = 1/2 for k = 2, 3, 4. Here θ_k,j denotes the jth element of _k. To estimate the sparse linear regression model, we utilize the package <cit.> in . We specify a set of hyperparameters λ, consisting of 30 values, and for each search interval I, we set λ_I = λ√(I). For a specific λ, we can apply any of the three grid search algorithms with the prior knowledge of the number of changepoints K^∗=3. Across the entire set of hyperparameters λ, we report the smallest detection error measured by the Hausdorff distance between the estimated and true changepoints, i.e.
max(max_1 ≤ k ≤ K^∗min_1 ≤ j ≤ K^∗ |τ_k^∗ - τ̂_j|, max_1 ≤ j ≤ K^∗min_1 ≤ k ≤ K^∗ |τ_k^∗ - τ̂_j|).
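For reference, a direct implementation of this error measure (assuming both changepoint sets are non-empty):

import numpy as np

def hausdorff(true_cps, est_cps):
    """Hausdorff distance between two non-empty sets of changepoint locations."""
    t, e = np.asarray(true_cps), np.asarray(est_cps)
    d = np.abs(t[:, None] - e[None, :])
    return max(d.min(axis=1).max(), d.min(axis=0).max())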
Figures <ref>–<ref> display the detection error and computation time associated with different grid search algorithms at varying values of the coverage rate parameter r. Notice that r=0.9 represents the recommended value for the Reliever method, while r=1 corresponds to the original implementation of each respective algorithm. The results indicate that as the coverage rate parameter r approaches 1, the performance of the Reliever method converges to that of the original implementation. Furthermore, when r = 0.9, the performance remains nearly identical to the original implementation, while achieving significant time savings. Even when r = 0.6, the performance is still acceptable, considering the negligible running time.
§.§ Changepoint Detection in the Nonparametric Model
In the second scenario, we examine the nonparametric changepoint model (<ref>), where the data z_i_i=1^n follows the distribution
z_i ∼ F_k(z), τ_k-1^∗ < i ≤τ_k^∗, k = 1,…,K^∗ + 1; i = 1,…,n.
Here F_k represents the cumulative distribution function (C.D.F.). <cit.> proposed an NMCD method. This approach involves defining the loss function corresponding to a search interval as the integrated nonparametric maximum log-likelihood function, fitting the model using the empirical C.D.F. of the data within that interval, and employing the OP algorithm to search for multiple changepoints. <cit.> further enhanced the computational efficiency by discretizing the integral and applying the PELT algorithm. To reduce the computational cost of fitting the model, which involves approximating the integral and can be computationally intensive, we leverage the Reliever method. Instead of using the empirical C.D.F. for the search interval, we replace it with its counterpart, constructed based on data within a relief interval. In this scenario, we consider the same three-changepoint setting as in the first scenario. The data for the four segments are generated from four different distributions, i.e., 𝒩(0,1), χ_(3)^2 (standardized chi-squared with 3 degrees of freedom), χ_(1)^2 and 𝒩(0,1). Figures <ref> and <ref> provide a summary of the detection error and computation time for the SN, WBS, and SeedBS algorithms. Notably, the Reliever method performs effectively for values of r larger than 0.7. In particular, the SN method is stable across different values of r.
§.§ Comparison with the Two-step Method
We present a comparative analysis between the Reliever method and the two-step approach proposed by <cit.>. The two-step method is specifically designed to detect a single changepoint in a high-dimensional linear model. It involves an initial guess of the changepoint, which divides the data into two intervals. Proxy models are then fitted within these intervals. Consequently, both methods expedite the process of change detection by reducing extensive model fits. To mitigate the uncertainty in the initialization, multiple guesses are considered, and a changepoint estimator that minimizes the total loss on both segments is reported. In our study, we consider the high-dimensional linear model discussed in Section 5.1 of <cit.>, with n = 1200 and τ^∗ = 120. We consider multiple initial guesses, specifically 0.25n, 0.5n, 0.75n. The results presented in Table <ref> indicate that although the two-step method may offer faster computation due to fewer model fits, it also exhibits larger detection errors. This can be attributed to its performance being heavily reliant on the accuracy of the initial changepoint estimate (or the quality of the corresponding intervals). In contrast, the Reliever method demonstrates stability across a range of choices for the parameter r, varying from 0.9 to 0.3.
The two-step method can be extended for multiple changepoint detection by incorporating the BS algorithm along with the multiple guess scheme, as suggested by <cit.>. This extension can also be applied to the WBS and SeedBS methods in a similar manner. In our study, we examine the examples presented in Sections <ref> and <ref> with n=1200. Multiple initial guesses are selected as m-equally spaced quantiles within a search interval, following the recommendation by <cit.>. The results depicted in Table <ref> reveal that the two-step approach is less efficient for multiple changepoint detection, and increasing the number of multiple initial guesses can even have a detrimental impact on its performance. In contrast, the Reliever method (with r=0.9) exhibits performance that is almost comparable to that of the original implementation.
§ CONCLUDING REMARKS
Searching for multiple changepoints in complex models with large datasets poses significant computational challenges. Current algorithms involve fitting a sequence of models and evaluating losses within numerous intervals during the search process. Existing approaches, such as PELT, WBS, SeedBS, and optimistic search algorithms, aim to reduce the number of (search) intervals. In this paper, we introduce Reliever, which specifically relieves the computational burden by reducing the number of fitted models, as they are the primary contributors to computational costs. Our method associates each search interval with a deterministic (relief) interval from a pre-defined pool, enabling the fitting of models only within (or partially within) these selected intervals. The simplicity of the Reliever approach allows for seamless integration with various grid search algorithms and accommodates different models, providing tremendous potential for leveraging modern machine learning tools <cit.>.
Reliever incorporates a coverage rate parameter, which balances computational efficiency and estimation accuracy. For high-dimensional regression models with changepoints, by employing an OP algorithm, we characterize requirements on the search path to ensure consistent and nearly rate-optimal estimators for changepoints; see Lemma <ref>. Our analysis demonstrates that the Reliever method satisfies these properties for any fixed coverage rate parameter. Further investigation is warranted to characterize the search path for other algorithms and broader model classes. Additionally, our theoretical analysis highlights the importance of adaptively selecting the nuisance parameter based on the underlying change magnitude. Future research should focus on extending Reliever to enable data-driven selection of nuisance parameters. While Reliever focuses on changepoint estimation, it is worth exploring the generalization of these concepts to quantify uncertainty in changepoint detection <cit.> and perform post-change-estimation inference <cit.>.
§ APPENDIX
§ CONDITIONS IN THEOREM <REF>
Define Ł(I, ) = ∑_i ∈ Iℓ(_i, ), Ł(I, ) = Ł(I, ), G_I() = I^-1∑_i ∈ I g(_i, _I^∘ + I^-1/2) and G_I() = G_I(), where g(, ) = ∇_ℓ(, ). The sub-Exponential norm of a sub-Exponential random variable X is defined as X_Ψ_1 = inft > 0: exp(|X|/t) ≤ 2. For X∈ℝ^p, we define _Ψ_1 = sup_∈^p-1^⊤_Ψ_1.
* ℓ(·, ) is convex on the domain Θ for all fixed and Θ is a compact and convex subset of ^p.
* The expectation ℓ(_i, ) is finite for all _i and fixed ∈Θ.
* The population minimizer _I^∘ uniquely exists and is interior point of Θ.
* g(_i, )_Ψ_1≤ C_<ref>sec:proof_mest.1 for each near _I^∘.
* Ł(I, ) is twice differentiable at _I^∘ and _I≜I^-1∇_^2 Ł(I, _I^∘) is positive-definite.
* G_I(I^1/2( - _I^∘)) - _I ( - _I^∘) = C_<ref>sec:proof_mest.2 - _I^∘_2^2.
* g(_i, ) - g(_i, _I^∘)_Ψ_1≤ C_<ref>sec:proof_mest.3 - _I^∘_2.
* I^-1Ł(I, ) is ρ-strongly convex in the compact set Θ.
* g(_i, ) is ζ-Lipschitz continuous w.r.t. .
* For i ∈ I ∖ R, _R^∘ - _i^∘_2 ≤Δ_∞ where Δ_∞ > 0 is a fixed constant.
* _R^-1 - _I^-1_𝗈𝗉≤ C_<ref>sec:proof_mest.4_R^∘ - _I^∘_2 and _I^-1_𝗈𝗉≤ C_<ref>sec:proof_mest.5 for any interval I.
Supplementary Material for “Reliever: Relieving the Burden of Costly Model Fits for Changepoint Detection”
Supplementary Material includes proofs of Theorem <ref>, Lemma <ref> and Theorem <ref>, and additional simulation results.
§ PROOF OF THEOREM <REF>
For a fixed , denote the random vectors _i by
_i = g(_i, _I^∘ + /√(I)) - g(_i, _I^∘).
Denote v_I = (log n)^1/2. By (g), uniformly for all _2 ≤ M v_I (with some constant M > 0), _i_Ψ_1≤ C_<ref>sec:proof_mest.3 M v_I I^-1/2. Therefore by applying an exponential inequality,
sup__2 ≤ M v_I[ G_I() - G_I() - G_I()≥C_u C_<ref>sec:proof_mest.3 M/c_bI^-1 v_I √(log n)] ≤ 2 exp(-C_u log n).
By (f),
sup__2 ≤ M v_I_I/√(I) - G_I()≤ C_<ref>sec:proof_mest.2 M^2 v_I^2 I^-1.
The above two inequalities imply that
sup__2 ≤ M v_I[ G_I() - G_I() - _I/√(I)≥C_u C_<ref>sec:proof_mest.3 M/c_bI^-1 v_I √(log n)] ≤ 2 exp(-C_u log n).
By the chaining technique for convex function, i.e. the δ-triangulation argument used in <cit.>,
[sup__2 ≤ M v_IG_I() - G_I() - _I/√(I)≥ C_<ref>sec:proof_mest.6I^-1 v_I √(log n)] ≤ 2 I^p/2exp(- C_u log n).
By the sub-Exponential assumption, We can choose M > 0 such that [√(I)_I^-1 G_I()_2 ≥ (M - 1) √(log n)] ≤ 2exp(-C_u log n). It implies that with high probability, √(I)_I^-1 G_I() is in the ball ∈^p: _2 < (M - 1) √(log n). For all ∈^p with _2 = 1, let = -√(I){_I^-1 G_I() + (K log n) I^-1} with K = 2 C_<ref>sec:proof_mest.6/λ_min(_I). With probability at least 1 - 2 (1 + I^p/2) exp(-C_u log n),
^⊤ G_I(I^1/2_I^-1 G_I() + (K log n) I^-1/2)
≥ (K I^-1log n) ·^⊤_I - C_<ref>sec:proof_mest.6I^-1log n > 0.
It means that _I is in the open ball _I^∘ - _I^-1 G_I() + (K log n) I^-1: _2 < 1. By taking the union bounds over the intervals I ⊂ (0, n], uniformly with probability at least 1 - exp(- C_<ref>sec:proof_mest.7log n),
(_I - _I^∘) = - _I^-1 G_I() + _I,
where max_I⊂ (0, n]_II / log n = O(1).
Now we have obtained the uniform Bahadur representation that holds over I ⊂ (0, n] with high probability. To measure the difference between _I and _R, we first consider the population minimizers. Recall that R ∈ is the relief interval of I.
0 ≤Ł(I, _R^∘) - Ł(I, _I^∘) ≤∇_θŁ(I, _R^∘)^⊤ (_R^∘ - _I^∘) - ρI/2_R^∘ - _I^∘_2^2,
which implies that
_R^∘ - _I^∘_2 ≤2/ρI∑_i ∈ I ∖ R g(_i, _R^∘)_2 ≤2 ζ/ρI∑_i ∈ I ∖ R_R^∘ - _i^∘_2 = O(1-r).
Assume hereafter that the Bahadur representation Eq. (<ref>) holds. We have the following identity for the difference between _I and _R,
_I - _R = _I^∘ - _R^∘ + _R^-1 G_R() - _I G_I() + _I - _R.
For _R^-1 G_R() - _I G_I(), further consider the following decomposition,
_R^-1 G_R() - _I G_I() = (_R^-1 - _I^-1) G_R() + _I^-1{G_R() - G_I()}.
For the first part, by the sub-Exponential assumption (d), with probability at least 1 - exp(-C_ulog n),
(_R^-1 - _I^-1) G_R()_2 ≤ C_<ref>sec:proof_mest.8_I^∘ - _R^∘_2 [(log n/R)^1/2 + log n/R].
For the second part,
G_R() - G_I() = ∑_i ∈ R[1/R g(_i, _R^∘) - 1/I g(_i, _I^∘)] - ∑_i ∈ I ∖ R1/I g(_i, _I^∘) ≜1/I∑_i ∈ I_i,
where _i = [g(_i, _R^∘) I / R] - g(_i, _I^∘) for i ∈ R and _i = - g(_i, _I^∘) for i ∈ I ∖ R. For any individual i ∈ R, by assumptions (d) and (g),
_i_Ψ_1 = {g(_i, _R^∘) - g(_i, _I^∘)} + (1-r)/r g(_i, _R^∘)_Ψ_1
≤_I^∘ - _R^∘_2 + 1-r/r (C_<ref>sec:proof_mest.3_R^∘ - _i^∘_2 + C_<ref>sec:proof_mest.1) ≤_I^∘ - _R^∘_2 + 1-r/rC_<ref>sec:proof_mest.9
For i ∈ I ∖ R,
_i_Ψ_1≤ (C_<ref>sec:proof_mest.3_I^∘ - _i^∘_2 + C_<ref>sec:proof_mest.1) ≤ C_<ref>sec:proof_mest.9.
In the above two bounds, we use Condition (j), the boundness of parameters. By Bernstein's inequality (Lemma <ref>), with probability at least 1 - exp(-C_ulog n),
G_R() - G_I()_2 = C_<ref>sec:proof_mest.10[(_I^∘ - _R^∘_2 + (1 - r)^1/2) (log n/I)^1/2 + log n/I].
Overall we obtain,
_I - _R_2 ≤ O(_I^∘ - _R^∘_2 + (1 - r)^1/2(log n/I)^1/2 + log n/I).
By the definition of _R, one obtains ∇_Ł(I, _R) = ∑_i ∈ I ∖ R g(_i, _R). Similarly, by the δ-triangulation argument used in the proof of the Bahadur representation, with probability at least 1 - exp(-C_u log n), uniformly for all intervals I,
∑_i ∈ I ∖ R{g(_i, _R) - g(_i, _R^∘) - [g(_i, _R) - g(_i, _R^∘)]}_2 = O(log n),
∑_i ∈ I ∖ R{g(_i, _R) - g(_i, _R^∘)}_2 ≤ζ (1 - r) _R - _R^∘_2 = O((1 - r) √(Ilog n) + log n),
∑_i ∈ I ∖ R g(_i, _R^∘)_2 = ∑_i ∈ I ∖ R g(_i, _R^∘)_2 + O(√((1-r)Ilog n) + log n).
Combining the above three upper bounds,
∇_Ł(I, _R) = ∑_i ∈ I ∖ R g(_i, _R^∘)_2 + O((1 - r) √(Ilog n) + log n).
By the convexity condition (h),
1/I{Ł(I, _R) - Ł(I, _I)}≤1/I∇_Ł(I, _R)^⊤ (_R - _I) ≤1/I∇_Ł(I, _R)_2 _R - _I_2
= O(1/ρI^2∑_i ∈ I ∖ R g(_i, _R^∘)_2^2 + (1 - r) log n/I + (log n)^2/I^2).
When I = (s, e] contains no changepoint, or it is nearly homogeneous such that if a true changepoint τ∈ I, then min(τ - s, e - τ) = O(log n), we have ∑_i ∈ I ∖ R g(_i, _R^∘) = O(min(τ - s, e - τ)) = O(log n). Therefore,
1/I{Ł(I, _R) - Ł(I, _I)} = O((1 - r) log n/I + (log n)^2/I^2).
§ PROOF OF LEMMA <REF>
We first introduce some notations. For a given changepoint estimation τ∈ [n] and a changepoints set = 0 < τ_1 < … < τ_K < τ_K+1 < n, denote _+(τ,) ≜min_kk : τ_k > τ and _-(τ,) ≜max_kk: τ_k < τ. For simplicity, further denote k_τ,+^∗ = _+(τ, ), k̂_τ,+ = _+(τ, ), k_τ,-^∗ = _-(τ, ) and k̂_τ,- = _-(τ, ). Let = τ̂_1,…,τ̂_K be the minimizer of Eq. (<ref>). Denote δ_m = C_m s log (p ∨ n) and δ_k = 2 C s log(p ∨ n) Δ_k^-2 where Δ_k = _k+1^∗ - _k^∗_Σ, and = (τ̂_a, τ̂_a+1]: ∃ h ∈ [K^∗], min(τ_h^∗ - τ̂_a,τ̂_a+1 - τ_h^∗) > δ_h.
Assume that ≠∅, i.e. ∃ h ∈ [K^∗] such that ∩ [τ_h^∗ - δ_h, τ_h^∗ + δ_h] = ∅. For such h and a, assume without loss of generality that τ_h^∗ - τ̂_a > δ_h; then (τ_h^∗ - δ_h, τ_h^∗ + δ_h] ⊂ (τ̂_a, τ̂_a+1] and Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) ≥ 2 δ_h Δ_(τ_h^∗ - δ_h, τ_h^∗ + δ_h]^2 = δ_h Δ_h^2/2 = C s log(p ∨ n).
To move further, we need the following definitions to divide into four groups.
For a changepoint estimate τ and the true changepoint set , let u = k_τ,-^∗ and v = k_τ,+^∗.
We say that τ is separable from the left if τ - τ_u^∗ > δ_u ∨δ_m and separable from the right if τ_v^∗ - τ > δ_v ∨δ_m.
Otherwise, τ is inseparable from the left (right).
For the intervals (τ_l,τ_r] ∈, we make the following definitions,
* (τ_l,τ_r] ∈ (0, n] is separable if τ_l is separable from the right and τ_r is separable from the left.
* (τ_l,τ_r] ∈ (0, n] is left-separable if τ_l is separable from the right and τ_r is inseparable from the left.
* (τ_l,τ_r] ∈ (0, n] is right-separable if τ_l is inseparable from the right and τ_r is separable from the left.
* (τ_l,τ_r] ∈ (0, n] is inseparable if τ_l is inseparable from the right and τ_r is inseparable from the left.
Now the sub-intervals in have been classified into four groups = _1 ∪_2 ∪_3 ∪_4. We will show that = ∅ by showing that each of these groups is empty.
§.§.§ Case 1: _1 = ∅
For (τ̂_a, τ̂_a+1] ∈_1, let h = k_τ̂_a,+^∗. Denote _a = τ_h^∗,…,τ_h+t^∗=∩ (τ̂_a, τ̂_a+1). Let = ∪_a. Since γ = C_γ s log(p ∨ n),
L() - L() = C_(τ̂_a, τ̂_a+1] - [C_(τ̂_a, τ_h^∗] + C_(τ_h+t^∗, τ̂_a+1] + ∑_j=h^h+t-1 C_(τ_j^∗, τ_j+1^∗] + (t+1)γ]
> (1 - C_<ref>lem:loc_err_g.3) Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) - (t+2)C_<ref>lem:loc_err_g.1 s log(p ∨ n) - (t+1) γ
= (1 - C_<ref>lem:loc_err_g.3) ∑_i ∈ (τ̂_a, τ̂_a+1]_i^∘ - _(τ̂_a, τ̂_a+1]^∘_Σ^2 - [(t+2) C_<ref>lem:loc_err_g.1 + (t+1) C_γ] s log(p ∨ n)
≥ [(1 - C_<ref>lem:loc_err_g.3) (t + 1) C̃ - (t+2) C_<ref>lem:loc_err_g.1 - (t+1) C_γ] s log(p ∨ n) > 0,
provided that C̃≥ (1 - C_<ref>lem:loc_err_g.3)^-1 (2 C_<ref>lem:loc_err_g.1 + C_γ). Therefore _1 = ∅.
§.§.§ Case 2: _2 = _3 = ∅
Without loss of generality, by the symmetry of _2 and _3, we only show that _3 = ∅. If the claim does not hold, one can choose (τ̂_a, τ̂_a+1] ∈_3 to be the leftmost one. Hence τ̂_a must be separable from the left by Condition <ref>. Since _1 = ∅ and (τ̂_a, τ̂_a+1] is the leftmost interval in _3, one obtains (τ̂_a-1, τ̂_a] ∉. Denote h = k_τ̂_a,+^∗ and _a = ∩ (τ̂_a + δ_m, τ̂_a+1-δ_m) = τ_h+1^∗,…,τ_h+t^∗ (t=0 if _a = ∅). Let = (∖τ̂_a) ∪τ_h^∗∪_a = (∖τ̂_a) ∪τ_j^∗_j=h^h+t.
L() - L() = C_(τ̂_a, τ̂_a+1] + (C_(τ̂_a-1, τ̂_a] - C_(τ̂_a-1, τ_h^∗]) - [∑_j=h^h+t-1 C_(τ_j^∗, τ_j+1^∗] + C_(τ_h+t^∗, τ̂_a+1] + t γ]
> (1 - C_<ref>lem:loc_err_g.3)Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) - [(t+1) C_<ref>lem:loc_err_g.1 + t C_γ] s log(p ∨ n)
+ (∑_i ∈ (τ̂_a,τ_h^∗]ϵ_i^2 + C_(τ̂_a-1, τ̂_a] - C_(τ̂_a-1, τ_h^∗]).
Since (τ̂_a-1, τ̂_a] ∉ and 0 < τ̂_a - τ_h^∗ < δ_m, one must obtain that either (τ̂_a-1, τ̂_a) ∩^∗ = ∅ or 0 < τ_h-1^∗ - τ̂_a-1 < δ_h-1 = 2CΔ_h-1^-2 s log(p ∨ n).
For the first scenario, under 𝔾_1,
∑_i ∈ (τ̂_a,τ_h^∗]ϵ_i^2 + C_(τ̂_a-1, τ̂_a] - C_(τ̂_a-1, τ_h^∗]≤ 2 C_<ref>lem:loc_err_g.1 s log(p ∨ n).
Hence,
L() - L() > (1 - C_<ref>lem:loc_err_g.3)Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) - [(t+3) C_<ref>lem:loc_err_g.1 + t C_γ] s log(p ∨ n)
≥{(1 - C_<ref>lem:loc_err_g.3)(t ∨ 1)C̃ - (t + 3) C_<ref>lem:loc_err_g.1 - t C_γ} s log(p ∨ n) > 0,
provided that C̃≥ (1 - C_<ref>lem:loc_err_g.3)^-1 (4 C_<ref>lem:loc_err_g.1 + C_γ).
For the second scenario, let I_1 = (τ̂_a-1, τ̂_a] and I_2 = (τ̂_a-1, τ_h^∗]. Firstly, we will bound the gap Δ_I_2^2 I_2 - Δ_I_1^2 I_1. Since I_1 ⊂ I_2, we have Δ_I_2^2 I_2 - Δ_I_1^2 I_1≥ 0.
Denote d_1 = τ_h-1^∗ - τ̂_a-1, d_2 = τ̂_a - τ_h-1^∗ and d_3 = τ_h^∗ - τ̂_a. Recall that Δ_h-1 = _h^∗ - _h-1^∗_Σ and the definition of Δ_I^2, we have
Δ_I_2^2 I_2 = d_1 (d_2 + d_3)/d_1 + d_2 + d_3Δ_h-1^2, Δ_I_1^2 I_1 = d_1 d_2/d_1 + d_2Δ_h-1^2.
It follows that
Δ_I_2^2 I_2 - Δ_I_1^2 I_1 = d_1^2 d_3 Δ_h-1^2/(d_1 + d_2)(d_1 + d_2 + d_3)≤C^2 ( C∨ C_m)/C_𝗌𝗇𝗋(C_𝗌𝗇𝗋 - C∨ C_m) s log(p ∨ n).
where the last inequality is from the conditions d_1 ≤CΔ_h-1^-2 s log(p ∨ n), d_3 ≤δ_h ∨δ_m and d_1 + d_2 + d_3 ≥ C_𝗌𝗇𝗋 s log(p ∨ n) [1 + Δ_h-1^-2 + Δ_h^-2]. Denote C_m,1 = C^2 ( C∨ C_m)/C_𝗌𝗇𝗋(C_𝗌𝗇𝗋 - C∨ C_m).
By 0 < τ_h-1^∗ - τ̂_a-1 < δ_h-1 = 2CΔ_h-1^-2 s log(p ∨ n), Δ_I_1^2 I_1≤Δ_I_2^2 I_2≤C s log(p ∨ n). Hence combining 𝔾_2,
∑_i ∈ (τ̂_a,τ_h^∗]ϵ_i^2 + C_(τ̂_a-1, τ̂_a] - C_(τ̂_a-1, τ_h^∗] > -(2 C_<ref>lem:loc_err_g.2 + C_m,1) s log(p ∨ n).
By Eq. (<ref>) and Eq. (<ref>),
L() - L() > (1 - C_<ref>lem:loc_err_g.3)Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) - [(t + 1) C_<ref>lem:loc_err_g.1 + t C_γ + 2 C_<ref>lem:loc_err_g.2 + C_m,1] s log(p ∨ n)
≥ [(1 - C_<ref>lem:loc_err_g.3)(t ∨ 1) C̃ - (t + 1) C_<ref>lem:loc_err_g.1 - t C_γ - 2 C_<ref>lem:loc_err_g.2 - C_m,1] s log(p ∨ n) > 0,
provided that C̃≥ (1 - C_<ref>lem:loc_err_g.3)^-1 (2 C_<ref>lem:loc_err_g.1 + C_γ + 2 C_<ref>lem:loc_err_g.2 + C_m,1). Hence _2 ∪_3 = ∅.
§.§.§ Case 3: _4 = ∅
Similar to Case 2, for (τ̂_a, τ̂_a+1] ∈_4, τ̂_a is separable from the left and τ̂_a+1 is separable from the right. By the fact that _1 ∪_2 ∪_3 = ∅, we also obtain (τ̂_a-1, τ̂_a] ∉ and (τ̂_a+1, τ̂_a+2] ∉. Let h = k_τ̂_a,+^∗ and h + t = k_τ̂_a+1,-^∗. Denote _a = τ_h^∗,…,τ_h+t^∗ and = (∖τ̂_a, τ̂_a+1) ∪_a. We have
L() - L() = C_(τ̂_a, τ̂_a+1] +[C_(τ̂_a-1, τ̂_a] + C_(τ̂_a+1, τ̂_a+2] - C_(τ̂_a-1, τ_h^∗] - C_(τ_h+t^∗, τ̂_a+2]]
- ∑_j=h^h+t-1 C_(τ_j^∗, τ_j+1^∗] - (t-1)γ
> (1 - C_<ref>lem:loc_err_g.3)Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) - ( t C_<ref>lem:loc_err_g.1 + (t-1) C_γ) s log(p ∨ n)
+ [ ∑_i ∈ (τ̂_a, τ_h^∗] ∪ (τ_h+1^∗, τ̂_a+1]ϵ_i^2 + C_(τ̂_a-1, τ̂_a] + C_(τ̂_a+1, τ̂_a+2] - C_(τ̂_a-1, τ_h^∗] - C_(τ_h+t^∗, τ̂_a+2]].
Following the same discussion as in Case 2 (see Eq. (<ref>)), we have
∑_i ∈ (τ̂_a, τ_h^∗] ∪ (τ_h+1^∗, τ̂_a+1]ϵ_i^2 + C_(τ̂_a-1, τ̂_a] + C_(τ̂_a+1, τ̂_a+2] - C_(τ̂_a-1, τ_h^∗] - C_(τ_h+t^∗, τ̂_a+2] > -(4 C_<ref>lem:loc_err_g.2 + 2 C_m,1) s log(p ∨ n).
Hence,
L() - L()
> (1 - C_<ref>lem:loc_err_g.3)Δ_(τ̂_a, τ̂_a+1]^2 (τ̂_a+1 - τ̂_a) - [t C_<ref>lem:loc_err_g.1 + (t-1) C_γ + 4 C_<ref>lem:loc_err_g.2 + 2 C_m,1] s log(p ∨ n)
≥ {(1 - C_<ref>lem:loc_err_g.3)[(t-1) ∨ 1] C̃ - t C_<ref>lem:loc_err_g.1 - (t-1) C_γ - 4 C_<ref>lem:loc_err_g.2 - 2 C_m,1} s log(p ∨ n) ≥ 0
provided that C̃≥ (1 - C_<ref>lem:loc_err_g.3)^-1(2 C_<ref>lem:loc_err_g.1 + C_γ + 4 C_<ref>lem:loc_err_g.2 + 2 C_m,1).
In summary, we obtain = ∅ provided that C̃≥ (1 - C_<ref>lem:loc_err_g.3)^-1(2 C_<ref>lem:loc_err_g.1 + C_γ + 4 C_<ref>lem:loc_err_g.2 + 2 C_m,1). Hence max_1 ≤ j ≤ K^∗min_1 ≤ k ≤K1/2Δ_j^2 |τ_j^∗ - τ̂_k| ≤C̃ s log(p ∨ n). It also implies that K≥ K^∗.
It remains to show that K̂≤ K^∗. Otherwise, assume that K̂ > K^∗. Then there must be j ∈ [0, K^∗] and k ∈ [1, K̂] such that τ_j^∗ - δ_j ≤τ̂_k-1 < τ̂_k < τ̂_k + 1≤τ_j+1^∗ + δ_j+1. Similar to the decomposition of , we can also divide it into four groups.
* τ_j^∗≤τ̂_k-1 < τ̂_k < τ̂_k + 1≤τ_j+1^∗.
* τ_j^∗ - δ_j ≤τ̂_k-1 < τ_j^∗ and τ_j^∗≤τ̂_k < τ̂_k+1≤τ_j+1^∗.
* τ_j^∗≤τ̂_k-1 < τ̂_k ≤τ_j+1^∗ and τ_j+1^∗ < τ̂_k+1≤τ_j+1^∗ + δ_j+1.
* τ_j^∗ - δ_j ≤τ̂_k-1 < τ_j^∗≤τ̂_k ≤τ_j+1^∗ < τ̂_k+1≤τ_j+1^∗ + δ_j+1
§.§.§ Case 1: _1 = ∅
Let = ∖τ̂_k. We have
L() - L() = C_(τ̂_k-1, τ̂_k+1] - C_(τ̂_k-1, τ̂_k] - C_(τ̂_k, τ̂_k+1] - γ
< (3 C_<ref>lem:loc_err_g.1 - C_γ) s log(p ∨ n) ≤ 0,
provided that C_γ≥ 3 C_<ref>lem:loc_err_g.1.
§.§.§ Case 2: _2 ∪_3 = ∅
We will show that _2 = ∅ because the proof for _3 = ∅ is the same by symmetry. Assume that (j, k) is the leftmost pair satisfying _2. This implies that τ̂_k-2∈ [τ_j-1^∗ - δ_j-1, τ_j-1^∗ + δ_j-1]. Otherwise, assume τ̂_k-2 > τ_j-1^∗ + δ_j-1. Since max_1 ≤ j ≤ K^∗min_1 ≤ k ≤KΔ_j^2 τ_j^∗ - τ̂_k≤C s log(p ∨ n), there must be τ̂_k - h∈ [τ_j-1^∗ - δ_j-1, τ_j-1^∗ + δ_j-1] for some h > 2, which contradicts the fact that _1 = ∅ and the choice of k.
Let = τ_j^∗∪∖τ̂_k-1, τ̂_k.
L() - L() = C_(τ̂_k-2, τ_j^∗] + C_(τ_j^∗, τ̂_k+1] - [∑_t = k-2^k C_(τ̂_t, τ̂_t+1] + γ]
= [C_(τ̂_k-2, τ_j^∗] - C_(τ̂_k-2, τ̂_k-1]] + C_(τ_j^∗, τ̂_k+1] - [∑_t = k-1^k C_(τ̂_t, τ̂_t+1] + γ]
< [ C_(τ̂_k-2, τ_j^∗] - C_(τ̂_k-2, τ̂_k-1] - ∑_i ∈ (τ̂_k-1, τ_j^∗]ϵ_i^2 ] + (2 C_<ref>lem:loc_err_g.1 + C_<ref>lem:loc_err_g.2 - C_γ) s log(p ∨ n)
≤ (2 C_<ref>lem:loc_err_g.1 + 3 C_<ref>lem:loc_err_g.2 + C_m,1 - C_γ) s log(p ∨ n) ≤ 0,
provided C_γ≥ 2 C_<ref>lem:loc_err_g.1 + 3 C_<ref>lem:loc_err_g.2 + C_m,1. The second-to-last inequality follows from Eq. (<ref>).
§.§.§ Case 3: _4 = ∅
Now _1 ∪_2 ∪_3 = ∅. Assume that τ̂_k-1, τ̂_k, τ̂_k+1 satisfy _4. Similar to the analysis of _2 = ∅, we have τ̂_k-2∈ [τ_j-1^∗ - δ_j-1, τ_j-1^∗ + δ_j-1] and τ̂_k+2∈ [τ_j+1^∗ - δ_j+1, τ_j+1^∗ + δ_j+1]. Following the same arguments as in the proof for _4 = ∅, we can set = τ_j^∗, τ_j+1^∗∪∖τ̂_k-1, τ̂_k, τ̂_k+1.
L() - L() = C_(τ̂_k-2, τ_j^∗] + C_(τ_j^∗, τ_j+1^∗] + C_(τ_j+1^∗, τ̂_k+2] - [∑_t = k-2^k+1 C_(τ̂_t, τ̂_t+1] + γ]
= [C_(τ̂_k-2, τ_j^∗] - C_(τ̂_k-2, τ̂_k-1] + C_(τ_j+1^∗, τ̂_k+2] - C_(τ̂_k+1, τ̂_k+2]]
+ C_(τ_j^∗, τ_j+1^∗] - [∑_t = k-1^k C_(τ̂_t, τ̂_t+1] + γ]
< [ C_(τ̂_k-2, τ_j^∗] - C_(τ̂_k-2, τ̂_k-1] + C_(τ_j+1^∗, τ̂_k+2] - C_(τ̂_k+1, τ̂_k+2] - ∑_i ∈ (τ̂_k-1, τ_j^∗] ∪ (τ_j+1^∗, τ̂_k+1]ϵ_i^2 ]
+ (C_<ref>lem:loc_err_g.1 + 2 C_<ref>lem:loc_err_g.2 - C_γ) s log(p ∨ n)
≤ (C_<ref>lem:loc_err_g.1 + 6 C_<ref>lem:loc_err_g.2 + 2 C_m,1 - C_γ) s log(p ∨ n) ≤ 0,
provided C_γ≥ C_<ref>lem:loc_err_g.1 + 6 C_<ref>lem:loc_err_g.2 + 2 C_m,1. The second-to-last inequality follows from Eq. (<ref>).
Combining the proofs of the two parts above, we can determine the two constants by solving the following inequalities,
{
C_γ ≥ C_<ref>lem:loc_err_g.1 + 6 C_<ref>lem:loc_err_g.2 + 2 C_m,1
C ≥ (1 - C_<ref>lem:loc_err_g.3)^-1(2 C_<ref>lem:loc_err_g.1 + C_γ + 4 C_<ref>lem:loc_err_g.2 + 2 C_m,1)
.
Since C_𝗌𝗇𝗋 and C_m are sufficiently large, C_m,1 = C^2 ( C∨ C_m)/C_𝗌𝗇𝗋(C_𝗌𝗇𝗋 - C∨ C_m) = C^2 C_m/C_𝗌𝗇𝗋(C_𝗌𝗇𝗋 - C_m). Letting C_γ = C_<ref>lem:loc_err_g.1 + 6 C_<ref>lem:loc_err_g.2 + 2 C_m,1, one obtains the following inequality w.r.t. C,
4 C_mC^2/C_𝗌𝗇𝗋 (C_𝗌𝗇𝗋 - C_m) - (1 - C_<ref>lem:loc_err_g.3) C + 3 C_<ref>lem:loc_err_g.1 + 10 C_<ref>lem:loc_err_g.2≥ 0.
Treating it as a quadratic inequality in C, we find that solutions exist if and only if C_𝗌𝗇𝗋 (C_𝗌𝗇𝗋 - C_m) ≥ 16 (1 - C_<ref>lem:loc_err_g.3)^-2 C_m (3 C_<ref>lem:loc_err_g.1 + 10 C_<ref>lem:loc_err_g.2). Solving it, we have
C = a - √(a^2 - b)≤b/2 √(a^2 - b)≤b/a = 2 (1 - C_<ref>lem:loc_err_g.3)^-1 (3 C_<ref>lem:loc_err_g.1 + 10 C_<ref>lem:loc_err_g.2),
satisfies Eq. (<ref>). Here a = (1 - C_<ref>lem:loc_err_g.3) C_𝗌𝗇𝗋 (C_𝗌𝗇𝗋 - C_m)/8 C_m and b = (3 C_<ref>lem:loc_err_g.1 + 10 C_<ref>lem:loc_err_g.2) C_𝗌𝗇𝗋 (C_𝗌𝗇𝗋 - C_m)/4 C_m. The last inequality in Eq. (<ref>) holds provided that C_𝗌𝗇𝗋 is sufficiently large such that b ≤ 3 a^2 / 4.
It follows that = provided that Eq. (<ref>) holds. Finally, we obtain
K = K^∗; max_1 ≤ k ≤ K^∗min_1 ≤ j ≤K1/2Δ_k^2 |τ_k^∗ - τ̂_j| ≤C s log(p ∨ n),
with C = 2 (1 - C_<ref>lem:loc_err_g.3)^-1 (3 C_<ref>lem:loc_err_g.1 + 10 C_<ref>lem:loc_err_g.2).
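For readers who prefer a computational view of the estimator analyzed above, the penalized objective (sum of interval costs plus γ per changepoint) can be minimized exactly by dynamic programming. The sketch below is only illustrative: the interval cost in the main text is a per-interval (penalized) LASSO fit and γ = C_γ s log(p ∨ n), whereas here a univariate mean cost and an arbitrary γ are used, and the O(n²) dynamic program is a generic solver rather than the algorithm studied in the paper.

```python
import numpy as np

def interval_cost(y, s, e):
    # Stand-in for C_(s, e]: squared-error cost of interval (s, e] around its mean.
    seg = y[s:e]
    return np.sum((seg - seg.mean()) ** 2) if e > s else 0.0

def penalized_segmentation(y, gamma, min_len=1):
    """Minimize sum of interval costs + gamma * (#changepoints) by dynamic programming."""
    n = len(y)
    F = np.full(n + 1, np.inf)   # F[e]: optimal penalized cost of y[0:e]
    F[0] = -gamma                # so the first segment carries no penalty
    back = np.zeros(n + 1, dtype=int)
    for e in range(1, n + 1):
        for s in range(0, e - min_len + 1):
            val = F[s] + interval_cost(y, s, e) + gamma
            if val < F[e]:
                F[e], back[e] = val, s
    cps, e = [], n               # backtrack the estimated changepoints
    while e > 0:
        s = back[e]
        if s > 0:
            cps.append(s)
        e = s
    return sorted(cps)

y = np.concatenate([np.zeros(100), 2 * np.ones(100)]) + np.random.default_rng(1).normal(size=200)
print(penalized_segmentation(y, gamma=10.0, min_len=5))
```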
§ PROOF OF THEOREM <REF>
For an interval I, denote the sparsity constant s_I = s ∨1 ≤ j ≤ p: ∃ i ∈ I, _i, j^∘≠ 0≥ s. Observe that s_I ≤∩ I× s. Define Δ_I,q = (I^-1∑_i ∈ I_i^∘ - _I^∘_Σ^q)^1/q, let Δ_I = Δ_I,2 denote the root-mean-square variation of I, and let Δ_I,∞ = max_i ∈ I_i^∘ - _I^∘_Σ denote the maximum variation of I.
As stated in Lemma <ref>, to show that the localization error bound in Theorem <ref> holds, we only need to certify that the event 𝔾 holds with high probability for both the original full model-fitting approach and the approach with suitable constants. These two claims are shown in Corollary <ref> and Corollary <ref>, respectively. Finally, the L_2 error bound of the parameter estimation follows from the oracle inequality of the LASSO.
This section is organized as follows. In Section <ref>, we introduce several useful non-asymptotic probability bounds, including the oracle inequality of the LASSO with heterogeneous data. In Sections <ref> and <ref>, we show that 𝔾 holds with high probability for the two approaches, respectively. All the proofs are relegated to the last part.
§.§ Supporting Lemmas
Let X_1,…, X_n be independent, mean zero, sub-exponential random variables.
For every t > 0, we have
∑_i=1^n X_i > t≤ 2 exp[-c_b (t^2/∑_i=1^n X_i_Ψ_1^2∧t/max_i X_i_Ψ_1)],
where c_b > 0 is an absolute constant.
Choosing t = C_u/c_b [√(∑_i ∈ [n]X_i_Ψ_1^2 log(p ∨ n))∨{max_i X_i_Ψ_1log(p ∨ n)} ] with C_u≥ c_b, we have
∑_i=1^n X_i > t≤ 2 exp{-C_ulog(p ∨ n)}.
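For concreteness, the deviation level t prescribed above is simple arithmetic in the Ψ_1-norms; a minimal sketch (with illustrative values for the absolute constants C_u and c_b, which are unspecified in the text) is:

```python
import numpy as np

def bernstein_threshold(psi1_norms, p, n, C_u=1.0, c_b=1.0):
    """Deviation level t for a sum of centered sub-exponential variables.
    C_u and c_b are illustrative stand-ins for the absolute constants in the text."""
    psi1 = np.asarray(psi1_norms)
    log_pn = np.log(max(p, n))
    return (C_u / c_b) * max(np.sqrt(np.sum(psi1 ** 2) * log_pn),
                             np.max(psi1) * log_pn)

# With this t, P(|sum_i X_i| > t) <= 2 exp(-C_u log(p v n)) by the displayed bound.
print(bernstein_threshold(np.ones(500), p=1000, n=500))
```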
Assume Condition <ref> (a) holds.
For any interval I ⊂ (0,n], denote Σ̂_I = I^-1∑_i ∈ I_i _i^⊤.
Uniformly for all intervals I ⊂ (0, n] such that I≥ s_I log(p ∨ n), with probability at least 1 - exp{-C_u,1log(p ∨ n)},
^⊤Σ̂_I≥_Σ^2 - C_u,2 C_x^2 σ_x^2 √(s_I log(p ∨ n)/I) (_2^2 + 1/s_I_1^2), ∀∈^p,
where C_u,1 and C_u,2 are two universal constants.
Let I≥ C_𝗋𝖾 s_I log(p ∨ n) with a sufficiently large constant C_𝗋𝖾≥ 1 ∨ (34 C_u,2C_x^2 σ_x^2/)^2.
For any support set ∈ [p] with ≤ s_I and ∈^p such that _^∁_1 ≤ 3 __1, under the same event above,
^⊤Σ̂_I≥/2_2^2.
Assume Condition <ref> holds.
With probability at least 1 - exp{-C_u,1log(p ∨ n)}, uniformly for any sub-interval I ⊂ (0, n],
∑_i ∈ I_i [_i^⊤ (_i^∘ - _I^∘) + ϵ_i]_∞
≤ C_u,2 C_x σ_x √((C_x^2Δ_I^2 + C_ϵ^2) ∨(C_x^2Δ_I, ∞^2 + C_ϵ^2) log(p ∨ n)/I)√(Ilog(p ∨ n))
≤ C_u,2 C_x σ_x √((C_x^2Δ_I^2 + C_ϵ^2) ∨(C_x^2 σ_x^2 C_β^2 s_I + C_ϵ^2) log(p ∨ n)/I)√(Ilog(p ∨ n)),
where C_u,2 = c_b^-1 (C_u,1+3), Δ_I^2 = 1/I∑_i ∈ I_i^∘ - _I^∘_Σ^2 is the mean-square variation and Δ_I,∞ = max_i ∈ I_i^∘ - _I^∘_Σ is the maximum jump.
Denote Δ_I,4 = (1/I∑_i ∈ I_i^∘ - _I^∘_Σ^4)^1/4.
Assume Condition <ref> holds.
With probability at least 1 - exp{-C_u,1log(p ∨ n)}, uniformly for any sub-interval I ⊂ (0, n],
∑_i ∈ I{_i^⊤(_i^∘ - _I^∘)}^2 - Δ_I^2 I≤ C_u,2 C_x^2 √(Δ_I,4^4 ∨Δ_I,∞^4log(p ∨ n)/I)√(Ilog(p ∨ n))
≤ C_u,2 C_x^2 √(Δ_I^2 ∨C_β^2 σ_x^2 s_I log(p ∨ n)/I)√(C_β^2 σ_x^2 I s_I log(p ∨ n)).
Assume Condition <ref> holds.
With probability at least 1 - exp{-C_u,1log(p ∨ n)}, uniformly for any sub-interval I ⊂ (0, n],
∑_i ∈ I_i^⊤(_i^∘ - _I^∘) ϵ_i≤ C_u,2 C_x C_ϵ√(Δ_I^2 ∨Δ_I,∞^2 log(p ∨ n)/I)√(Ilog(p ∨ n)),
Assume Condition <ref> (a) and Condition <ref> hold.
For any interval I ⊂ (0, n], let D_I = √((C_x^2Δ_I^2 + C_ϵ^2) ∨(C_x^2Δ_I, ∞^2 + C_ϵ^2) log(p ∨ n)/I).
We have with probability at least 1 - 2 exp{-C_u,1log(p ∨ n)}, uniformly for any interval I ⊂ (0, n] with I≥ C_𝗋𝖾 s_I log(p ∨ n), provided that λ_I = 4 C_u,2 C_x σ_x D_I√(Ilog(p ∨ n)), the solution _I satisfies that
_I - _I^∘_2 ≤ C_<ref>lem:oracle D_I√(s_I log (p ∨ n)/I),
_I - _I^∘_1 ≤ C_<ref>lem:oracle D_I s_I √(log (p ∨ n)/I),
where the model-based constant C_<ref>lem:oracle = 12 C_u,2 C_x σ_x/.
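For illustration only, a per-interval LASSO fit with the tuning-parameter scaling λ_I ∝ D_I √(|I| log(p ∨ n)) prescribed above might be sketched as follows. The proportionality constant c_lambda stands in for 4 C_u,2 C_x σ_x D_I, which is unknown in practice, and scikit-learn's 1/(2|I|)-scaled objective is rescaled accordingly; the data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_interval_lasso(X, y, idx, c_lambda=1.0):
    """LASSO on the interval idx with lambda_I ~ c_lambda * sqrt(|I| log(p v n))."""
    XI, yI = X[idx], y[idx]
    n_I, p = XI.shape
    lam = c_lambda * np.sqrt(n_I * np.log(max(p, len(y))))
    # sklearn minimizes (1/(2 n_I))||y - Xb||^2 + alpha * ||b||_1, whereas the text
    # uses the sum of squares + lam * ||b||_1, so alpha = lam / (2 n_I).
    model = Lasso(alpha=lam / (2 * n_I), fit_intercept=False, max_iter=10000)
    model.fit(XI, yI)
    return model.coef_

rng = np.random.default_rng(0)
n, p, s = 400, 200, 4
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:s] = 1 / 3
y = X @ beta + rng.normal(size=n)
print(fit_interval_lasso(X, y, np.arange(150), c_lambda=0.5)[:6])
```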
§.§ Certifying 𝔾 for the Full Model-fitting
Assume Condition <ref> and Condition <ref> (a) hold.
For any interval I ⊂ (0, n], let D_I = √((C_x^2Δ_I^2 + C_ϵ^2) ∨(C_x^2Δ_I, ∞^2 + C_ϵ^2) log(p ∨ n)/I).
Under the setting in Lemma <ref>,
with probability at least 1 - 4 exp{-C_u,1log(p ∨ n)}, for any interval I = (τ_l, τ_r] such that I≥ C_𝗋𝖾 s_I log(p ∨ n),
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤48 C_u,2^2 C_x^2 σ_x^2 D_I^2 s_I log(p ∨ n)/
+ C_u,2 C_x (C_x Δ_I, ∞ + 2 C_ϵ) √([(Δ_I^2I) ∨{Δ_I, ∞^2log(p ∨ n)}] log(p ∨ n)).
Additionally if Condition <ref> (b) holds,
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤48 C_u,2^2 C_x^2 σ_x^2 D_I^2 s_I log(p ∨ n)/
+ C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s_I)) √([(Δ_I^2I) ∨{σ_x^2 C_β^2 s_Ilog(p ∨ n)}] s_I log(p ∨ n)).
Assume Condition <ref>, Condition <ref>, and Condition <ref> hold.
Under the same probability event in Lemma <ref> and with sufficiently large C_m, we have the following conclusions.
* For I such that Δ_I = 0 and I≥ C_m s log(p ∨ n),
Ł_I - ∑_i ∈ Iϵ_i^2≤48 C_u,2^2 C_x^2 σ_x^2 C_ϵ^2 s log(p ∨ n)/≜ C_<ref>cor:in_err.1 s log(p ∨ n).
* For I such that Δ_I^2 I≤C s log(p ∨ n) for some sufficiently large C≥ 2 C_β^2 σ_x^2, I ∩^∗≤ 1 and I≥ C_m s log(p ∨ n),
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤ C_<ref>cor:in_err.2 s log(p ∨ n),
where C_<ref>cor:in_err.2 = 2 C_<ref>cor:in_err.1 + 96 C_u,2^2 C_x^4 σ_x^2 C/C_m + C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s)) √(2C).
* For I such that I≥ C_m s log(p ∨ n) and Δ_I^2 I≥C s log(p ∨ n) for some sufficiently large C≥ 3 C_β^2 σ_x^2,
Ł_I - ∑_i ∈ Iϵ_i^2 ≥ (1 - C_<ref>cor:in_err.3)Δ_I^2 I,
where
C_<ref>cor:in_err.3 = 96 C_u,2^2 C_x^4 σ_x^2/ C_m + C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s)) √(3/C) + 3 C_<ref>cor:in_err.1/C.
§.§ Certifying 𝔾 for
Notations:
Let R be the surrogate interval w.r.t. I and J = I ∖ R be the complement. Denote Δ_I^2 = 1/I∑_i ∈ I_i^∘ - _R^∘_Σ^2, Δ_J^2 = 1/J∑_i ∈ J_i^∘ - _R^∘_Σ^2 and Δ_J^2 = 1/J∑_i ∈ J_i^∘ - _R_Σ^2. Let Δ_J,q^q = 1/J∑_i ∈ J_i^∘ - _R^∘_Σ^q. The following identity holds for these variations, Δ_I^2 I = Δ_R^2 R + Δ_J^2 J. Denote the cost function of interval I by Ł_I = ∑_i ∈ I (y_i - _i^⊤_R)^2.
Assume Condition <ref> and Condition <ref> (a) hold.
With probability at least 1 - 4exp{-C_u,1log(p ∨ n)}, for any interval I = (τ_l, τ_r] such that R≥ C_𝗋𝖾 s_R log(p ∨ n),
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_J^2 J - Δ_R^2 R≤48 C_u,2^2 C_x^2 σ_x^2 D_R^2 s_R log(p ∨ n)/
+ C_u,2 C_x (C_x Δ_R, ∞ + 2 C_ϵ) √([(Δ_R^2R) ∨{Δ_R, ∞^2log(p ∨ n)}] log(p ∨ n))
+ C_u,2 C_x (C_x Δ_J, ∞ + 2 C_ϵ) √([(Δ_J^2 J) ∨{Δ_J,∞^2 log(p ∨ n)}]log(p ∨ n))
In the final bound of Lemma <ref>, there exists a random variation term Δ_J^2. To obtain a deterministic result, we show in the following lemma that Δ_J^2 - Δ_J^2J is relatively small.
Assume Condition <ref> and Condition <ref> hold.
Assume that the joint probability event of Lemmas <ref>–<ref> holds, which implies a probability lower bound 1 - 3 exp{-C_u,1log(p ∨ n)}.
For any I = (τ_l, τ_r] ∈ (0, n] such that I≥ C_m s log(p ∨ n) and R≥ r I, the set J = I ∖ R satisfies that,
* For I such that Δ_I = 0 and I≥ C_m s log(p ∨ n),
Δ̂_J^2 - Δ_J^2J≤ C_<ref>lem:oracle^2 σ_x^2 J/R C_ϵ^2 s log(p ∨ n) ≤C_<ref>lem:oracle^2 C_ϵ^2 σ_x^2 (1-r)/r s log(p ∨ n).
Also since R≥ r C_m s log(p ∨ n), we have the upper bound for the average term,
Δ_J, ∞^2 = Δ_J^2 = Δ_J^2 - Δ_J^2≤C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/R s log(p ∨ n) ≤C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/C_m r.
* For I such that Δ_I^2 I≤C s log(p ∨ n), I ∩^∗≤ 1 and I≥ C_m s log(p ∨ n), we have
Δ_J^2 - Δ_J^2J≤ C_<ref>lem:rd_var.1 s log(p ∨ n),
where
C_<ref>lem:rd_var.1 = 4 C_<ref>lem:oracleσ_x √( 1 - r / r )√(2 C_x^2 C^2/C_m r + C_ϵC) + 2 C_<ref>lem:oracle^2 σ_x^2 1 - r / r ( 2 C_x^2 C/C_m r + C_ϵ^2 ).
* For I such that I≥ C_m s log(p ∨ n) and Δ_I^2 I≥C s log(p ∨ n),
Δ_J^2 - Δ_J^2J≤ C_<ref>lem:rd_var.2Δ_I^2 I,
where
C_<ref>lem:rd_var.2 = 2 C_<ref>lem:oracleσ_x √(1 - r/r)√(2 C_x^2/C_m r + 3 C_ϵ^2/C) + C_<ref>lem:oracle^2 σ_x^2 1 - r/r(2 C_x^2/C_m r + 3 C_ϵ^2/C).
Both of the constants C_<ref>lem:rd_var.1 and C_<ref>lem:rd_var.2 are o(1) provided that J = o(R).
Assume Condition <ref> and Condition <ref> hold.
Here we only consider those intervals I such that I≥ C_m s log(p ∨ n) for some sufficiently large constant C_m.
With probability at least 1 - 4exp{-C_u,1log(p ∨ n)}, we have the following conclusions uniformly.
* If Δ_I = 0,
Ł_I - ∑_i ∈ Iϵ_i^2≤ C_<ref>cor:mix_err.1 s log(p ∨ n),
where C_<ref>cor:mix_err.1 = C_<ref>cor:in_err.1 + (1-r) C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/r + C_u,2 C_x (C_x C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/√(C_m r) + 2 C_ϵ^2 C_<ref>lem:oracleσ_x) √( (1-r)/r s∨ 1 /C_m r s^2).
* For I such that Δ_I^2 I≤C s log(p ∨ n) for some sufficiently large C≥ 3 C_β^2 σ_x^2, I ∩^∗≤ 1 and I≥ C_m s log(p ∨ n), we have
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤ C_<ref>cor:mix_err.2 s log(p ∨ n),
where C_<ref>cor:mix_err.2 = 2C_<ref>cor:in_err.1 + C_<ref>lem:rd_var.1 + 96 C_u,2^2 C_x^4 σ_x^2 C/C_m r + C_u,2 C_x (√(3) C_x σ_x C_β + 2 C_ϵ/√(s)) [C^1/2 + (2 C + C_<ref>lem:rd_var.1)^1/2].
* If Δ_I^2 I≥C s log(p ∨ n) for some sufficiently large C≥ 3 C_β^2 σ_x^2,
Ł_I - ∑_i ∈ Iϵ_i^2 ≥ (1 - C_<ref>cor:mix_err.3) Δ_I^2 I,
where C_<ref>cor:mix_err.3 = C_<ref>lem:rd_var.2 + 3 C_<ref>cor:in_err.1/C + 96 C_u,2^2 C_x^4 σ_x^2/C_m r + C_u,2 C_x √(3)/√(C) (C_x σ_x C_β + 2 C_ϵ/√(s)) (1 + √(1 + C_<ref>lem:rd_var.2)).
§.§ Proofs
To ease the notation, we will replace s_I with s without loss of generality in the proof. Denote (s) = ∈^p: _2 = 1, ()≤ s. We will show that with high probability, sup_∈(2s) |^⊤ (Σ̂_I - Σ) | = O(C_x^2 √(s log(p ∨ n)/I)), then the result follows from Lemma 12 in loh_high-dimensional_2012. Let = Σ̂_I - Σ.
For any ⊂ [p] and = 2s, let _∈^2s × 2s be the sub-matrix of with being the set of row and column indices. Let _ = ∈^p: _2 = 1, () =. There is a 1/4-net _ of _ with cardinality _≤ 9^2s. For any ∈_ - _, there is ∈_ such that - _2 ≤1/4 and - / - _2∈_. Therefore,
^⊤ - ^⊤ = ^⊤ ( - ) + ^⊤ ( - )≤ 2 __𝗈𝗉 - _2 ≤1/2__𝗈𝗉.
By the definition of _, we have __𝗈𝗉 = sup_∈_^⊤. Hence
sup_∈_^⊤≤ 2 sup_∈_^⊤.
Let = ∪_ = 2s_. We have ≤p2s 9^2s≤ (9p)^2s and is the 1/4-net of (2s) because (2s) = ∪_=2s_. Also,
sup_∈(2s)^⊤≤ 2 sup_∈^⊤.
For a fixed ∈(2s), by the Bernstein's inequality (Lemma <ref>),
[ ^⊤ > t/I] ≤ 2 exp[-c_b (t^2/C_x^4 I∧t/C_x^2)].
Set t = c_b^-1 C_u C_x^2 √(I s log(p ∨ n)) with C_u ≥ c_b be a sufficiently large constant. With probability at least 1 - exp{- C_u s log(p ∨ n)},
^⊤≤ c_b^-1 C_u C_x^2 √(slog(p ∨ n)/I).
By taking the union bound over ∈ and I:I≥ s log(p ∨ n), with probability at least 1 - n^2 (9p)^2sexp{-C_u s log(p ∨ n)}≥ 1 - exp{- C_u,1log(p ∨ n)} for some C_u,1 > 0,
sup_∈(2s)^⊤ (Σ̂_I - Σ) ≤ 2sup_∈^⊤ (Σ̂_I - Σ) ≤ 2 c_b^-1 C_u C_x^2 √(slog(p ∨ n)/I).
By Lemma 12 in loh_high-dimensional_2012, under the above event,
^⊤ (Σ̂_I - Σ) ≤ 54 c_b^-1 C_u C_x^2 √(slog(p ∨ n)/I) (_2^2 + 1/s_1^2),
for all ∈^p and all intervals in I: I≥ s log(p ∨ n). Let C_u,1 = (C_u - 4) s - 2 and C_u,2=54 c_b^-1 C_u. With probability at least 1 - exp{-C_u,1log(p ∨ n)},
^⊤Σ̂_I ≥_2^2 - C_u,2 C_x^2 √(s log(p ∨ n)/I) (_2^2 + 1/s_1^2).
If there exists a support set ∈ [p] with ≤ s and _^∁_1 ≤ 3 __1, we have _1 ≤ 4 __1 ≤ 4 √(s)__2 ≤ 4 √(s)_2. The second result in the lemma follows from Eq. (<ref>) and the inequality 1/s_1^2 ≤ 16 _2^2, which together give
^⊤Σ̂_I ≥_2^2 - 17 C_u,2 C_x^2 √(s log(p ∨ n)/I)_2^2 ≥/2_2^2,
where the last inequality is due to the condition that I≥ C_𝗋𝖾 s log(p ∨ n) with C_𝗋𝖾≥ 1 ∨ (34 C_u,2C_x^2/)^2.
By the definition of _i^∘ and _I^∘, {∑_i ∈ I_i _i^⊤ (_i^∘ - _I^∘)} = 0. By Condition <ref>, _i^⊤(_i^∘ - _I^∘) is sub-Gaussian with mean zero and Ψ_2-norm C_x _i^∘ - _I^∘_Σ and ϵ_i is sub-Gaussian with mean zero and Ψ_2-norm ϵ_Ψ_2 = C_ϵ. Hence _i^⊤(_i^∘ - _I^∘) + ϵ_i is sub-Gaussian with mean zero and
_i^⊤(_i^∘ - _I^∘) + ϵ_i_Ψ_2≤√(C_x^2 _i^∘ - _I^∘_Σ^2 + C_ϵ^2).
Then _i [_i^⊤ (_i^∘ - _I^∘) + ϵ_i] is sub-exponential with Ψ_1-norm
_i [_i^⊤ (_i^∘ - _I^∘) + ϵ_i]_Ψ_1≤ C_x σ_x √(C_x^2 _i^∘ - _I^∘_Σ^2 + C_ϵ^2).
By the Bernstein's inequality (Lemma <ref>), for any given ∈^p-1,
∑_i ∈ I^⊤_i [_i^⊤ (_i^∘ - _I^∘) + ϵ_i] > t
≤ 2 exp(-c_b t^2/C_x^2 σ_x^2 (C_x^2Δ_I^2 + C_ϵ^2)I∧c_b t/C_x σ_x √(C_x^2Δ_I,∞^2 + C_ϵ^2)).
By the union-bound inequality,
sup_I = (s,e] ⊂ [n]∑_i ∈ I_i [_i^⊤ (_i^∘ - _I^∘) + ϵ_i]_∞ > t
≤ n^2 p exp(-c_b t^2/C_x^2 σ_x^2 (C_x^2Δ_I^2 + C_ϵ^2)I∧c_b t/C_x σ_x √(C_x^2Δ_I,∞^2 + C_ϵ^2)).
Set t = c_b^-1 (C_u,1+3) C_x σ_x [√((C_x^2Δ_I^2 + C_ϵ^2) Ilog(p ∨ n))∨√((C_x^2Δ_I, ∞^2 + C_ϵ^2) log^2(p ∨ n))] with C_u,1≥ c_b. With probability at least 1 - n^2 p exp{-(C_u,1 + 3) log(p ∨ n)}≥ 1 - exp{-C_u,1log(p ∨ n)},
∑_i ∈ I_i [_i^⊤ (_i^∘ - _I^∘) + ϵ_i]_∞
≤ C_u,2 C_x σ_x [√((C_x^2Δ_I^2 + C_ϵ^2) Ilog(p ∨ n))∨√((C_x^2Δ_I, ∞^2 + C_ϵ^2) log^2(p ∨ n))],
where C_u,2 = c_b^-1 (C_u,1+3).
This follows from Bernstein's inequality by arguments similar to those in the proof of Lemma <ref>.
(Oracle inequality for the mixture of distributions.)
In the following proof, we assume that the inequalities in Lemmas <ref> and <ref> hold, which implies a probability lower bound of 1 - 2 exp{-C_u,1log(p ∨ n)}.
By the definition of _I,
∑_i ∈ I (y_i - _i^⊤)^2 + λ_I _1 = ∑_i ∈ I{y_i - _i^⊤_i^∘ + ^⊤(_i^∘ - _I^∘) + _i^⊤ (_I^∘ - _I)}^2 + λ_I _I_1
= ∑_i ∈ Iϵ_i^2 + {_i^⊤ (_i^∘ - _I^∘)}^2 + {_i^⊤ (_I^∘ - _I)}^2 + λ_I _I_1
+ 2 ∑_i ∈ I{ϵ_i _i^⊤(_i^∘ - _I^∘) + ϵ_i _i^⊤(_I^∘ - _I)} + 2(_I^∘ - _I)^⊤∑_i ∈ I_i _i^⊤ (_i^∘ - _I^∘)
≤ ∑_i ∈ I (y_i - _i^⊤_I^∘)^2 + λ_I _I^∘_1 = ∑_i ∈ I{y_i - _i^⊤_i^∘ + ^⊤(_i^∘ - _I^∘)}^2 + λ_I _I^∘_1
= ∑_i ∈ Iϵ_i^2 + {_i^⊤(_i^∘ - _I^∘)}^2 + 2 ∑_i ∈ Iϵ_i _i^⊤(_i^∘ - _I^∘) + λ_I _I^∘_1.
Hence,
∑_i ∈ I{_i^⊤(_I - _I^∘)}^2 + λ_I _I_1
≤ 2 (_I - _I^∘)^⊤∑_i ∈ I{ϵ_i _i + _i _i^⊤ (_i^∘ - _I^∘)} + λ_I _I^∘_1 ≤λ_I,1_I - _I^∘_1 + λ_I _I^∘_1,
where λ_I,1 = 2∑_i ∈ I{ϵ_i _i + _i _i^⊤ (_i^∘ - _I^∘)}_∞.
By Lemma <ref>,
λ_I,1≤ 2 C_u,2 C_x σ_x D_I √(Ilog(p ∨ n)).
where D_I = √((C_x^2Δ_I^2 + C_ϵ^2) ∨(C_x^2Δ_I, ∞^2 + C_ϵ^2) log(p ∨ n)/I) for easing the notation. Since ∑_i ∈ I{_i^⊤(_I - _I^∘)}^2 ≥ 0, (λ_I - λ_I,1) _I,^∁ - _I,^∁^∘_1 ≤ (λ_I + λ_I,1) _I, - _I,^∘_1. Choosing λ_I = 2 λ_I,1, we have _I,^∁ - _I,^∁^∘_1 ≤ 3 _I, - _I,^∘_1.
Apply Lemma <ref>, the uniform restricted eigenvalue condition holds for any interval I with I≥ C_𝗋𝖾 s_I log(p ∨ n). Hence
1/2I_I - _I^∘_2^2 ≤∑_i ∈ I{_i^⊤(_I - _I^∘)}^2 ≤λ_I,1_I - _I^∘_1 + λ_I _I^∘_1 - λ_I _I_1 ≤ (λ_I + λ_I,1) _I, - _I,^∘_1 - λ_I,1_I,^∁_1 ≤ (λ_I + λ_I,1)√(s_I)_I - _I^∘_2. By basic algebra,
_I - _I^∘_2 ≤3λ_I,1√(s_I)/2^-1I≤12 C_u,2 C_x σ_x D_I/√(s_Ilog(p ∨ n)/I),
and
_I - _I^∘_1 ≤3λ_I,1 s_I/2^-1I≤12 C_u,2 C_x σ_x D_I s_I/√(log(p ∨ n)/I).
Assume that the joint probability event of Lemmas <ref>–<ref> holds, which implies a probability lower bound 1 - 4 exp{-C_u,1log(p ∨ n)}.
For any interval I = (c,d], we will analyze the cost Ł_I. By the definition of the cost Ł_I,
Ł_I = ∑_i ∈ I (y_i - _i^⊤_I)^2 = ∑_i ∈ I{y_i - _i^⊤_I^∘ + _i^⊤(_I^∘ - _I)}^2
= ∑_i ∈ I{(y_i - _i^⊤_I^∘)^2 + {_i^⊤(_I^∘ - _I)}^2} + 2 ∑_i ∈ I{_i ϵ_i + _i _i^⊤ (_i^∘ - _I^∘)}^⊤ (_I^∘ - _I)
≥ ∑_i ∈ I (y_i - _i^⊤_I^∘)^2 - λ_I,1_I^∘ - _I_1 ≥∑_i ∈ I (y_i - _i^⊤_I^∘)^2 - λ_I _I^∘ - _I_1,
where the second last inequality follows from Lemma <ref> and the last one is from λ_I = 2 λ_I,1 > 0. By the definition of _I,
Ł_I - ∑_i ∈ I(y_i - _i^⊤_I^∘)^2 ≤λ_I (_I^∘_1 - _I_1) ≤λ_I _I^∘ - _I_1.
By combining the result in Lemma <ref>,
Ł_I - ∑_i ∈ I (y_i - _i^⊤_I^∘)^2≤λ_I _I^∘ - _I_1 ≤12 λ_I,1^2 s/I≤48 C_u,2^2 C_x^2 σ_x^2 D_I^2 s log(p ∨ n)/.
By Lemma <ref> and Lemma <ref>,
∑_i ∈ I [(y_i - _i^⊤_I^∘)^2 - ϵ_i^2] - Δ_I^2 I
= ∑_i ∈ I{_i^⊤(_i^∘ - _I^∘)}^2 - Δ_I^2 I + ∑_i ∈ I 2 ϵ_i _i^⊤ (_i^∘ - _I^∘)
≤ C_u C_x^2 √(Δ_I,4^4 ∨Δ_I,∞^4 log(p ∨ n)/I)√(Ilog(p ∨ n))
+ 2 C_u C_x C_ϵ√(Δ_I^2 ∨Δ_I,∞^2 log(p ∨ n)/I)√(Ilog(p ∨ n)).
From Eq. (<ref>) and Eq. (<ref>),
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤48 C_u,2^2 C_x^2 σ_x^2 D_I^2 s log(p ∨ n)/
+ C_u C_x^2 √(Δ_I,4^4 ∨Δ_I,∞^4 log(p ∨ n)/I)√(Ilog(p ∨ n))
+ 2 C_u C_x C_ϵ√(Δ_I^2 ∨Δ_I,∞^2 log(p ∨ n)/I)√(Ilog(p ∨ n)).
By Condition <ref> (b), Δ_I, ∞≤ C_β√(s). Hence one obtains
√(Δ_I^2 ∨Δ_I,∞^2 log(p ∨ n)/I)√(Ilog(p ∨ n))≤1/√(s)[{C_β s log(p ∨ n)}∨√(Δ_I^2 I s log(p ∨ n))].
Note that Δ_I,4^2 ≤Δ_I,∞Δ_I,2, it also holds that
√(Δ_I,4^4 ∨Δ_I,∞^4 log(p ∨ n)/I)√(Ilog(p ∨ n))≤ C_β[{C_β s log(p ∨ n)}∨√(Δ_I^2 I s log(p ∨ n))].
Finally, we obtain,
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤48 C_u,2^2 C_x^2 σ_x^2 D_I^2 s log(p ∨ n)/
+ C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s)) √([(Δ_I^2I) ∨{σ_x^2 C_β^2 slog(p ∨ n)}] slog(p ∨ n)).
All of the results in these three parts follow from the proof of Lemma <ref> and the conditions about the variations Δ_I^2 I.
* If Δ_I = 0, we have D_I^2 = C_ϵ^2 and Δ_I, ∞ = 0.
Lemma <ref> reduces to
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤48 C_u,2^2 C_x^2 σ_x^2 C_ϵ^2/ s log(p ∨ n) = C_<ref>cor:in_err.1 s log(p ∨ n).
* Since I ∩^∗≤ 1, we have s_I ≤ 2 s and Lemma <ref> still holds for I≥ C_m s log(p ∨ n)≥C_m/2 s_I log(p ∨ n) ≥ C_𝗋𝖾 s_I log(p ∨ n) with sufficiently large C_m≥ 2 C_𝗋𝖾. Recall that Δ_I^2 I≤C s log(p ∨ n) and I≥ C_m s log(p ∨ n).
We have (C_x^2 Δ_∞^2 + C_ϵ^2) log(p ∨ n)/I≤2 C_x^2 σ_x^2 C_β^2 s + C_ϵ^2/C_ms≤ C_ϵ^2 provided that C_m is sufficiently large.
Hence D_I^2 = C_x^2 Δ_I^2 + C_ϵ^2.
Combining Δ_I^2 I≤C s log(p ∨ n) and I≥ C_m s log(p ∨ n), Δ_I^2 ≤C/C_m.
Therefore by Lemma <ref>,
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I≤ C_<ref>cor:in_err.2 s log(p ∨ n),
where C_<ref>cor:in_err.2 = 96 C_u,2^2 C_x^2 σ_x^2 C_ϵ^2/ + 96 C_u,2^2 C_x^4 σ_x^2 C/C_m + C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s)) [(2C_βσ_x) ∨√(2C)].
* When I ∩^∗≤ 1, the discussion in (ii) follows and I≥ C_m s log(p ∨ n) ≥C_m/2 s_I log(p ∨ n) and Δ_I^2 I≥C/2 s_I log(p ∨ n). Otherwise when I ∩^∗≥ 2, by condition <ref>, we can obtain I≥C_𝗌𝗇𝗋/3 s_I log(p ∨ n) ≥C_m/2 s_I log(p ∨ n) and Δ_I^2 I≥C/3 s_I log(p ∨ n) since C_𝗌𝗇𝗋 is sufficiently large. Recall that C≥ 3 C_β^2 σ_x^2.
We have Δ_I^2 ≥C s_I log(p ∨ n)/3I≥C_β^2 σ_x^2 s_I log(p ∨ n)/I≥Δ_I,∞^2 log(p ∨ n)/I.
Again, it implies that D_I^2 = C_x^2 Δ_I^2 + C_ϵ^2.
By Lemma <ref>,
Ł_I - ∑_i ∈ Iϵ_i^2 ≥Δ_I^2 I - 48 C_u,2^2 C_x^2 σ_x^2 (C_x^2 Δ_I^2 + C_ϵ^2) s_I log(p ∨ n)/
- C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s_I)) √(Δ_I^2 I s_I log(p ∨ n))≥ (1 - C_<ref>cor:in_err.3)Δ_I^2 I,
where
C_<ref>cor:in_err.3 = 96 C_u,2^2 C_x^4 σ_x^2/ C_m + C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s)) √(3/C) + 3 C_<ref>cor:in_err.1/C.
Assume that the joint probability event of Lemmas <ref>–<ref> holds, which implies a probability lower bound 1 - 4 exp{-C_u,1log(p ∨ n)}.
Let J = I ∖ R, Δ_J,q^q = 1/J∑_i ∈ J_i^∘ - _R_Σ^q and Δ_J,∞ = max_i ∈ J_i^∘ - _R_Σ.
For simplicity, denote Δ_J = Δ_J,2.
We first perform a common decomposition of the out-of-sample error,
∑_i ∈ J (y_i - _i^⊤_R)^2 = ∑_i ∈ J{ϵ_i^2 + {_i^⊤ (_i^∘ - _R)}^2 + 2 ϵ_i _i^⊤ (_i^∘ - _R)}.
We adopt the convention ∞· 0 = 0 for the case that J is empty.
By the Bernstein's inequality, uniformly for all intervals I and their surrogates R, with probability at least 1 - exp{-C_u,1log(p ∨ n)},
∑_i ∈ Jϵ_i _i^⊤(_i^∘ - _R)≤ C_u,2 C_x C_ϵ√(Δ_J^2 ∨Δ_J,∞^2 log(p ∨ n)/J)√(Jlog(p ∨ n)),
∑_i ∈ J{_i^⊤ (_i^∘ - _R)}^2 - Δ_J^2 J≤ C_u,2 C_x^2 √(Δ_J,4^4 ∨Δ_J,∞^4 log(p ∨ n)/J)√(Jlog(p ∨ n))
≤ C_u,2 C_x^2 Δ_J, ∞√(Δ_J^2 ∨Δ_J,∞^2 log(p ∨ n)/J)√(Jlog(p ∨ n)).
Note that Δ_R,4^4 ≤Δ_R,∞^2 Δ_R^2 ≤ C_β^2 σ_x^2 s_R Δ_R^2. By Lemma <ref>, we have the following in-sample control,
Ł_R - ∑_i ∈ Rϵ_i^2 - Δ_R^2 R≤48 C_u,2^2 C_x^2 σ_x^2 D_R^2 s_R log(p ∨ n)/
+ C_u,2 C_x (C_x Δ_R, ∞ + 2 C_ϵ) √([(Δ_R^2R) ∨{Δ_R, ∞^2log(p ∨ n)}] log(p ∨ n))
Combining the out-of-sample error and in-sample error,
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_J^2 J - Δ_R^2 R≤48 C_u,2^2 C_x^2 σ_x^2 D_R^2 s_R log(p ∨ n)/
+ C_u,2 C_x (C_x Δ_R, ∞ + 2 C_ϵ) √([(Δ_R^2R) ∨{Δ_R, ∞^2log(p ∨ n)}] log(p ∨ n))
+ C_u,2 C_x (C_x Δ_J, ∞ + 2 C_ϵ) √([(Δ_J^2 J) ∨{Δ_J, ∞^2log(p ∨ n)}] log(p ∨ n))
Assume that the joint probability event of Lemmas <ref>–<ref> holds, which implies a probability lower bound 1 - 3 exp{-C_u,1log(p ∨ n)}.
In this part, we measure the difference between Δ_J^2 J and Δ_J^2 J.
It begins with the following relation between them,
Δ_J^2 = Δ_J^2 + 2 (_J^∘ - _R^∘)^⊤Σ (_R^∘ - _R) + _R^∘ - _R_Σ^2.
By Eq. (<ref>), the measurement is done if the absolute values of (_J^∘ - _R^∘)^⊤Σ (_R^∘ - _R) and _R^∘ - _R_Σ^2 can be successfully upper-bounded.
For the first term, by Lemma <ref>,
J (_J^∘ - _R^∘)^⊤Σ (_R^∘ - _R) = ∑_i ∈ J (_i^∘ - _R^∘)^⊤Σ (_R^∘ - _R)≤∑_i ∈ J_i^∘ - _R^∘_Σ_R^∘ - _R_Σ
≤ Δ_JJ C_<ref>lem:oracleσ_x D_R√(s_R log(p ∨ n)/R) = C_<ref>lem:oracleσ_x D_R√(Δ_J^2 J)√(J s_R log(p ∨ n)/R).
Following the discussion in Corollary <ref>, across all the three cases in the lemma, we have D_R = √(C_x^2Δ_R^2 + C_ϵ^2) and R≥ C_𝗋𝖾 s_R log(p ∨ n) provided that R≥ C_m s log(p ∨ n) and C_m is sufficiently large.
Hence by Eq. (<ref>),
J (_J^∘ - _R^∘)^⊤Σ (_R^∘ - _R)≤ C_<ref>lem:oracleσ_x √(J/R)√(Δ_J^2 J)√(C_x^2 Δ_R^2 + C_ϵ^2)√(s_R log(p ∨ n)).
By Lemma <ref>,
J_R^∘ - _R_Σ^2 ≤ C_<ref>lem:oracle^2 σ_x^2 J D_R^2 s_R log(p ∨ n)/R≤ C_<ref>lem:oracle^2 σ_x^2 J/R( C_ϵ^2 + C_x^2 Δ_R^2 ) s_R log(p ∨ n).
* For I such that Δ_I = 0 and I≥ C_m s log(p ∨ n), we have Δ_J = Δ_R = 0 and s_R = s. Hence
Δ̂_J^2 - Δ_J^2J≤ C_<ref>lem:oracle^2 σ_x^2 J/R C_ϵ^2 s log(p ∨ n) ≤C_<ref>lem:oracle^2 C_ϵ^2 σ_x^2 (1-r)/r s log(p ∨ n).
* For I such that Δ_I^2 I≤C s log(p ∨ n), I ∩^∗≤ 1 and I≥ C_m s log(p ∨ n), we have s_R ≤ 2 s and Δ_I^2 I≤ 2 Δ_I^2 I≤ 2 C s log(p ∨ n) provided that r ≥1/2. And for r ∈ (0, 1], one can obtain Δ_I^2 I≤ C Δ_I^2 I for some constant C that only depends on r. Here we only consider the case that r ≥1/2 for simplicity. Since Δ_R^2 R≤Δ_I^2 I≤C s log(p ∨ n) and R≥ r C_m s log(p ∨ n), one obtains Δ_R^2 ≤C/C_m r≤2 C/C_m. By Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), we have
Δ_J^2 - Δ_J^2J≤ C_<ref>lem:rd_var.1 s log(p ∨ n),
where
C_<ref>lem:rd_var.1 = 4 C_<ref>lem:oracleσ_x √( 1 - r / r )√(2 C_x^2 C^2/C_m r + C_ϵC) + 2 C_<ref>lem:oracle^2 σ_x^2 1 - r / r ( 2 C_x^2 C/C_m r + C_ϵ^2 ).
* Assume that I≥ C_m s log(p ∨ n) and Δ_I^2 I≥C s log(p ∨ n). By Conditions <ref> and <ref>, I≥C_m/2 s_Ilog(p ∨ n) and R≥C_m r/2 s_Rlog(p ∨ n) provided that C_𝗌𝗇𝗋 is sufficiently large. Similarly Δ_I^2 I≥C s_I log(p ∨ n)/3≥C s_R log(p ∨ n)/3.
Therefore we have s_R log(p ∨ n) ≤2R/C_m r and s_R log(p ∨ n) ≤3 Δ_I^2 I/C. Therefore, by Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>),
Δ_J^2 - Δ_J^2J≤ C_<ref>lem:rd_var.2Δ_I^2 I,
where
C_<ref>lem:rd_var.2 = 2 C_<ref>lem:oracleσ_x √(1 - r/r)√(2 C_x^2/C_m r + 3 C_ϵ^2/C) + C_<ref>lem:oracle^2 σ_x^2 1 - r/r(2 C_x^2/C_m r + 3 C_ϵ^2/C).
Assume that the joint probability event of Lemmas <ref>–<ref> holds, which implies a probability lower bound 1 - 4 exp{-C_u,1log(p ∨ n)}.
In Lemma <ref>, we have analyzed the approximation error between the random variation Δ^2 J and the ground truth Δ_J^2 J. The three parts of Corollary <ref> follow by aggregating the results in Lemma <ref> and Lemma <ref>.
* The condition Δ_I = 0 implies that Δ_R = Δ_J = 0 and Δ_J,∞ = Δ_J. By Lemma <ref> (a) and Lemma <ref>,
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_J^2 J≤ C_<ref>cor:in_err.1 s log(p ∨ n)
+ C_u,2 C_x (C_x Δ_J, ∞ + 2 C_ϵ) √([(Δ_J^2 J) ∨{Δ_J,∞^2 log(p ∨ n)}]log(p ∨ n))
≤ C_<ref>cor:in_err.1 s log(p ∨ n)
+ C_u,2 C_x (C_x √(C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/C_m r) + 2 C_ϵ) C_<ref>lem:oracle C_ϵσ_x √([ (1-r) s log(p ∨ n)/r∨log(p ∨ n)/C_m r] log(p ∨ n))
≤ C_<ref>cor:in_err.1 s log(p ∨ n) + C_u,2 C_x (C_x C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/√(C_m r) + 2 C_ϵ^2 C_<ref>lem:oracleσ_x) √( (1-r)/r s∨ 1 /C_m r s^2) s log(p ∨ n)
Hence,
Ł_I - ∑_i ∈ Iϵ_i^2≤ C_<ref>cor:mix_err.1 s log(p ∨ n),
where C_<ref>cor:mix_err.1 = C_<ref>cor:in_err.1 + (1-r) C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/r + C_u,2 C_x (C_x C_<ref>lem:oracle^2 σ_x^2 C_ϵ^2/√(C_m r) + 2 C_ϵ^2 C_<ref>lem:oracleσ_x) √( (1-r)/r s∨ 1 /C_m r s^2).
* For I such that Δ_I^2 I≤C s log(p ∨ n) for some sufficiently large C≥ 3 C_β^2 σ_x^2, I ∩^∗≤ 1 and I≥ C_m s log(p ∨ n), we have Δ_R^2 R≤Δ_I^2 I≤C s log(p ∨ n), Δ_R^2 ≤C/C_m r and Δ_I^2 I≤ 2 Δ_I^2 I≤ 2 C s log(p ∨ n) as discussed in the proof of Lemma <ref> (ii).
By the definition of Δ_R, ∞, we have Δ_R, ∞^2 ≤ 2 C_β^2 σ_x^2 s. By Lemma <ref>, _R - _R^∘_Σ^2 ≤C_<ref>lem:oracle^2 σ_x^2 D_R^2/C_m r≤C_<ref>lem:oracle^2 σ_x^2/C_m r (C_ϵ^2 + C_x^2 C/C_m r), which can be sufficiently small provided that C_𝗋𝖾 and C_𝗌𝗇𝗋 are sufficiently large. Hence w.l.o.g. we can assume that Δ̂_J,∞^2 ≤ 3 C_β^2 σ_x^2 s.
By Lemma <ref> (b),
Δ_J^2 J≤Δ_J^2 J + Δ_R^2 R≤Δ_I^2 I + C_<ref>lem:rd_var.1s log(p ∨ n) ≤ (2 C + C_<ref>lem:rd_var.1) s log(p ∨ n)
Combining the result in Lemma <ref>,
Ł_I - ∑_i ∈ Iϵ_i^2 - Δ_I^2 I
≤ Δ_J^2 - Δ^2_JJ + 2 C_<ref>cor:in_err.1 s log(p ∨ n) + 96 C_u,2^2 C_x^4 σ_x^2 C s log(p ∨ n)/C_m r
+ C_u,2 C_x (√(2) C_x σ_x C_β + 2 C_ϵ/√(s)) √([(Δ_R^2R) ∨{2 C_β^2 σ_x^2 s log(p ∨ n)}] s log(p ∨ n))
+ C_u,2 C_x (√(3) C_x σ_x C_β + 2 C_ϵ/√(s)) √([(Δ_J^2J) ∨{3C_β^2 σ_x^2 s log(p ∨ n)}] s log(p ∨ n))
≤ C_<ref>cor:mix_err.2 s log(p ∨ n),
where C_<ref>cor:mix_err.2 = C_<ref>lem:rd_var.1 + 2 C_<ref>cor:in_err.1 + 96 C_u,2^2 C_x^4 σ_x^2 C/C_m r + C_u,2 C_x (√(3) C_x σ_x C_β + 2 C_ϵ/√(s)) {C^1/2 + (2 C + C_<ref>lem:rd_var.1)^1/2}.
* When Δ_I^2 I≥C s log(p ∨ n) and I≥ C_m s log(p ∨ n), by Conditions <ref>–<ref>, we have Δ_J^2 J + Δ_RR = Δ_I^2 I≥Δ_I^2 I≥C/3 s_I log(p ∨ n) and R≥C_m r/2 s_R log(p ∨ n), c.f. the proof of Lemma <ref>. By Lemma <ref> (c), Δ_J^2 J≤Δ_J^2 J + C_<ref>lem:rd_var.2Δ_I^2 I≤ (1 + C_<ref>lem:rd_var.2) Δ_I^2 I and Δ_J^2 J + Δ_R^2 R≥ (1 - C_<ref>lem:rd_var.2) Δ_I^2 I.
By Lemma <ref>,
Ł_I - ∑_i ∈ Iϵ_i^2 ≥ (1 - C_<ref>lem:rd_var.2) Δ_I^2 I - C_<ref>cor:in_err.1 s_R log(p ∨ n) - 48 C_u,2^2 C_x^4 σ_x^2 Δ_R^2 s_R log(p ∨ n)/
- C_u,2 C_x (C_x σ_x C_β + 2 C_ϵ/√(s_I)) (1 + √(1 + C_<ref>lem:rd_var.2)) √(Δ_I^2I s_I log(p ∨ n))
≥ (1 - C_<ref>cor:mix_err.3) Δ_I^2 I,
where C_<ref>cor:mix_err.3 = C_<ref>lem:rd_var.2 + 3 C_<ref>cor:in_err.1/C + 96 C_u,2^2 C_x^4 σ_x^2/C_m r + C_u,2 C_x √(3)/√(C) (C_x σ_x C_β + 2 C_ϵ/√(s)) (1 + √(1 + C_<ref>lem:rd_var.2)).
The localization error bound of {τ̂_k} in Theorem <ref> follows from Lemma <ref>, Corollary <ref> and Corollary <ref>. Given the localization error bound, the error bound of the parameter estimation follows from Lemma <ref>.
§ ADDITIONAL NUMERICAL RESULTS
§.§ The Single Changepoint Model in Section <ref>
The data in the single changepoint scenario in Section <ref> are generated from the following model,
y_i = _i^⊤_1 𝕀{i ≤τ^∗} + _i^⊤_2 𝕀{i > τ^∗} + ϵ_i, i = 1,…, n,
where {ϵ_i} and {_i} are drawn independently satisfying ϵ_i ∼(0, 1) and _i ∼_p(0, Σ). Here Σ is a p × p matrix with elements Σ_ij = 1/2^|i- j|. The regression parameters of the model are set to be _1 = (1/3,1/3,1/3,1/3,0,…,0)_p × 1^⊤ and _2 = (0_1 × 4, 1/3, 1/3, 1/3, 1/3,0,…,0)_p × 1^⊤. We set n = 1200 and the true changepoint τ^∗ = 120.
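A sketch of the corresponding data-generating code is given below; the ambient dimension p = 200 is an assumed value, since it is not specified in the text.

```python
import numpy as np

def generate_single_cp(n=1200, p=200, tau=120, seed=0):
    """Draw (X, y) from the single-changepoint model described above."""
    rng = np.random.default_rng(seed)
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta1 = np.zeros(p); beta1[:4] = 1 / 3
    beta2 = np.zeros(p); beta2[4:8] = 1 / 3
    eps = rng.normal(size=n)
    y = np.where(np.arange(1, n + 1) <= tau, X @ beta1, X @ beta2) + eps
    return X, y

X, y = generate_single_cp()
print(X.shape, y.shape)
```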
§.§ Complementary Numerical Results in Section <ref>
We provide the complementary numerical results of the multiple changepoint scenarios in Section <ref> with n varying from n=300 to n=1200. For the proposed method, we set r=0.9 as recommended. It provides almost comparable performance to the original algorithm in all cases.
|
http://arxiv.org/abs/2307.00269v1
|
20230701082036
|
AE-RED: A Hyperspectral Unmixing Framework Powered by Deep Autoencoder and Regularization by Denoising
|
[
"Min Zhao",
"Jie Chen",
"Nicolas Dobigeon"
] |
cs.CV
|
[
"cs.CV",
"eess.IV"
] |
AE-RED: A Hyperspectral Unmixing Framework Powered by Deep Autoencoder and Regularization by Denoising
Min Zhao, Student Member, IEEE,
Jie Chen, Senior Member, IEEE
and Nicolas Dobigeon, Senior Member, IEEE
M. Zhao and J. Chen are with School of Marine Science and Technology,
Northwestern Polytechnical University, Xi'an 710072, China (e-mail:
[email protected]; [email protected]).
N. Dobigeon is with University of Toulouse, IRIT/INP-ENSEEIHT,
CNRS, 2 rue Charles Camichel, BP 7122, 31071 Toulouse Cedex 7, France (e-mail: [email protected]).
Part of this work was supported by the Artificial Natural Intelligence Toulouse Institute (ANITI, ANR-19-PI3A-0004) and the IMAGIN project (ANR-21-CE29-0007).
August 1, 2023
Spectral unmixing has been extensively studied with a variety of methods and used in many applications. Recently, data-driven techniques based on deep learning have received great attention for spectral unmixing, owing to their superior ability to automatically learn structural information. In particular, autoencoder-based architectures have been elaborately designed to solve blind unmixing and to model complex nonlinear mixtures. Nevertheless, these methods perform the unmixing task as black boxes and lack interpretability. On the other hand, conventional unmixing methods carefully design regularizers to add explicit prior information, and algorithms such as plug-and-play (PnP) strategies utilize off-the-shelf denoisers to plug in powerful priors. In this paper, we propose a generic unmixing framework, named AE-RED, that integrates the autoencoder network with regularization by denoising (RED).
More specifically, we decompose the unmixing optimization problem into two subproblems. The first one is solved using deep autoencoders to implicitly regularize the estimates and model the mixture mechanism. The second one leverages the denoiser to bring in explicit prior information.
In this way, both the characteristics of deep autoencoder-based unmixing methods and the priors provided by denoisers are merged into our framework to enhance the unmixing performance. Experimental results on both synthetic and real data sets show the superiority of the proposed framework compared with state-of-the-art unmixing approaches.
Hyperspectral unmixing, deep learning, autoencoder, plug-and-play, image denoising, RED.
§ INTRODUCTION
Hyperspectral imaging has been a widely explored imaging technique during recent years and is still receiving a growing attention in various applicative fields <cit.>. Benefiting from the rich spectral information, hyperspectral images enable the analysis of fine materials in the observed scenes to tackle various challenging tasks such as target detection and classification <cit.>. However, due to the limitations of the imaging acquisition devices, there is an unsurmountable trade-off between the collected spectral and spatial information, which limits the spatial resolution of the hyperspectral sensors. As a consequence, a pixel observed by a hyperspectral sensor generally corresponds to a relatively large area and may encompass several materials, in particular when observing complex scenes. More precisely, the spectrum collected at a given spatial position of the scene is assumed to be a mixture of several elementary spectral signatures associated with the materials present in the observed pixel. This has led to research focused on hyperspectral unmixing (HU), which aims at decomposing the ith observed pixel spectrum _i ∈ℝ^B into a set of R spectral signatures of so-called endmembers collected in the matrix =[𝐬_1,…,𝐬_R] ∈ℝ^B× R and their associated fractions or abundances _i ∈ℝ^R <cit.>.
For the sake of physical interpretability, the abundances are subject to two constraints, namely abundance sum-to-one constraint (ASC), 1_R^⊤_i=1, and abundance nonnegativity constraint (ANC), _i≥0. The endmembers are constrained to be nonnegative (ENC), ≥0.
Many methods have been proposed in the literature to address the HU problem <cit.>. Considering a set of N observed pixels =[_1,…,_N] ∈ℝ^B× N sharing the same endmembers, HU can be formulated as an optimization problem, which aims at estimating the endmembers and the abundances jointly, i.e.,
min_, ∑_i=1^N 𝒟[_i || ℳ(, _i))] + ℛ(, )
s.t. 1_R^⊤=1_N^⊤, ≥0, and ≥0
where
* 𝒟(·,·) stands for a discrepancy measure (e.g., divergence),
* ℳ(·,·) describes the inherent nonlinear mixture model which relates the endmembers and the abundances to the measurements,
* ℛ(·,·) acts as a regularization term that encodes prior information regarding the endmembers and the abundances .
The regularization ℛ(·,·) is often designed to be separable with respect to the abundances and endmembers,
ℛ(,)=ℛ_ e()+ℛ_ a(),
where the endmember and abundance prior information is encoded in ℛ_ e and ℛ_ a, respectively. For instance, geometry-based penalizations, such as minimum volume <cit.>, are often chosen as endmember regularizers. Sparsity-based <cit.>, low-rankness <cit.> or spatial regularizers, such as total variation (TV) <cit.>, are usually utilized to promote expected properties of the abundances. This work specifically focuses on the design of an abundance regularization.
As for the mixing process, typical methods rely on an explicit mathematical expression for ℳ(·,·) to describe the mixture mechanism. For example, the linear mixing model (LMM) is by far the most used in the literature since it provides a generally admissible first-order approximation of the physical processes underlying the observations. LMM assumes that the measured spectrum is a linear combination of endmembers weighted by the abundances, which assumes that the incident light comes in and only reflects once on the ground before reaching the hyperspectral sensor. Besides, bilinear models consider second-order reflections, for instance in the case of multiple vegetation layers <cit.>. These explicit models are usually designed by describing the path of the light, along with its scattering and the interaction mechanisms among the materials. They are thus generally referred to as physics-based models. However, in some acquisition scenarios, they may fail to accurately account for real complex scenes. Data-driven methods have been thus proposed to implicitly learn the mixing mechanism from the observed data. Nevertheless, if not carefully designed a data-driven method may overlook the physical mixing process and require abundant training data <cit.>.
§.§ Motivation
Numerous methods cope with the HU problem by carefully designing the data-fitting term and the regularizer <cit.>. To reduce the computational complexity, most HU methods are based on the LMM. It may be not sufficient to account for spectral variability and endmember nonlinearity. On the other hand, designing a relevant regularizer is not always trivial and is generally driven by an empirical yet limited knowledge. For these reasons, research works have been devoted to the design of deep learning based HU approaches. Among them, autoencoders (AEs) become increasingly popular for unsupervised HU. The encoder is trained to compress the input into a lower dimensional latent representation, usually the abundances. The decoder is generally designed to mimic the mixing process parametrized by the endmember signatures and to produce the hyperspectral image from the abundances defined in the latent space.
AE-based HU methods exhibit several advantages: i) they can embed a physics-based mixing model into the structure of the decoder, ii) they implicitly incorporate data-driven image priors, and iii) the unmixing procedure can benefit from powerful optimizers, such as Adam <cit.> and SGD <cit.>. However, these deep architectures behave as black boxes and the results lack interpretability. Motivated by these findings, this paper attempts to answer the following question: is it possible to design an unsupervised HU framework which combines the advantages of AE-based unmixing networks while leveraging explicit priors?
§.§ Contributions
This paper derives a novel HU framework which answers this question affirmatively. More precisely, it introduces an AE-based unmixing strategy while incorporating an explicit regularization in the form of RED. To solve the resulting optimization problem, an alternating direction method of multipliers (ADMM) is implemented, with the great advantage of decomposing the initial problem into several simpler subproblems. One of these subproblems can be interpreted as a standard training task associated with an AE. Another is a standard denoising problem. The main advantages of the proposed framework are threefold:
* This framework combines the deep AE with RED priors for unsupervised HU. By incorporating the benefits of AE with the regularization of denoising, the framework provides accurate unmixing results.
* The optimization procedure splits the unmixing task into two main subtasks. The first subtask involves training an AE to learn the mixing process and estimate a latent representation of the image as abundance maps. In the second subtask, a denoising step is applied to improve the estimation of the latent representation.
* The proposed framework is highly versatile and can accommodate various architectures for the encoder, and the decoder can be tailored to mimic any physics-based mixing model, such as the LMM, nonlinear mixing models, and mixing models with spectral variability.
This paper is organized as follows. Section <ref> provides a concise overview of related HU algorithms, with a particular focus on the design of regularizations and AE-based unmixing methods. Section <ref> describes some technical ingredients necessary to build the proposed framework. In Section <ref>, the proposed generic framework is derived, and details about particular instances of this framework are given. Section <ref> reports the results obtained from extensive experiments conducted on synthetic and real datasets to demonstrate the superiority of the proposed framework. Finally, Section <ref> concludes the paper.
§ RELATED WORKS
This section provides brief overviews on two aspects related to this work, namely regularization design in HU and AE-based unmixing.
§.§ Regularization design
Efficient algorithms for HU often require effective regularizations that incorporate prior knowledge about the images and constrain the solution space. Traditional methods exploit the spatial consistency of the image, and sparsity-based regularizers have also been extensively used on the abundances since the number of endmembers is typically much smaller than the size of the spectral library.
In <cit.>, a TV regularizer is applied to the abundance to promote similarity between adjacent pixels, and an ℓ_1-norm is used for sparse unmixing. Since the ℓ_1-norm is inconsistent with the abundance sum-to-one constraint, ℓ_p-norms with 0<p<1 have been studied to obtain sparse estimates <cit.>. In <cit.>, a non-local sparse unmixing method is proposed to exploit similar patterns and structures in the abundance image. A weighted average is applied to all pixels to exploit non-local spatial information. Spatial group sparsity regularizers have also been proposed to incorporate spatial priors and sparse structures. The authors of <cit.> introduce a spatial group sparsity regularizer generated using image segmentation methods such as SLIC. In <cit.>, a cofactorization model is used to jointly exploit spectral and spatial information, while the work of <cit.> introduces an adaptive graph to automatically determine the best neighbor points of pixels and assign corresponding weights. However, these methods require handcrafted regularizers, which can be time-consuming when non-standard regularizers are applied to large images.
More recently, the idea of PnP has been introduced to exploit the intrinsic properties of hyperspectral images. These methods use generic denoisers that act as explicit regularizers. In <cit.>, an HU method based on an ADMM algorithm is introduced that can handle explicit regularizations. By selecting different pattern switch matrices, the denoising operator can be used to penalize the reconstructed hyperspectral image or estimated abundances. The work of <cit.> proposes a nonlinear unmixing method with prior information provided by denoisers. However, the denoisers used in these methods are traditional denoising methods or deep denoisers trained on grayscale or RGB images, which may not be optimal for hyperspectral images.
§.§ Deep AE-based unmixing methods
Elegant neural network structures have been proposed to formulate the HU task as a simple training process. Early works used fully connected layers to design the model, such as <cit.> and <cit.>. However, these networks process the pixels independently and ignore the spatial correlation intrinsic to the image. To overcome this limitation, some AE-based methods include spatial regularizations, such as total variation (TV), in the loss function <cit.>. Recently, convolutional neural networks (CNNs) have been used to perform HU and have shown promising performance. CNNs convolve the input data with filter kernels to capture spatial information <cit.>. Recurrent neural networks (RNNs), which have memory cells, implement a sequential process with hidden states that depend on the previous states <cit.>. Hyperspectral images are often corrupted by noise or outliers, which can dramatically decrease the unmixing performance. To address this issue, denoising-oriented architectures have been proposed <cit.>. Some works have also proposed variants of encoders. In <cit.>, a dual-branch AE network is designed to leverage multiscale spatial contextual information.
Most AE-based HU methods use a fully connected linear layer in the decoder part to mimic the linear mixing process. However, considering the physical interactions between multiple materials and the superior ability of deep networks to model nonlinear relationships, some works <cit.> have focused on the design of structured decoders to ensure the interpretability of the nonlinear model inherent in the mixing process. The work of <cit.> introduces a nonlinear decoder. Recycling an LMM-based AE architecture, the decoder contains two parts: one linear and the other nonlinear. The linear part is considered a rough approximation of the mixture and is then fed into two fully connected layers with a nonlinear activation function to learn the nonlinear mechanism. However, this post-nonlinear model-based decoder may not be sufficient to represent complex nonlinear cases. Some works <cit.> reexamine the nonlinear fluctuation part of the decoder. For example, the method in <cit.> designs a special layer to capture the second-order interaction, similar to the Fan or bilinear models. Moreover, spectral variability can also be addressed by using deep generative decoders <cit.>.
Recently, deep unfolding techniques have been used to unroll a model and its related iterative algorithm into deep networks. This approach can include physical interpretability into the design of network layers, and such model-inspired networks are also used in the design of unmixing methods. In <cit.>, an iterative shrinkage-thresholding algorithm (ISTA)-inspired network layer is applied to build an AE-based unmixing architecture. The work of <cit.> unrolls a sparse non-negative matrix factorization (NMF)-based algorithm with an ℓ_p-norm regularizer to integrate prior knowledge into the unmixing network. An ADMM solver with a sparse regularizer is also unrolled to build an AE-like unmixing architecture. However, these methods do not utilize spatial consistency information in the design of the network, which may limit their unmixing performance.
§ BACKGROUND
§.§ Autoencoder-based unmixing
As highlighted in the previous section, AEs have demonstrated to be a powerful tool to conduct unsupervised unmixing. An AE typically consists of an encoder and a decoder. The encoder, represented by 𝖤__ E(·), aims at learning a nonlinear mapping from input data, denoted as _i, to their corresponding latent representations, denoted as _i. This can be expressed as follows:
_i= 𝖤__ E(_i),
where _ E gather all parameters of the encoder. The input =[_1,…,_N] depends on the architecture chosen for the encoder network. For instance, when dealing with the specific task of HU, the input can be chosen as the image pixels =[_1,…,_N] or as noise realizations =[_1,…,_N] with _i ∼𝒩(0,𝐈). The decoder, denoted by 𝖣__ D(·), is responsible for reconstructing the data, or at least an approximation _i, from the latent feature _i provided by the encoder. This can be expressed as follows:
_i= 𝖣__ D(_i),
where _ D parameterizes the decoder. Under this paradigm, adjusting the encoder and decoder parameters _ E and _ D is generally achieved by minimizing the empirical expectation of a discrepancy measure between the input data _1,…,_N and their corresponding approximation _1,…,_N, i.e.,
ℒ(_ E,_ D)=1/N∑_i=1^N𝒟[𝐲_i||_i]
with _i= 𝖣__ D(𝖤__ E(_i)). This reconstruction loss function can be complemented with additional terms to account for any desired property regarding the network parameters and the latent representation.
Drawing a straightforward analogy with the problem (<ref>), AE-based unmixing frameworks generally assume that the latent variable =[_1,…,_N] can be considered as an estimate of the abundance matrix . The architecture of the encoder should be chosen to be able to extract key spatial features from the input data. Several choices are possible and will be discussed as archetypal examples later in Section <ref>. The decoder can then be designed to mimic the mixing process ℳ(·,·) in (<ref>). The endmember signatures to be recovered are part of the decoder parameters, i.e., Θ_ D = {Θ̃_ D,} where Θ̃_ D are intrinsic network parameters. For instance, when the decoder is designed according to a physics-based nonlinear mixing model prescribed beforehand, Θ̃_ D gathers the nonlinearity parameters. In the simplistic assumption of the LMM, the decoder does not depend on any additional intrinsic parameters and Θ_ D =.
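As a concrete illustration (and not the specific architecture developed later in this paper), a minimal LMM-style unmixing autoencoder could be sketched in PyTorch as follows: the softmax latent enforces the ANC/ASC by construction, the decoder weight matrix plays the role of the endmembers , and the layer sizes and band/endmember numbers are purely illustrative.

```python
import torch
import torch.nn as nn

class LinearUnmixingAE(nn.Module):
    """Minimal LMM-style unmixing autoencoder: the softmax latent acts as the
    abundances (nonnegative and summing to one by construction) and the decoder
    weights act as the endmembers, clamped to be nonnegative."""
    def __init__(self, n_bands, n_endmembers):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, n_endmembers),
        )
        self.endmembers = nn.Parameter(torch.rand(n_bands, n_endmembers))

    def forward(self, y):
        a = torch.softmax(self.encoder(y), dim=-1)    # abundances on the simplex
        y_hat = a @ self.endmembers.clamp(min=0).T    # LMM decoder: S a
        return y_hat, a

B, R = 156, 4                     # illustrative numbers of bands / endmembers
model = LinearUnmixingAE(B, R)
y = torch.rand(32, B)             # a mini-batch of pixels
y_hat, a = model(y)
loss = ((y - y_hat) ** 2).mean()  # reconstruction term of the training loss
```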
§.§ Regularization by denoising priors
Various regularizers have been considered to design the term ℛ_a(·). Among them, PnP is a flexible and generic framework that naturally emerges when resorting to splitting-based optimization procedures. This framework replaces the proximal operator associated with ℛ_a(·) by an off-the-shelf, highly engineered denoiser. This strategy has been effectively used when tackling many imaging inverse problems, such as image denoising, super-resolution and inpainting <cit.>. Recently, an advanced version of PnP, regularization by denoising (RED) <cit.>, has demonstrated superior performance. It can be expressed as
ℛ_a(𝐀) = 1/2 𝐀^⊤(𝐀-𝖢(𝐀)),
where 𝖢(·) is a denoiser. This regularizer is proportional to the inner product between the abundance and its post-denoising residual and exhibits many appealing characteristics. First, it is a convex function. Second, under some mild assumptions and reasonable conditions on 𝖢(·), its derivative with respect to 𝐀 is simple and given as the denoising residual, i.e., ∇ℛ_a(𝐀)=𝐀-𝖢(𝐀) <cit.>. This work aims at devising a generic AE-based HU framework that can incorporate the RED regularizer.
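As a minimal illustration of these two properties, the following Python sketch evaluates ℛ_a(·) and its gradient; the Gaussian-blur denoiser and all function names are our own illustrative choices, not those of any reference RED implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def denoiser(A, sigma=1.0):
    # Stand-in denoiser C(.): smoothing of the R x N abundance matrix along the pixel index.
    return gaussian_filter(A, sigma=(0, sigma))

def red_value(A, C=denoiser):
    # RED regularizer: 0.5 * <A, A - C(A)>
    return 0.5 * np.sum(A * (A - C(A)))

def red_grad(A, C=denoiser):
    # Under the usual RED assumptions the gradient is the denoising residual A - C(A).
    return A - C(A)

# toy example: R = 3 endmembers, N = 100 pixels, columns summing to one
A = np.random.dirichlet(np.ones(3), size=100).T
print(red_value(A), np.linalg.norm(red_grad(A)))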
§ PROPOSED METHOD
§.§ Generic framework
The generic unmixing framework proposed in this paper, referred to as AE-RED hereafter, formulates the HU problem as the training of an AE while leveraging the RED paradigm. Adopting a conventional Euclidean divergence for 𝒟(·,·), the HU problem
(<ref>) is now specified as
min_Θ ‖𝐘-𝖣_Θ_D(𝖤_Θ_E(𝐖))‖_F^2 +λ 𝖤_Θ_E(𝐖)^⊤(𝖤_Θ_E(𝐖)-𝖢(𝖤_Θ_E(𝐖)))
s.t. 1^⊤_R 𝖤_Θ_E(𝐖)=1_N^⊤, 𝖤_Θ_E(𝐖)≥0 and 𝐒≥0,
with Θ = {Θ_E, Θ_D}. As stated in the previous section, the endmembers are part of the set of decoder parameters, i.e., Θ_D={Θ̃_D, 𝐒}, and the latent representation directly provides abundance estimates, i.e., 𝐀=𝖤_Θ_E(𝐖). This formulation of the unmixing task leverages a combination of the AE modeling and RED, providing several benefits. First, the AE is effective in handling the mixture mechanism and learning underlying information. Second, RED provides a flexible and efficient way to model data priors.
Solving the minimization problem (<ref>) with deep learning-flavored black-box optimizers is challenging, if not infeasible, in particular because back-propagating gradients with respect to Θ_E would require differentiating the denoising function 𝖢(·). For most denoisers, this differentiation is not straightforward and may require a huge amount of computation. However, it is worth noting that one of the great advantages of RED is that its derivative can be computed directly. To benefit from this property, one simple strategy consists in reintroducing the abundance matrix 𝐀 explicitly as an auxiliary variable and then reformulating (<ref>) as the constrained problem
min_{Θ, 𝐀} ‖𝐘-𝖣_Θ_D(𝖤_Θ_E(𝐖))‖_F^2+λ𝐀^⊤(𝐀-𝖢(𝐀))
s.t. 1^⊤_R𝖤_Θ_E(𝐖)=1_N^⊤, 𝖤_Θ_E(𝐖)≥0, 𝐒≥0
and 𝐀=𝖤_Θ_E(𝐖).
To solve (<ref>), a common yet efficient strategy boils down to splitting the initial problem into several simpler subproblems following an ADMM. The main steps of the resulting algorithmic scheme are
Θ^(k+1) = argmin_Θ ‖𝐘-𝖣_Θ_D(𝖤_Θ_E(𝐖))‖_F^2 +μ‖𝐀^(k)-𝖤_Θ_E(𝐖)-𝐆^(k)‖_F^2
s.t. 1^⊤_R𝖤_Θ_E(𝐖)=1_N^⊤, 𝖤_Θ_E(𝐖)≥0 and 𝐒≥0
𝐀^(k+1) = argmin_𝐀 λ𝐀^⊤(𝐀-𝖢(𝐀)) +μ‖𝐀-𝖤_Θ_E^(k+1)(𝐖)-𝐆^(k)‖_F^2
𝐆^(k+1) = 𝐆^(k)-𝐀^(k+1)+𝖤_Θ_E^(k+1)(𝐖)
where μ is the penalty parameter and 𝐆 is the dual variable.
The framework of the proposed AE-RED is summarized in Fig. <ref>. It combines a data-driven autoencoder with a model-free RED regularizer. The algorithmic scheme is shown to be a convenient way to fuse the respective advantages of these two approaches. Note that, since the AE-based formulation is nonlinear, providing convergence guarantees for the resulting optimization scheme is not trivial. However, the experimental results reported in Section <ref> show that the proposed method is able to provide consistent performance. Finally, without loss of generality, detailed technical implementations of the first two steps (<ref>) and (<ref>) are discussed in the following paragraphs for specific architectures of the autoencoder.
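To make the scheme concrete, the following purely illustrative numpy sketch runs the three ADMM steps (<ref>)-(<ref>) on simulated LMM data. The deep encoder is replaced here by a toy column-wise softmax over free logits and the denoiser by a simple smoothing filter; both are stand-ins of our own choosing, not the architectures used in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy data: B bands, N pixels, R endmembers, linear mixing Y = S A + noise.
B, N, R = 50, 400, 3
S_true = rng.uniform(0.1, 1.0, (B, R))
A_true = rng.dirichlet(np.ones(R), size=N).T
Y = S_true @ A_true + 0.01 * rng.standard_normal((B, N))

def encoder(theta):
    # Toy stand-in for E_{Theta_E}(W): column-wise softmax of free logits,
    # so that the "abundances" satisfy ANC and ASC by construction.
    z = np.exp(theta - theta.max(axis=0))
    return z / z.sum(axis=0)

def denoiser(A):
    # Stand-in for C(.): smoothing along the pixel index.
    return gaussian_filter(A, sigma=(0, 2.0))

lam, mu, lr, K = 0.1, 0.5, 0.5, 15
theta = rng.standard_normal((R, N))          # "encoder parameters" (toy)
S = np.abs(rng.standard_normal((B, R)))      # decoder weights = endmembers
A = encoder(theta)
G = np.zeros_like(A)

for k in range(K):
    # (1) Theta-update: gradient steps on ||Y - S E(W)||_F^2 + mu ||A - E(W) - G||_F^2.
    for _ in range(100):
        E = encoder(theta)
        gE = (-2 * S.T @ (Y - S @ E) - 2 * mu * (A - E - G)) / N
        theta -= lr * E * (gE - np.sum(gE * E, axis=0))              # softmax backprop
        S = np.clip(S - lr * (-2 * (Y - S @ E) @ E.T) / N, 0, None)  # ENC: S >= 0
    # (2) A-update: one fixed-point iteration of the RED subproblem.
    A = (lam * denoiser(A) + mu * (encoder(theta) + G)) / (lam + mu)
    # (3) dual update.
    G = G - A + encoder(theta)

rel_err = np.linalg.norm(Y - S @ encoder(theta)) / np.linalg.norm(Y)
print(f"relative reconstruction error: {rel_err:.3f}")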
§.§ Updating Θ
At each iteration, the set of parameters Θ of the autoencoder is updated through rule (<ref>). This can be achieved by training the network with the function in (<ref>) as the objective function. The first term measures the data fit while the second acts as a regularization enforcing the representation 𝖤_Θ_E(𝐖) in the latent space to be close to a corrected version 𝐀-𝐆 of the abundance.
Regarding the ASC, ANC and ENC constraints, they can be ensured by an appropriate design of the network. In practice, Adam is used to train the autoencoder.
Various autoencoder architectures can be envisioned, and the encoder and the decoder can be chosen by the end-user with respect to the targeted applicative context. The encoder 𝖤_Θ_E(·) aims at extracting relevant features to be incorporated into the estimated abundances. A popular choice is a CNN-based architecture whose input is the observed image. Another promising approach consists in leveraging a deep image prior (DIP) with a noise input. These two particular choices are discussed later in this section. Regarding the decoder 𝖣_Θ_D(·), it generally mimics the mixing process and the endmembers usually define the weights of one specially designed linear layer. Again, the proposed AE-RED framework is sufficiently flexible to host various architectures and to handle various spectral mixing models. A popular strategy is to design the decoder such that it combines physics-based and data-driven strategies to account for complex nonlinearities or spectral variabilities. For instance, additive nonlinear and post-nonlinear models have been extensively investigated <cit.>, as well as spectral variability-aware endmember generators <cit.>.
Some archetypal examples of possible elements composing the architecture of the AE are (non-exhaustively) listed in Fig. <ref>(c). In the sequel of this paper, for illustration purposes but without loss of generality, two particular architectures are discussed and then instantiated, as shown in Fig. <ref>. Both consider an LMM-based decoder composed of a convolutional layer with a filter size of 1× 1× B to mimic the LMM. The adjusted decoder weights are finally extracted to estimate the endmember spectral signatures. For this particular instance of the decoder, the optimization problem (<ref>) can be rewritten as
{Θ_E, 𝐒} ∈ argmin_{Θ_E, 𝐒} ‖𝐘-𝐒𝖤_Θ_E(𝐖)‖_F^2 +μ‖𝐀-𝖤_Θ_E(𝐖)-𝐆‖_F^2
s.t. 1^⊤_R𝖤_Θ_E(𝐖)=1_N^⊤, 𝖤_Θ_E(𝐖)≥0 and 𝐒≥0.
The two examples of AE considered in this paper differ by the architecture of the encoder. The first network is composed of a CNN-based encoder while the second is a DIP. These two choices are discussed below.
§.§.§ CNN-based encoder
The architecture of the CNN-based encoder is shown in Fig. <ref>. The whole image 𝐘 is used here as the input to extract structural information from the hyperspectral image. Another choice would consist in considering overlapping patches as the input. The encoder is composed of 5 blocks. The first two blocks implement 3× 3 convolution filters to learn the spatial consistency information. The next two blocks apply 1× 1 convolution operators (i.e., fully connected layers) to model the spectral priors. Moreover, to satisfy the ANC and ASC, the conventional LeakyReLU activation function of the last block is replaced by a softmax function. The output dimensions of the blocks are progressively reduced to compress the input pixels into the abundance domain. Considering the optimization function defined in (<ref>), the objective function to train this model is expressed as
ℒ_AE(Θ)=‖𝐘-𝐒𝖤_Θ_E(𝐘)‖_F^2 +μ‖𝐀-𝖤_Θ_E(𝐘)-𝐆‖_F^2.
The resulting unmixing method will be denoted as AE-RED-C in the sequel.
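For concreteness, a minimal PyTorch sketch of such an encoder/decoder pair is given below. Only the block structure (two 3×3 convolutions, 1×1 convolutions, softmax output, and a 1×1 linear decoder without bias) follows the description above; the number of filters per block and the toy input are our own illustrative choices.

import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    # Five blocks: two 3x3 convolutions (spatial context), 1x1 convolutions
    # (spectral modeling), and a final 1x1 convolution with softmax so that the
    # output abundances satisfy ANC and ASC pixel-wise.
    def __init__(self, bands=198, R=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 128, 3, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 64, 3, padding=1),    nn.BatchNorm2d(64),  nn.LeakyReLU(0.1),
            nn.Conv2d(64, 32, 1),                nn.BatchNorm2d(32),  nn.LeakyReLU(0.1),
            nn.Conv2d(32, 16, 1),                nn.BatchNorm2d(16),  nn.LeakyReLU(0.1),
            nn.Conv2d(16, R, 1),                 nn.Softmax(dim=1),
        )
    def forward(self, y):          # y: (1, B, H, W) hyperspectral image
        return self.net(y)         # abundances: (1, R, H, W)

class LMMDecoder(nn.Module):
    # A single 1x1 convolution without bias mimics the LMM: y_hat = S a.
    # Its (clamped non-negative) weights are read out as the endmember matrix S (B x R).
    def __init__(self, bands=198, R=4):
        super().__init__()
        self.mix = nn.Conv2d(R, bands, 1, bias=False)
    def forward(self, a):
        return self.mix(a)
    @property
    def endmembers(self):
        return self.mix.weight.detach().clamp(min=0).squeeze()   # (B, R)

# quick shape check on a random "image"
enc, dec = CNNEncoder(), LMMDecoder()
y = torch.rand(1, 198, 100, 100)
a = enc(y)
print(a.shape, dec(a).shape, dec.endmembers.shape)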
§.§.§ Deep image prior-based encoder
Another architecture considered in this paper exploits the DIP strategy to implicitly learn the priors of the hyperspectral image. Unlike conventional AE-based unmixing methods, which use spectral signatures as input for training, this network uses a Gaussian noise image 𝐙 with the size of the abundance matrix as input to generate the hyperspectral image. The encoder can be a U-net-like architecture to extract features at different levels. In this work the encoder has been designed with an encoder-decoder structure for abundance estimation. The inner encoder is composed of 4 down-sampling blocks to compress the features. Each down-sampling block consists of three layers, namely a convolution layer with a filter of size 3× 3, a batch normalization layer, and a ReLU nonlinear activation layer. The inner decoder is composed of 5 up-sampling blocks. Each of the first 4 blocks has 4 layers: a bilinear upsampling layer, a convolution layer, a batch normalization layer and a ReLU nonlinear activation layer. The last block has two layers, namely a convolution layer and a softmax nonlinear activation layer, to generate the estimated abundances while satisfying the ANC and ASC. Skip connections relate the encoder and decoder; they are used to fuse the low-level and high-level features and to obtain multiscale information. The objective function used to train this deep model is also defined as (<ref>), where 𝖤_Θ_E(𝐘) is replaced by 𝖤_Θ_E(𝐙).
The proposed method with this architecture is denoted as AE-RED-U.
§.§ Updating 𝐀
The abundance matrix 𝐀 is updated by solving (<ref>).
This problem is a standard RED objective function and can be interpreted as a denoising of 𝖤_Θ_E(𝐖)+𝐆. The seminal paper <cit.> discusses two algorithmic schemes to solve this problem, namely fixed-point and gradient-descent strategies. In this work we derive a fixed-point algorithm by setting the gradient of the objective function to 0,
λ(𝐀-𝖢(𝐀))+μ(𝐀-𝖤_Θ_E(𝐖)-𝐆)=0.
Then, at the (k+1)th iteration of the ADMM, the jth inner iteration of the fixed-point algorithm can be summarized as
𝐀^(k+1,j+1) = 1/(λ+μ)[λ𝖢(𝐀^(k+1,j)) +μ(𝖤^(k+1)_Θ_E(𝐖)+𝐆^(k))].
For illustration, we consider two particular denoisers 𝖢(·), namely non-local means (NLM) <cit.> and block-matching and 4-D filtering (BM4D) <cit.>. NLM is a 2-D denoiser and is applied to each band independently, while BM4D is a 3-D-cube-based denoiser. Depending on the architecture chosen for the encoder (see Section <ref>), the corresponding instances of the proposed framework are named AE-RED-CNLM, AE-RED-CBM4D, AE-RED-UNLM and AE-RED-UBM4D, respectively.
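A possible implementation of this abundance update, using scikit-image's non-local means as the plug-in denoiser and treating each abundance map as a 2-D image, is sketched below; the parameter values and helper names are illustrative assumptions rather than the settings used in the experiments.

import numpy as np
from skimage.restoration import denoise_nl_means

def nlm_denoise_abundances(A, height, width, h=0.05):
    # A: (R, N) abundance matrix; each row is reshaped to a height x width map and
    # denoised independently with non-local means (the 2-D denoiser case).
    out = np.empty_like(A)
    for r in range(A.shape[0]):
        out[r] = denoise_nl_means(A[r].reshape(height, width), h=h,
                                  patch_size=5, patch_distance=6,
                                  fast_mode=True).ravel()
    return out

def red_abundance_update(A, EW, G, height, width, lam=0.1, mu=0.5, n_inner=1):
    # Fixed-point iterations A <- [lam*C(A) + mu*(E(W) + G)] / (lam + mu).
    for _ in range(n_inner):
        A = (lam * nlm_denoise_abundances(A, height, width) + mu * (EW + G)) / (lam + mu)
    return A

# toy usage: R = 4 maps of size 20 x 20
rng = np.random.default_rng(1)
height = width = 20
A0 = rng.dirichlet(np.ones(4), size=height * width).T
A1 = red_abundance_update(A0, A0, np.zeros_like(A0), height, width)
print(A1.shape)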
§ EXPERIMENTAL RESULTS
This section presents experiments conducted to evaluate the effectiveness of the proposed unmixing framework. These experiments have been conducted on synthetic and real data sets to quantitatively assess the unmixing results and to demonstrate the effectiveness of our proposed method in real applications, respectively (see Sections <ref> and <ref>).
Compared methods – Several state-of-the-art methods have been considered for comparison. A first family of unmixing algorithms consists of conventional methods. SUnSAL-TV <cit.> leverages a handcrafted TV term to regularize the optimization function. PnP-NMF <cit.> is an NMF-based unmixing method in which denoisers are embedded in a PnP fashion to introduce prior information. A second family of compared methods is based on deep learning. CNNAE <cit.> is a deep AE-based unmixing method where convolutional filters capture spatial information. UnDIP <cit.> is a DIP-based unmixing method which uses a convolutional network; a geometric endmember extraction method is applied to estimate the endmembers. SNMF <cit.> is a deep unrolling algorithm, which unfolds the ℓ_p-sparsity-constrained NMF model into trainable deep architectures. CyCU-Net <cit.> proposes cascaded AEs for unmixing with a cycle-consistency loss to enhance the unmixing performance.
Hyperparameter settings –
All hyperparameters of the compared methods have been manually adjusted to obtain the best unmixing performance. The choice of the parameters associated with the proposed AE-RED method is discussed in detail as follows. The regularization parameters λ and μ have been selected according to the noise level of the generated data. More precisely, λ and μ have been set to 0.5 for the data with noise levels of 5dB and 10dB, to 0.1 for the data with a noise level of 20dB, and to 0.01 for the data with a noise level of 30dB.
The learning rate to train the deep networks is set to 1×10^-3, and to 1×10^-4 to fine-tune the decoder weights. For the proposed CNN-based unmixing method, the number K of ADMM iterations is set to 15, the number of epochs is set to 250 and the number of inner iterations when updating the abundances is set to J=1. As for the proposed DIP-based unmixing method, K, the number of epochs and J are respectively set to 10, 2300 and 1.
Performance metrics – The root mean square error (RMSE) is used to evaluate the abundance estimation performance, which can be expressed by
RMSE = √(1/NR∑_i=1^N‖𝐚_i-𝐚̂_i‖^2),
where 𝐚_i is the actual abundance of the ith pixel, and 𝐚̂_i is the corresponding estimate. A smaller value of RMSE indicates better abundance estimation results. The endmember estimation is assessed by computing the mean spectral angle distance (mSAD) and the mean spectral information divergence (mSID) given by
mSAD=1/R∑_r=1^R arccos(𝐬_r^⊤𝐬̂_r/(‖𝐬_r‖ ‖𝐬̂_r‖))
and
mSID=1/R∑_r=1^R𝐩_rlog(𝐩_r/𝐩̂_r),
where 𝐬_r and 𝐬̂_r are the actual and estimated rth endmember, respectively, 𝐩_r=𝐬_r/(1^⊤𝐬_r) and 𝐩̂_r=𝐬̂_r/(1^⊤𝐬̂_r). A smaller value indicates better estimation results. Finally, the peak signal-to-noise ratio (PSNR) is used to evaluate the image denoising and reconstruction, which is defined by
PSNR=10×log_10(MAX^2/MSE)
where MAX is the maximum pixel value of the reconstructed image 𝐘̂ and MSE is the mean square
error between the reconstructed image and the noise-free image. A higher value of PSNR indicates better reconstruction.
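The four metrics can be computed directly from their definitions; the following numpy sketch (helper names are our own) illustrates this.

import numpy as np

def rmse(A, A_hat):
    # Root mean square error over all R abundances and N pixels.
    return np.sqrt(np.mean((A - A_hat) ** 2))

def msad(S, S_hat):
    # Mean spectral angle distance between true and estimated endmembers (columns of S).
    cos = np.sum(S * S_hat, axis=0) / (np.linalg.norm(S, axis=0) * np.linalg.norm(S_hat, axis=0))
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))

def msid(S, S_hat, eps=1e-12):
    # Mean spectral information divergence between the band-normalized spectra.
    P = S / np.sum(S, axis=0)
    Q = S_hat / np.sum(S_hat, axis=0)
    return np.mean(np.sum(P * np.log((P + eps) / (Q + eps)), axis=0))

def psnr(Y, Y_hat):
    # MAX is the maximum pixel value of the reconstructed image.
    mse = np.mean((Y - Y_hat) ** 2)
    return 10.0 * np.log10(Y_hat.max() ** 2 / mse)

# toy usage with random spectra and abundances
S = np.random.rand(224, 5); S_hat = S + 0.01 * np.random.randn(*S.shape)
print(msad(S, S_hat), msid(np.abs(S), np.abs(S_hat)))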
§.§ Experiments on the Synthetic data set
Data description – The synthetic images are composed of 100× 100 pixels. Abundance maps are generated using the method of the Hyperspectral Imagery Synthesis tools[http://www.ehu.es/ccwintco/index.php/Hyperspectral Imagery Synthesis tools for MATLAB] to mimic spatial homogeneity. A Gaussian field is drawn to generate the abundance matrix 𝐀. The abundance ground-truth is shown in Fig. <ref>.
The abundance maps satisfy the ANC and ASC. Sets of R=5 endmembers are randomly selected from the U.S. Geological Survey (USGS) spectral library with a number of spectral bands of B=224. These endmembers are mixed according to the LMM and an additive zero-mean Gaussian noise is considered with variances corresponding to 4 signal-to-noise ratios (SNR), i.e., SNR∈{5dB, 10dB, 20dB, 30dB}.
Results –
Tables <ref>-<ref> report the estimation results obtained by the compared algorithms in terms of RMSE for the abundance estimation, mSAD and mSID for the endmember estimation, and PSNR for the reconstruction. Conventional unmixing methods, such as SUnSAL-TV and PnP-NMF, achieve good unmixing results, demonstrating the usefulness of the explicit prior provided by manually designed regularization. Deep learning-based methods, such as CNNAE, SNMF and CyCU-Net, obtain suitable unmixing results and better endmember estimation results than the conventional methods, illustrating the ability of deep networks to embed prior information. These results also show that the proposed AE-RED framework outperforms the compared state-of-the-art methods across all performance metrics and noise levels. Fig. <ref> depicts the estimated abundance maps associated with the synthetic data set with SNR=10dB. It can be observed that the abundance maps estimated by the AE-RED framework exhibit better agreement with the ground-truth. Fig. <ref> shows the endmembers estimated by the proposed framework on the synthetic data set with SNR=20dB, which are consistent with the ground-truth.
§.§ Experiments on the Real data set
Data description – Finally, experiments conducted on two real data sets are discussed. The first is the Samson data set, which was acquired by the SAMSON observer and contains B=156 spectral channels ranging from 400 nm to 889 nm. The original image is of size 952× 952 pixels, and a subimage of 95× 95 pixels is cropped in the experiment. There are three endmembers in this data set, namely “water", “tree" and “soil". The second real data set used in these experiments is known as the Jasper Ridge image. It was acquired by Analytical Imaging and Geophysics (AIG) in 1999 with B=224 spectral bands covering a spectral range from 380 nm to 2500 nm. One considers a subimage of size 100× 100 pixels and B=198 channels after removing the bands affected by water vapor and atmospheric effects. It contains R=4 endmembers, namely “water", “soil", “tree" and “road".
Results –
As there is no ground-truth available for the real data sets, a quantitative performance evaluation of abundances and endmembers cannot be provided. Therefore, we only rely on the PSNR to evaluate the results of the compared methods. Table <ref> presents the PSNR performance of the compared methods obtained on the Samson data set. It is noteworthy that all methods produce comparable PSNR results, except for CNNAE, SNMF, and CyCU-Net, which provide significantly worse reconstructions. Although there is no ground-truth for the abundances, we can visually inspect the maps. For illustration purposes, we show the abundance maps estimated by the compared methods in Fig. <ref>. The proposed AE-RED framework successfully separates the materials and provides sharp abundance estimates.
Table <ref> also lists the PSNR results for the Jasper Ridge data set.
It can also be observed that the proposed method reaches the best PSNR. Fig. <ref> depicts the abundance maps estimated by all compared methods. Some of them, such as UnDIP, fail to recover the road. Due to the learning ability of deep networks, most deep learning-based methods are able to distinguish the individual materials. Finally, the proposed AE-RED framework provides abundance maps with more detailed information and sharper boundaries.
§ CONCLUSION
This paper proposed a generic unmixing framework that embeds RED within an autoencoder. By carefully designing the encoder and the decoder, the autoencoder is able to provide estimated abundance maps and endmember spectra. In particular, for illustration purposes, two different encoder architectures were considered, namely a CNN and a DIP. Moreover, the decoder can be chosen according to a particular mixture model. Leveraging an ADMM scheme, the resulting optimization problem was split into simpler subproblems. The first one was described by an objective function composed of a data-fitting term and a quadratic regularization; it was solved through the training of an autoencoder. The second subproblem was a standard RED objective function solved by a fixed-point strategy. Two denoisers were considered, namely NLM and BM4D. The effectiveness of the proposed framework was evaluated through experiments conducted on synthetic and real data sets. The results showed that the proposed framework outperforms state-of-the-art methods. Future work includes considering explicit endmember priors within the proposed framework and automatically selecting the mixing model.
|
http://arxiv.org/abs/2307.02141v1
|
20230705093240
|
Anisotropic Inflation in Dipolar Bose-Einstein Condensates
|
[
"Arun Rana",
"Abhijit Pendse",
"Sebastian Wüster",
"Sukanta Panda"
] |
gr-qc
|
[
"gr-qc",
"cond-mat.quant-gas",
"quant-ph"
] |
Department of Physics, Indian Institute of Science Education and Research, Bhopal, Madhya Pradesh 462 066, India (affiliation of all authors); [email protected]
Early during the era of cosmic inflation, rotational invariance may have been broken, only later emerging as a feature of low-energy physics. This motivates
ongoing searches for residual signatures of anisotropic space-time, for example in the power spectrum of the cosmic microwave background.
We propose that dipolar Bose-Einstein condensates (BECs) furnish a laboratory quantum simulation platform for the anisotropy evolution of fluctuation spectra during inflation,
exploiting the fact that the speed of dipolar condensate sound waves depends on direction.
We construct the anisotropic analogue space-time metric governing sound, by linking the time-varying strength of dipolar and contact interactions in the BEC to the scale factors in different coordinate directions. Based on these, we calculate the dynamics of phonon power spectra during an inflation that renders the initially anisotropic universe isotropic.
We find that the expansion speed provides an experimental handle to control and study the degree of final residual anisotropy.
Gravity analogues using dipolar condensates can thus provide tuneable experiments for a field of cosmology that was until now confined to a single experiment, our universe.
Anisotropic Inflation in Dipolar Bose-Einstein Condensates
A. Rana, A. Pendse, S. Wüster, S. Panda
August 1, 2023
Introduction
The cosmological principle, the assumption that our universe is isotropic and homogeneous on the largest length scales, is strongly supported by the isotropic thermal microwave radiation field known as Cosmic microwave background (CMB). But as we zoom in closer, we find several unexpected features <cit.> in the CMB, such as the alignment of lowest multipoles <cit.>, a hemispherical power asymmetry <cit.>, a preference for odd parity modes <cit.> and a large cold spot in the southern hemisphere <cit.>. There are several mechanisms to explain their origin <cit.>, one of which involves primordial breaking of rotational invariance. In that case, anomalies could be the imprints of a space-time anisotropy existing prior to inflation <cit.>.
Theory discussing the evolution of CMB power spectra in an anisotropic inflation <cit.>
can presently be compared with just our one single universe, additionally constrained to small residual asymmetries.
We show that both limitations can be overcome in analogue gravity experiments <cit.> with Bose-Einstein condensates (BEC) of particles with permanent dipoles <cit.>.
Analogue gravity <cit.> evolved from Unruh's seminal discovery of an analogue Hawking effect <cit.>
in a transonic fluid flow <cit.>, arising since quantum sound waves propagate in an effective metric determined by the flow profile.
The latter can give rise to the sonic analog of a black hole event horizon, which has been realised and extensively studied in BEC <cit.>.
Similarly, rotating BEC can furnish analogs of rotating Kerr black holes and the Penrose effect <cit.>, while expanding BEC or those with changing interaction strengths
can mimic expanding universes <cit.> for the study of quantum fields during cosmological inflation. However, only isotropic expanding universes were explored <cit.>.
Our proposal will overcome this limitation, and thus provide the field of cosmology in anisotropic spacetimes with tuneable experiments
to study power spectra after complex inflation sequences, probe the effect of high frequency dispersion <cit.>, initial vacua <cit.>
conversion of inhomogeneities into anisotropies <cit.> or instabilities <cit.>. The dipolar BEC platform will also
enable interdisciplinary exchange with condensed matter and atomic physics communities <cit.>, exploring for example vacuum squeezing <cit.>.
Foundation
In dipolar BEC, the speed of sound c(β) depends on the angle β between propagation direction of phonons and the dipolar axis 𝐝 of the condensate atoms, see sketchb. In the gravitational analogy, this implies that the metric governing the propagation of sound waves acquires a preferred direction.
In BEC this analogue metric can then be tuned from anisotropy to isotropy by control over the contact and dipolar interaction strengths. This exploits Feshbach resonances <cit.>, to adjust the relative strength of s-wave and dipolar interactions <cit.> and time-averaged control of the dipolar interaction strength by rapidly rotating external fields <cit.>. Using both, the direction and degree of anisotropy can be temporally controlled in experiments.
In cosmology, anisotropies prior to inflation would impact the evolution of primordial density fluctuations in the inflaton field δ( k)<cit.>, leading to residual signatures in their power spectrum defined through
⟨δ( k)δ^*( q)⟩ =P( k)δ^3( k- q). Here k, q are wave vectors of fluctuating modes. A violation of rotational invariance during the inflationary era can modify the power spectrum from an isotropic form P(𝐤)=P(k) to an anisotropic one:
P'(𝐤) = P(k) + (k̂·n̂)^2 Δ P(k),
where n̂ is a unit vector along a preferred direction, k̂= 𝐤/|𝐤|<cit.>, and Δ P(k) the amplitude of the anisotropic component.
In our analog universe, made from an expanding dipolar BEC, the power spectrum of phonon vacuum fluctuations also starts anisotropically, and can then be experimentally followed through its evolution
while the universe expands and becomes isotropic. To demonstrate this, we tackle the initial phase of an inflation with direction dependent expansion rates as sketched in sketch, analytically and through simulations, focussing on the retention of anisotropy in fluctuation spectra even at the time where the universe itself has become isotropic.
Anisotropic effective space-time for phonons
The Hamiltonian for a dipolar BEC with atoms of mass m is <cit.>
Ĥ= ∫d^3r Ψ̂^†(𝐫,t)[-ħ^2∇^2/2m +ϕ̂_int(𝐫,t)/2]Ψ̂(𝐫,t),
with interaction operator
ϕ̂_int(r,t)=∫d^3r' Ψ̂^†(r',t)V_int(r-r',t)Ψ̂(r',t),
where V_int(𝐫-𝐫',t)=U(t)δ^(3)(𝐫-𝐫')+U_ dd(𝐫-𝐫',t) includes contact interactions of strength U(t) and long-range dipole-dipole interactions (DDI) U_ dd. For ψ=⟨Ψ̂⟩, the mean field approximation of Heisenberg's equation,
known as Gross-Pitaevskii equation (GPE), is
iħ∂ψ/∂ t=-ħ^2/2m∇^2ψ+(U(t)|ψ|^2+Φ_dd(𝐫,t) )ψ,
with Φ_dd(𝐫,t)=∫|ψ(𝐫',t)|^2 U_ dd(𝐫-𝐫',t) d^3𝐫'.
Using the convolution theorem, the DDI can be expressed as Φ_dd(𝐫,t)= F^-1[Ũ_ dd(k,t)ñ(k,t)], where F denotes a Fourier transform, ñ(k,t)= F[|ψ(𝐫,t)|^2] and
Ũ_ dd(k,t)=U_d(t)(cos^2β(𝐤)-1/3)
the dipole-dipole interaction in Fourier space. Writing U_d(t)=μ_0μ(t)^2, with μ_0 the vacuum magnetic permeability, the dipole moment μ(t) of the atoms <cit.> is assumed
adjustable through external field averaging <cit.>. Here,
β is the angle between excitation wavenumber k and the constant polarization direction 𝐝, which we take as our z-axis.
The contact interaction strength U(t) = 4πħ^2a_s(t)/m is governed by the scattering length a_s(t), which can also be varied in time using Feshbach resonances <cit.>.
Expressing the condensate wavefunction as ψ(𝐫,t)=√(n(𝐫,t))e^iθ(𝐫,t) in eqn:GPE, we obtain two coupled partial differential equations for real variables, density n(𝐫,t) and phase θ(𝐫,t). We then re-instate small fluctuations on top of the mean field as n→ n_0+n̂_1 and θ→θ_0+θ̂_1, where n̂_1 and θ̂_1 are the fluctuations and n_0 and θ_0 are the background density and phase, respectively. Linearizing in n̂_1 and θ̂_1, we can eliminate n̂_1 to obtain an equation for phase fluctuations θ̂_1 of the form
1/√(-g) ∂_μ( √(-g) g^μν ∂_νθ̂_1 ) = 0,
defining an effective anisotropic metric tensor g_μν with
g_μμ = n_0/mc(t)[-c^2(t), a̅^2(t), a̅^2(t), b̅^2(t)]
on the diagonal, and g_μν=0 for μ≠ν. Here c(t)=√(n_0U(t)/m) is a fictitious speed of sound ignoring dipole interactions, while scale factors a̅(t)=[1-U_d(t)/3U(t)]^-1/2 and b̅(t)=[1+2U_d(t)/3U(t)]^-1/2 now incorporate the direction dependence of the true sound speed. We assumed a constant background density n_0, no condensate flow and dominant contact interactions U_d(t)/3U(t) < 1, see supplemental information (SI) <cit.>.
Inflation in metric shall arise dominantly through the time-dependence of contact interactions U(t)=U_0f(t), where U_0 is the interaction strength at t=0 and f(t) specified later. Meanwhile the relative importance of dipolar interactions governs (an)isotropy. Defining c_0^2=n_0U_0/m, the line element in the laboratory frame can then be written as
ds^2 = -c_0^2√(f(t)) dt^2 + a̅^2(t)/√(f(t)) (dx^2+dy^2)+b̅^2(t)/√(f(t)) dz^2.
To see the analogy to cosmology more clearly, we employ the time transformation dη^2=√(f(t)) dt^2 to reach
ds^2 = -c_0^2dη^2 + a^2(η) (dx^2+dy^2)+b^2(η) dz^2.
with a^2(η)=a̅^2(η)/√(f(η)) and b^2(η)=b̅^2(η)/√(f(η)).
Now, we construct an anisotropically expanding analogue inflationary universe, which evolves into an isotropic one and calculate the expected phonon fluctuation power spectrum, starting from an initial vacuum state. For this, we chose a(η)=a_0e^H_aη and b(η)=b_0e^H_bη, with two different (constant) Hubble parameters H_a=ȧ(η)/a(η) and H_b=ḃ(η)/b(η).
We will also refer to the average Hubble parameter H̅=(2H_a+H_b)/3 and deviation from dynamic isotropy as ϵ_H=2(H_b-H_a)/3H̅.
Together, our ansatz U(t)=U_0f(t) for the time variation of s-wave interactions and the target evolution of anisotropic scale factors, a(η) and b(η), now fix the relation between conformal time and laboratory time and required form of U_d(t)=μ_0μ_m^2 h(t)/4π with h(0)=1, as shown in the SI.
Power spectrum of fluctuation correlations
A key observable that can record the imprint of a possible anisotropy in the early universe is the fluctuation power spectrum, the analogue of which we propose to experimentally probe in tuneable experiments with dipolar BEC.
Here we define the power spectrum through P()=⟨â^†_â_⟩,
as vacuum expectation value of plane wave modes of the phase fluctuation field
θ̂_1(𝐫,t) = ∫d^3𝐤/(2π)^3(e^i 𝐤·𝐫 θ̃_1(𝐤,t) â_𝐤 + e^-i 𝐤·𝐫 θ̃^*_1(𝐤,t) â^†_𝐤). P(𝐤) can also be found through the Fourier transform of the phase correlation function, see SI. Condensate phase correlations can be measured through interference experiments <cit.>, or phase fluctuations could first be related to density fluctuations <cit.>.
Then high resolution density-density correlations can be recorded in experiments <cit.>.
Inserting θ̂_1 into eqn:field_equation, the metric metric_labtime implies
∂^2θ̃_1/∂ t^2+γ(t)∂θ̃_1/∂ t
+ω(t)^2θ̃_1 = 0,
the equation of motion of a damped harmonic oscillator with time-dependent frequency
ω(t)=(𝐤^2 n_0 𝒬/m)^1/2 and damping rate γ(t)= 𝒬(∂𝒬/∂ t),
using 𝒬=𝒬(𝐤,t)=-U(t)-U_d(t)[cos^2β(𝐤)-(1/3)].
We convert bec_fluc_eqn into the equivalent Hamilton equations, to obtain complex mode amplitudes θ̃_1(𝐤 ,t), from which we can obtain P(𝐤,t)=|θ̃_1(𝐤 ,t)|^2, see SI.
Initial conditions are found from the Bogoliubov equations of the initial dipolar BEC, and the resultant k^3P(k) is shown in power_spectra_fig.
For the demonstration, we assume a dipolar BEC of Erbium atoms <cit.>, each of mass m=2.8×10^-25 kg, with initial magnetic dipolar moment μ_m = 1.897μ_B already reduced compared to the usual μ̅_m = 7μ_B, where μ_B is the Bohr magneton.
The initial modified s-wave scattering length is a_s= 0.599 nm and the homogeneous density n_0=5×10^20 m^-3, yielding an initial healing length ξ_0=0.364μm. The inflationary parameters are taken as a_0=1.225, b_0=0.775, H_a=(200/q) s^-1 and H_b=(658/q)s^-1, where the factor q just scales the expansion rate.
For these choices, the metric becomes isotropic at a lab time tiso=q× 1.25 ms.
The evolving power spectrum thus obtained is shown in power_spectra_fig for the case of q=10. Fourier components of correlations in different directions have different strength initially, a signature of an anisotropic Bogoliubov vacuum. As the analogue universe expands, it also becomes more isotropic, since the two scale factors approach each other. Consequently the power spectrum changes from strongly anisotropic to nearly isotropic. At
t=tiso, shown in panel power_spectra_fig(d), small imprints of the initial anisotropy still remain, although the metric has become isotropic.
Experiments could naturally handle much more extreme inflation sequences than the one here, and probe additional topics actively explored in cosmology, such as unstable modes <cit.> and the conversion of inhomogeneity into anisotropy <cit.>.
Beyond mean field simulations
To confirm the analogue model, we numerically simulate the same inflation with the Truncated Wigner Approximation (TWA)
<cit.>, which can provide the quantum field evolution from DBEC_Hamiltonian as long as fluctuations remain small. Unlike the analytical calculations, these simulations also describe BEC excitations with wavenumbers kξ(t)>1 for which the analogy does not hold. They further would cover particle creation <cit.>, which is absent here, and can verify the dynamical stability of the mean field background on time-scales of interest.
In TWA, one generates an ensemble of stochastic fluctuations added to the mean field, to sample the Wigner quasi-distribution function of the initial density operator. The quantum field dynamics is then found from noisy GPE simulations. We extract the power spectrum from phase fluctuation correlation functions via
P(k,t)=∫ d^3𝐫_0 ∫ d^3𝐫'⟨θ̂_1(𝐫_0,t)θ̂_1(𝐫_0+𝐫',t)⟩ e^-i 𝐤·𝐫' /V, as discussed in the SI <cit.>, and use the same parameters as before, in a cubic box of volume V=(50μm)^3 with (64)^3 gridpoints and Ntraj=5120 stochastic trajectories.
TWA power spectra confirm our analytical results, as shown in power_spectra_fig, and thus verify that there is no disturbing effect of
single particle excitations at high wavenumbers and that dynamic instabilities of the mean field are absent. These would only occur in dipolar BEC for larger dipolar interaction strength <cit.>.
Analog gravity has thus allowed us to map isotropisation during cosmic inflation to continuous variations of a many-body Hamiltonian.
The slower the Hamiltonian changes, the better the system will be able to adiabatically follow the quantum ground-state. The latter will be isotropic for an isotropic system, unless there is spontaneous symmetry breaking. We thus expect final power spectra to be more isotropic at t=tiso for slow evolution (large q). This is indeed what we find, as shown in power_spectra_results. We have defined the net anisotropy of a spectrum as A(q,t)=[P̅_z-P̅_x]/P̅_z, with P̅_j=∫_0^kmax dk_j k_j P(k_j,t), where the upper integration limit is the largest wavenumber containing noise in TWA, kmax=0.94μm^-1 for power_spectra_results. The figure also shows more detailed cuts through power-spectra from TWA in the (k_x,k_z) plane, illustrating that |𝐤| P only depends on β initially (see SI), which is why we have chosen it as integrand for A(q,t). During inflation, the function |𝐤| P then acquires nontrivial structure, shown in power_spectra_resultsc.
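For reference, the net anisotropy A(q,t) can be evaluated from one-dimensional cuts of the spectrum as in the following sketch; the spectra used in the usage example are made up and serve only to illustrate the quadrature.

import numpy as np

def net_anisotropy(k, P_x, P_z, k_max):
    # A = (Pbar_z - Pbar_x)/Pbar_z with Pbar_j = int_0^kmax dk k P(k) along axis j.
    m = k <= k_max
    Pbar_x = np.trapz(k[m] * P_x[m], k[m])
    Pbar_z = np.trapz(k[m] * P_z[m], k[m])
    return (Pbar_z - Pbar_x) / Pbar_z

# illustrative usage with made-up spectra (wavenumbers in micrometers^-1)
k = np.linspace(0.05, 2.0, 200)
P_x, P_z = 1.0 / (1 + k**2), 1.2 / (1 + k**2)
print(net_anisotropy(k, P_x, P_z, k_max=0.94))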
An important dynamical scale during cosmic inflation is the Hubble radius R_h(t) = c̅(t)/H̅∼ q (Hubble wavenumber K_h=R_h^-1). Only modes with wavelengths λ<R_h(t) will be oscillating, while those with λ>R_h(t) freeze out <cit.>. The latter are situated on the left of the vertical blue dotted lines in power_spectra_fig, but would contain most modes shown for the lower q. Meanwhile, the analog metric metric_labtime only describes long wavelength modes with k ξ(t)< 1, to the left of the magenta dashed vertical line in power_spectra_figd, and at larger k in other panels. We thus demonstrated that one can study both, frozen and unfrozen modes, with wavenumbers for which the analogy is valid. Dipolar BEC can have lifetimes of a few hundred milliseconds even while tuning interactions <cit.>, and all chosen isotropisation times tiso are shorter.
Conclusions and outlook
We have shown that dipolar Bose-Einstein condensates can provide an experimental window on the dynamics of quantum fields during anisotropic cosmological inflation, which was hitherto experimentally inaccessible, except for observations of our one single universe. Thus one can probe different residual anisotropies after a given inflation sequence, conversion of inhomogeneities into anisotropies <cit.>, instabilities <cit.>
or mode squeezing <cit.>. If the condensate is given a finite flow velocity, the same experimental platform can also create analogue black holes in anisotropic space-times. By tuning the initial fluctuations, we can explore the analog of primordial gravitational waves and how these would later reflect an initial anisotropy of the universe, motivated by the prediction <cit.> that the detection of gravitational waves in the 10-100 MHz regime would solidify the occurrence of anisotropic inflation. Instead of dipolar BEC, anisotropic analogue space-times could also be engineered using spin-orbit coupling <cit.>, and even more tunability might arise from combining the two.
We gratefully acknowledge financial support from the Max-Planck society under the MPG-India partner group program
and helpful comments from Rejish Nath. S.P. would like to thank DST (Govt. of India) for the financial support under Grant No. SERB/PHY/2021057.
§ SUPPLEMENTARY MATERIAL
§.§ Derivation of the anisotropic metric
To investigate dipolar BEC, consider the Gross-Pitaevskii (GP) equation for the 3+1-D case as
iħ∂ψ/∂ t=-ħ^2/2m∇^2ψ+(V_ ext+U|ψ|^2+Φ_dd)ψ,
where Φ_dd is the dipolar mean field interaction
Φ_dd(𝐫,t)=∫|ψ(𝐫',t)|^2 U_ dd(𝐫-𝐫') d^3𝐫'.
Using the convolution theorem, app:GPE becomes
iħ∂ψ(𝐫,t)/∂ t=-ħ^2/2m∇^2ψ(𝐫,t) +U|ψ(𝐫,t)|^2ψ(𝐫,t) +U_d(∫d^3𝐤/(2π)^3 e^i𝐤·𝐫 f(𝐤) ñ(𝐤,t))ψ(𝐫,t).
Here, U=4πħ^2a_s/m where a_s is the s-wave scattering length and m is the mass of particles constituting the BEC. The dipolar interaction strength is U_d=μ_0μ^2 where μ is the dipole moment of the BEC particles and μ_0 the permeability of the vacuum. The function ñ(𝐤,t) denotes the Fourier transform of the atomic number density |ψ(𝐫,t)|^2. The interaction kernel f(𝐤) is given by
f(𝐤)=3(𝐤̂·𝐝̂)^2-1/3=3cos^2β-1/3,
where β is the angle between the wavevector direction 𝐤̂=𝐤/|𝐤| and the dipole axis 𝐝̂=𝐝/|𝐝|, which we choose to define the z-axis and keep constant.
Now, to obtain the metric from GPE_explicit, we use the Madelung ansatz for the wavefunction ψ(𝐫,t)=√(n(𝐫,t)) e^iθ(𝐫,t) to derive evolution equations for n(𝐫,t) and θ(𝐫,t) as
∂ n/∂ t=-ħ/m[(∇ n)·(∇θ)+n∇^2θ],
∂θ/∂ t=-ħ/2m(∇θ)^2-Un/ħ -U_d/ħℱ^-1[f(𝐤) ñ],
where ℱ^-1 denotes the inverse Fourier transform and we have omitted the arguments of n and θ for the purpose of compactness.
Next, we wish to obtain equation for fluctuations about the mean field and replace n and θ as n→ n_0+n_1 and θ→θ_0+θ_1, where n_1 and θ_1 are small amplitude fluctuations. We thus focus on fluctuations around the mean field √(n_0)e^iθ_0. Further, we assume that the mean field has no flow velocity associated with it, ∇θ_0=0, and that the mean density is constant over space, ∇ n_0=0. Both are well satisfied near the centre of a large BEC in the Thomas-Fermi limit. With these assumptions and linearization in the small amplitude fields n_1 and θ_1, we turn eqn:nt_evo_eqn1eqn:nt_evo_eqn2 into
∂ n_1/∂ t=-ħ/m[n_0∇^2θ_1],
∂θ_1/∂ t=-Un_1/ħ-U_d/ħℱ^-1[f(𝐤) ñ_1].
Taking the Fourier transform w.r.t. spatial dimensions of the above equation yields
∂ñ_1/∂ t=ħ/m[n_0 (k_x^2+k_y^2+k_z^2) θ̃_1],
∂θ̃_1/∂ t=-Uñ_1/ħ-U_d/ħ[f(𝐤) ñ_1],
using the short hand θ̃_1 and ñ_1 for Fourier space fluctuations.
We can formally solve dtheta1_dt for
ñ_1 =∂θ̃_1/∂ t×{- U/ħ-U_d/ħ[f(𝐤) ] }^-1,
and insert this into dn1_dt, using fkernel to find:
∂/∂ t(∂θ̃_1/∂ t×{-U-U_d[-k_x^2-k_y^2+ 2 k_z^2/3𝐤^2] }^-1)
-n_0/m[ 𝐤^2 θ̃_1] = 0,
which is an equation for the phase fluctuations alone.
To obtain the metric, we compare eqn:comp_field_eqn with the Fourier transform of eqn:field_equation and see
g_μν = n_0/mc(t)[ -c^2(t) 0 0 0; 0 a̅(t)^2 0 0; 0 0 a̅(t)^2 0; 0 0 0 b̅(t)^2 ],
with a̅(t), b̅(t) defined as
1/a̅(t)^2=(1-U_d(t)/3 U(t)), 1/b̅(t)^2=(1+2 U_d(t)/3 U(t)).
Whenever the dipole interactions are absent and thus U_d=0, the metric is isotropic as expected.
§.§ Anisotropic analogue inflation in BEC
We can rewrite the metric metric1
using c^2(t)=n_0U(t)/m, inserting the parametrisation of time dependent contact interactions U(t)=U_0f(t) and definitions c_0^2=n_0U_0/m and Ω_0^2=√(n_0/mU_0) as
g_μν = Ω_0^2
[ -c_0^2√(f(t)) 0 0 0; 0 a̅(t)^2/√(f(t)) 0 0; 0 0 a̅(t)^2/√(f(t)) 0; 0 0 0 b̅(t)^2/√(f(t)) ],
from which we remove the conformal factor Ω_0^2 with the definition g_μν = Ω_0^2 g̃_μν and then express the
line element in terms of g̃_μν
ds^2 = -c_0^2√(f(t)) dt^2 + a̅(t)^2/√(f(t)) (dx^2+dy^2)+b̅(t)^2/√(f(t)) dz^2 .
Now, we re-define the time-coordinate as
dη^2=√(f(t)) dt^2,
and write the line element in the new coordinates as,
ds^2 = -c_0^2dη^2 + a^2(η) (dx^2+dy^2)+b^2(η) dz^2.
For the analogue inflationary universe to expand anisotropically, we take a(η)=a_0e^H_aη and b(η)=b_0e^H_bη where H_a=ȧ(η)/a(η) and H_b=ḃ(η)/b(η), U(t)=U_0f(t) and U_d(t)=μ_0μ_m^2h(t), where f(t) and h(t) contain the time dependent part of contact and dipolar interactions respectively.
Using these relations, we reach
f(t)={[(2/a_0^2) e^-2H_aη(t)+(1/b_0^2)e^-2H_bη(t)]/3}^2
and
h(t)= [(2/a_0^2) e^-2H_aη(t)+(1/b_0^2)e^-2H_bη(t))]
×[(-1/a_0^2) e^-2H_aη(t)+(1/b_0^2)e^-2H_bη(t)]/3.
Now using time_transf and def_ft, we find the relation between transformed time η and lab time t, which we express in the form η(t)=∑_j=1^l c_j t^j where the coefficients c_j depend on Hubble parameters H_a,b and thus on the inflation rate control parameter q. This dependence arises since f(t), h(t) depend on H_a and H_b, which in turn depend on q. From η(t) we can insert def_ft and def_ht into U(t)=U_0f(t) and U_d(t)=μ_0μ_m^2h(t) to generate a target inflationary scenario.
We have now provided a complete recipe for tuning the interactions such that one obtains an anisotropically expanding universe in dipolar BEC.
The same recipe can also be used to implement a different functional form for scale factors in conformal time than the one assumed above.
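A possible numerical implementation of this recipe is sketched below: it integrates dη/dt = f^{1/4}(η) for the laboratory-time map η(t) and then evaluates f(t) and h(t). The parameter values follow the ones quoted in the main text for q = 10; the function names are our own.

import numpy as np
from scipy.integrate import solve_ivp

a0, b0 = 1.225, 0.775
q = 10.0
Ha, Hb = 200.0 / q, 658.0 / q                 # conformal-time Hubble rates [1/s]

def sqrt_f(eta):
    # sqrt(f) = [ (2/a0^2) e^{-2 Ha eta} + (1/b0^2) e^{-2 Hb eta} ] / 3, so f(0) = 1.
    return ((2 / a0**2) * np.exp(-2 * Ha * eta) + (1 / b0**2) * np.exp(-2 * Hb * eta)) / 3

def h_of(eta):
    # time-dependent part of the dipolar interaction, Eq. (def_ht), with h(0) = 1.
    return ((2 / a0**2) * np.exp(-2 * Ha * eta) + (1 / b0**2) * np.exp(-2 * Hb * eta)) \
         * ((-1 / a0**2) * np.exp(-2 * Ha * eta) + (1 / b0**2) * np.exp(-2 * Hb * eta)) / 3

# d eta / dt = f(t)^{1/4} = sqrt(sqrt_f(eta)); integrate to get eta(t) in lab time.
t_iso = q * 1.25e-3                           # lab time at which the metric is isotropic [s]
sol = solve_ivp(lambda t, eta: [np.sqrt(sqrt_f(eta[0]))], (0.0, t_iso), [0.0],
                dense_output=True, rtol=1e-8)

t = np.linspace(0.0, t_iso, 5)
eta = sol.sol(t)[0]
print("f(t) =", sqrt_f(eta) ** 2)             # contact-interaction modulation U(t)/U0
print("h(t) =", h_of(eta))                    # dipolar-interaction modulation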
§.§ Evolution of the power spectrum
The power spectrum is defined through the correlations of the field operators ⟨0|θ̂_1(𝐫,t)θ̂_1(𝐫',t)|0⟩. Since we are considering a homogenous system,
these can only depend on the relative coordinate 𝐫 -𝐫', and thus Fourier transform as
⟨0|θ̂_1(𝐫,t)θ̂_1(𝐫',t)|0⟩=∫d^3𝐤/(2π)^3 e^-i 𝐤· (𝐫 -𝐫')P(𝐤).
The free scalar field of phase fluctuations θ_1(,t) is quantized as usual by expanding in Fourier
modes,
θ̂_1(𝐫,t) = ∫d^3𝐤/(2π)^3(e^i 𝐤·𝐫 θ̃_1(𝐤,t) â_𝐤 + e^-i 𝐤·𝐫 θ̃^*_1(𝐤,t) â^†_𝐤) .
The creation and annihilation operators â^†_𝐤 and â_𝐤 satisfy the standard Bosonic commutation
relations.
In the following, we calculate the cosmologically relevant quantity k^3P(k), for which results are shown in power_spectra_fig of the main text.
First we use eq:FourierDef and the metric in lab_frame_metric in eqn:field_equation to reach
∂^2θ̃_1/∂ t^2+γ(t)∂θ̃_1/∂ t
+ω(𝐤,t)^2θ̃_1 = 0 ,
which is analogous to the equation of motion of a damped harmonic oscillator with time-dependent frequency
ω(𝐤,t)=(n_0 𝒬𝐤^2/m)^1/2 and damping rate γ(t)= 𝒬(∂𝒬/∂ t)
where
𝒬=𝒬(𝐤,t)=-U(t)-U_d(t)[cos^2β(𝐤)-(1/3)].
To seek complex solutions of bec_fluc_eqn_SI, we first convert it to the equivalent Hamiltonian equations of motion:
q̇(t)=p(t), ṗ(t)=-γ(t)p(t)-ω(𝐤,t)^2q(t),
from which we can construct complex mode amplitudes θ̃_1(𝐤,t)=q(𝐤 ,t)+i p(𝐤 ,t)/ω(t).
In the vacuum â_𝐤|0⟩=0, we then have
P(𝐤,t)=|θ̃_1(𝐤,t)|^2. We solve hamiltonian_eoms numerically with initial conditions θ̃(0) matched onto the Bogoliubov vacuum of the initial state of the dipolar BEC, discussed in the next section, and p(0)=0.
The results obtained for modes along the x and the z axis are shown in power_spectra_fig .
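The following Python sketch illustrates this procedure for a single mode. It integrates the equivalent first-order system d/dt[θ̇/𝒬] = -(n_0 k^2/m) θ̃_1 (our reading of bec_fluc_eqn_SI, with 𝒬 taken as U(t)+U_d(t)[cos^2β-1/3] up to an overall sign), starting from the Bogoliubov vacuum value derived in the next section. The simple exponential ramps stand in for the inflation recipe f(t), h(t) and are purely illustrative.

import numpy as np
from scipy.integrate import solve_ivp

hbar, mu0, muB = 1.054571817e-34, 4e-7 * np.pi, 9.2740100783e-24
m, n0 = 2.8e-25, 5e20                         # Er mass [kg] and density [m^-3] from the text
a_s, mu_m = 0.599e-9, 1.897 * muB
U0  = 4 * np.pi * hbar**2 * a_s / m           # initial contact coupling
Ud0 = mu0 * mu_m**2                           # initial dipolar coupling (main-text convention)

# Placeholder ramps f(t), h(t); the inflation recipe of the SI would be substituted here.
tau = 5e-3
f = lambda t: np.exp(-t / tau)
h = lambda t: np.exp(-t / tau)

def Q_eff(k, beta, t):
    # coupling entering the phonon dispersion, our reading of Q(k,t) up to an overall sign
    return U0 * f(t) + Ud0 * h(t) * (np.cos(beta)**2 - 1.0 / 3.0)

def power_spectrum(k, beta, t_final):
    # Integrate d/dt[theta_dot/Q] = -(n0 k^2/m) theta, equivalent to bec_fluc_eqn_SI,
    # from the Bogoliubov vacuum P(k,0) = eps_k/(2 n0 E_k) with theta_dot(0) = 0.
    Ek = hbar**2 * k**2 / (2 * m)
    eps_k = np.sqrt(Ek * (Ek + 2 * n0 * Q_eff(k, beta, 0.0)))
    y0 = [np.sqrt(eps_k / (2 * n0 * Ek)), 0.0]

    def rhs(t, y):
        q, s = y                              # s = theta_dot / Q
        return [Q_eff(k, beta, t) * s, -(n0 * k**2 / m) * q]

    sol = solve_ivp(rhs, (0.0, t_final), y0, rtol=1e-9, atol=1e-30, max_step=tau / 200)
    q, s = sol.y[:, -1]
    omega = np.sqrt(n0 * Q_eff(k, beta, t_final) * k**2 / m)
    return np.abs(q + 1j * Q_eff(k, beta, t_final) * s / omega) ** 2

k = 1.0e6                                     # |k| = 1 um^-1
for beta in (0.0, np.pi / 2):                 # along and perpendicular to the dipole axis
    print(f"beta = {beta:.2f}:  k^3 P = {k**3 * power_spectrum(k, beta, 1e-3):.3e}")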
§.§ Truncated Wigner simulations
Here we describe how correlations of phase fluctuations can be obtained from TWA averages. We start from the Bose field operator written as a sum of mean field and quantum fluctuations. In the Madelung ansatz, Ψ̂(𝐫,t)= √(n_0+n̂_1(𝐫,t)) e^i(θ_0+θ̂_1(𝐫,t)), where n̂_1 and θ̂_1 represent density and phase fluctuations respectively. Assuming that fluctuations are small compared to the mean field, the field operator and consequently, the fluctuations may be written as
Ψ̂(𝐫,t) =Ψ_0+δΨ̂(𝐫,t)
=√(n_0)+√(n_0)(n̂_1(𝐫,t)/2n_0+iθ̂_1(𝐫,t)),
n̂_1(𝐫,t) =√(n_0)(δΨ̂(𝐫,t)+δΨ̂^†(𝐫,t))
θ̂_1(𝐫,t) =i/2√(n_0)(-δΨ̂(𝐫,t)+δΨ̂^†(𝐫,t)) .
With this form of the phase fluctuations θ̂_1 we can write the phase correlations as
⟨θ̂_1(𝐫,t)θ̂_1(𝐫+𝐫',t)⟩= 1/4n_0(-⟨δΨ̂(𝐫,t)δΨ̂(𝐫+𝐫',t)⟩ + ⟨δΨ̂(𝐫,t)δΨ̂^†(𝐫+𝐫',t)⟩+ ⟨δΨ̂^†(𝐫,t)δΨ̂(𝐫+𝐫',t)⟩ -⟨δΨ̂^†(𝐫,t)δΨ̂^†(𝐫+𝐫',t)⟩).
We know that truncated Wigner averages ⋯_W provide an approximation for symmetrically ordered expectation values
of field operators:
⟨α^*(𝐫,t) α(r',t) ⟩ _W =(⟨Ψ̂^†(r,t) Ψ̂( r',t) + Ψ̂( r',t)Ψ̂^†(r,t) ⟩)/2.
Hence, the correlation of phase fluctuations can be expressed in terms of TWA averages in position space as
⟨θ̂_1(𝐫,t)θ̂_1(𝐫+𝐫',t)⟩=
1/4[- ⟨α(𝐫,t)α(𝐫+𝐫',t)⟩ _W/⟨α(𝐫,t) ⟩ _W⟨α(𝐫+𝐫',t) ⟩ _W + ⟨α(𝐫,t)α^*(𝐫+𝐫',t)⟩ _W/⟨α(𝐫,t) ⟩ _W⟨α^*(𝐫+𝐫',t) ⟩ _W
+⟨α^*(𝐫,t)α(𝐫+𝐫',t)⟩ _W/⟨α^*(𝐫,t) ⟩ _W⟨α(𝐫+𝐫',t) ⟩ _W -⟨α^*(𝐫,t)α^*(𝐫+𝐫',t)⟩ _W/⟨α^*(𝐫,t) ⟩ _W⟨α^*(𝐫+𝐫',t) ⟩ _W],
where ⟨α(𝐫,t)⟩ _W= √(n_0). Since we consider a homogeneous system, these correlation do not depend on 𝐫 and we average over that coordinate to increase statistics.
The power spectrum using the 3D correlation function is written as
P(,t)=∫⟨θ̂_1(𝐫,t)θ̂_1(𝐫+𝐫',t)⟩ e^-i·𝐫d𝐫,
where the integrand is given by eq:phase_corr2.
In our numerical TWA implementation, we initialize the stochastic fields by adding noise to the mean field. The noise is added in the Bogoliubov mode basis and the stochastic field α(𝐫,t) is initialized as
α(𝐫,t)=√(n_0)+1/√(V)∑_𝐤,k<kmax( β_k u_k e^ik·𝐫 + β^*_k v_k e^-ik·𝐫),
with k=|𝐤|, where √(n_0) is the uniform initial wavefunction of BEC.
Here kmax=K/2 is the largest wavenumber for which we add noise, chosen less than the maximum K allowed by our Fourier domain, to avoid aliasing.
The quantum fluctuations are captured by β_k which are random numbers satisfying the relation ⟨β_k⟩=0, ⟨β_qβ_k⟩=0 and ⟨β^*_qβ_k⟩=δ_q,k, where δ_q,k is the Kronecker delta.
In the simulation, the wavefunction is initialized at t=0, and we can write
⟨α(𝐫,0)⟩_W=√(n_0),
E_k = ħ^2k^2/2m, ϵ_k=(ħ k/√(2m))√(E_k+2n_0[U_0+(U_d(0)/3)(3cos^2β-1)]), u_k=(E_k+ϵ_k)/(2√(ϵ_kE_k)),
v_k=(E_k-ϵ_k)/(2√(ϵ_kE_k)).
Using eq:phase_corr2-eq:initial_rel we can analytically find the initial power spectrum as
P(k,t=0) =1/(4n_0)[2(u_k-v_k)^2]
=ϵ_k/(2n_0E_k),
which is also used to determine initial conditions for the analytical solutions of bec_fluc_eqn.
|
http://arxiv.org/abs/2307.00608v1
|
20230702162747
|
On a methodology to determine whether the fluid slips adjacent to a solid surface
|
[
"Josef Málek",
"Kumbakonam R. Rajagopal"
] |
physics.flu-dyn
|
[
"physics.flu-dyn",
"math-ph",
"math.MP",
"76A02, 76D05, 35B30"
] |
On a methodology to determine whether the fluid slips adjacent to a solid surface
J. Málek acknowledges the support of the project No. 23-05207S financed by the Czech Science
Foundation (GA ČR). J. Málek is a member of the Nečas Center for Mathematical Modelling. K R. Rajagopal thanks the Office of Naval Research for its support of this work.
J. Málek
Charles University, Faculty of Mathematics and Physics, Mathematical institute, Sokolovská 83, 18675 Prague 8, Czech Republic
[email protected]
K. R. Rajagopal
Department of Mechanical Engineering,
Texas A&M University, College Station, TX 77845 USA
[email protected]
We discuss a methodology that could be gainfully exploited using easily measurable experimental quantities to ascertain if the “no-slip" boundary condition is appropriate for the flows of fluids past a solid boundary.
August 1, 2023
§ INTRODUCTION
The imprimatur of Stokes popularized the assumption of the “no-slip"[The notion of “no-slip" at a solid boundary can be traced back to Daniel Bernoulli in Hydrodynamica.] boundary condition for the flow of a Navier-Stokes fluid adjacent to a solid boundary at the point of contact, even though Stokes was far from sanguine about its aptness, for Stokes <cit.> remarks: “Du Buat found by experiments that when the mean velocity of water flowing through a pipe is less than one inch in a second, the water near the inner surface of the pipe is at rest." The speed at which the fluid is flowing, which Stokes is referring to, is very slow, and Stokes' opinion was that probably in such exceedingly slow flows the “no-slip" boundary condition for the fluid adjacent to a solid
boundary might be appropriate. In fact, Stokes also remarks: “The most interesting questions connected with this subject require for their solution a knowledge of the conditions which must be satisfied at the surface of a solid in contact with the fluid, which, except in the case of very small motions, are unknown." The key part of the above quote from Stokes “…except in the case of very small motions, are unknown." makes it abundantly clear that Stokes was far from convinced that the “no-slip" boundary condition for a fluid flowing past a solid boundary at the point of contact was felicitous in general flows.
To determine experimentally whether the fluid slips, or does not slip at the boundary is challenging. While one could possibly make measurements very close to the boundary, to get data immediately adjacent to the boundary
is exceedingly difficult. There are numerous experiments, see for example <cit.> amongst many others, that determine the slip for the fluids at the boundary. Moreover, the “no-slip" boundary condition does not seem to be applicable to gases. It is also well known that the “no-slip" condition is not applicable for the flows of several polymers adjacent to solid boundaries, see <cit.>.
While our discussion in the paper is mainly concerned with the flows of Navier-Stokes fluids, in order to understand the main thesis of the work, it is best to set our discussion within the context of a larger class of fluids.
The main justification for the “no-slip" assumption is supposedly the accordance of experimental predictions for a wide range of problems. That being said, the “no-slip" condition is an assumption, no more and no less, and it is not
based on physical considerations. In fact, the early
savants, such as Coulomb, Girard, Navier, Poisson, Prony and others had competing hypotheses based on physical considerations, see Goldstein <cit.>.
In view of the “no-slip" condition merely being an assumption, three questions immediately offer themselves:
* Can we obtain solutions for flow problems which involve solid boundaries wherein we refrain from making the “no-slip" assumption?
* Based on the above solution, can we assess the validity of the “no-slip" boundary condition?
* What could have prompted the eager acceptance of the “no-slip" condition by both fluid dynamicists and mathematicians?
In this short paper, we address all these questions. Let us address the last question first.
§ PRELIMINARY CONSIDERATIONS
A fluid is referred to as a Stokesian fluid if the Cauchy stress 𝐓 is a function of the density ϱ and the symmetric part of the velocity gradient 𝐃, given by
𝐓 = 𝐟(ϱ, 𝐃) ,
where
𝐃 :=1/2[∇v⃗ + (∇v⃗)^T], v⃗ being the velocity.
We notice that the Navier-Stokes fluids and power-law fluids are subclasses of Stokesian fluids. In the case of incompressible Stokesian fluids, the constitutive relation takes the form
𝐓 = -p𝐈 + 𝐠(𝐃) .
We notice that (<ref>) is a special subclass of implicit constitutive relations of the form (see Rajagopal <cit.>)
𝐟̃ (ϱ, 𝐓, 𝐃) = 0 ,
the incompressible counterpart being
𝐓 = -p𝐈 + 𝐠̃(𝐓, 𝐃) .
Classification of incompressible fluids described by the implicit constitutive equations can be found in <cit.>.
The Navier-Stokes fluids are characterized by the linear relation between the Cauchy stress 𝐓 and 𝐃.
More precisely, the constitutive equation for the compressible and the incompressible Navier-Stokes fluid are expressed, respectively, through
𝐓 = -p(ϱ)𝐈 + λ(ϱ)(tr𝐃)𝐈 + 2μ (ϱ)𝐃 ,
and
𝐓 = -p𝐈 + 2μ𝐃.
where in the former case ϱ is the density, p(ϱ) is the thermodynamic pressure, μ(ϱ) is the shear viscosity and λ(ϱ) is referred to as the second coefficient of viscosity (3λ + 2μ is referred to as bulk modulus/bulk viscosity), while in the latter case p is the (constitutively) indeterminate part of the stress due to the constraint of incompressibility and μ is the constant shear viscosity.
For the sake of illustrating the answer to the third question above, let us consider (<ref>). On substituting (<ref>) into the balance of linear momentum
ϱdv⃗/dt = div𝐓 + ϱb⃗ ,
where b⃗ is the specific body force, and taking into cognizance that the fluid is incompressible, we obtain
divv⃗ = 0 ,
ϱdv⃗/dt = - ∇ p + μΔv⃗ + ϱb⃗ ,
for the two unknowns v⃗ and p. It would thus seem natural to prescribe the boundary conditions for the velocity v⃗. Taking the curl of equation (<ref>), one obtains, in the case of a conservative body forces, a partial differential equation of higher order in the velocity, but the pressure is eliminated and we are left with the equation for the velocity. In problems where free surfaces are involved, we prescribe the condition on the traction at the free surface which in turn is given in the terms of the derivative of v⃗. However, when a rigid wall is involved, the condition that is enforced is the
“no-slip" condition, and prescribing the velocity seems the natural thing to do as only velocity appears in the governing equation. Also, early studies in fluid mechanics invariably appeal to semi-inverse methods where the investigators resorted to using special forms for the velocity and for the pressure; these special forms lead to ordinary differential equations for the velocity components of second or higher order (when the pressure is eliminated), which naturally calls for specification of the velocity values on the boundary. This seems wrong headed for a couple of reasons. The first, from a philosophical point of view, namely prescribing stresses in terms of kinematics as in (<ref>) is a prescription of the “cause" in terms of the “effect", turning causality topsy-turvy, and second it misses the opportunity to test the aptness of the “no-slip" boundary condition. The rest of this short paper is dedicated to discussing the second issue.
If we were to solve the system (<ref>) and (<ref>), in general we could not substitute an expression for 𝐓 in terms of ∇v⃗ into (<ref>) and obtain an equation that involves ∇ p and the second derivative of the velocity; we would have to solve (<ref>) and (<ref>) simultaneously. Thus, it would not be natural to seek a condition for the velocity additional to impermeability, that is v⃗·n⃗=0. Moreover, solving (<ref>) and (<ref>) entails only first derivatives of the velocity.
Instead of considering (<ref>) and (<ref>), we confine ourselves to the system (<ref>) and (<ref>) applicable to the Navier-Stokes equation and show how studying the system without assuming the “no-slip" boundary condition allows us to evaluate its validity concerning whether the slip has to take place, thereby answering the first two questions.
It is interesting to note that the incompressible Navier-Stokes model can be expressed as a subclass of (see Málek et al. <cit.>)
𝐃 = 𝐟̂(𝐓^δ), where 𝐓^δ := 𝐓 - 1/3(tr𝐓)𝐈,
an expression that meets the requirements of causality. In such a case, one would have to solve the balance of linear momentum and the constitutive relation simultaneously and not think in terms of merely specifying boundary conditions for the velocity.
§ ILLUSTRATIVE EXAMPLES
We will consider five very simple problems to illustrate that the solution to these problems can be obtained without resorting to enforcing the “no-slip" boundary condition, and moreover the methodology of obtaining the solution immediately provides a very simple and ingenious approach to determining whether there is slip!
§.§ Poiseuille flow in a pipe
We study the flow of a Navier-Stokes fluid in a cylindrical pipe of infinite length and assume that b⃗= 0⃗ (no gravity) and v⃗ = v(r) e⃗_z. Then, as a consequence of the constitutive equation (<ref>) we get 𝐓 = -p𝐈 + T_rz(e⃗_r ⊗e⃗_z + e⃗_z ⊗e⃗_r) where T_rz = T_rz(r). The governing equations (<ref>) also imply that p=p(z).
In the classical approach, the governing equations then reduce to only one scalar equation, namely
p' = μ( v” + 1/r v') .
Note that p= p(z) and v = v(r) and p' means the derivative of p with respect to z, while v' and v” denote the derivatives of v with respect to r. As p=p(z) and v = v(r), one observes that p' = c, where c is a constant (that is negative so that the fluid flows from from a region of higher pressure to one that is of lower pressure), and
v” + 1/r v' = c/μ ,
which implies that
v(r) = cr^2/4μ + c_1 ln r + c_2 . (c_1,c_2 are constants)
The requirement that the velocity is bounded at r=0 implies that c_1 = 0. Finally, the requirement that the fluid exhibits “no-slip" at the wall r=R implies that c_2 = - cR^2/4μ. Hence, (<ref>) can be expressed as
v(r) = - c(R^2 - r^2)/4μ.
In the alternate approach that we advocate, the structure of the quantities v⃗, p and is the same as above. This time, however, the balance of linear momentum (<ref>) is used, which gives (let us stress again that p= p(z), while v = v(r) and T_rz = T_rz(r) and p' means the derivative of p with respect to z, while T_rz' and v' denote the derivative of T_rz and v with respect to r):
-p' + T_rz' + T_rz/r = 0 ⟹ 1/r[r T_rz]' = p' ⟹ p' = c and [r T_rz]' = r c,
where c is as above a negative constant (pressure gradient). Consequently,
r T_rz = c r^2/2 + c_1 ⟹ T_rz = cr/2 + c_1/r .
From the constitutive equation T_rz = μ v', we recover the expression (<ref>), and, again, we set c_1 = 0 in order to avoid the singularity at r=0.
Let Q be the volumetric flow rate, i.e., Q = ∫_0^R 2π r v(r) dr. Using (<ref>) with c_1=0 we obtain
Q = π c R^4/8μ + c_2 π R^2 .
Hence
c_2 = 1/π R^2[ Q - π c R^4/8μ] and v(r) = cr^2/4μ + Q/π R^2 - cR^2/8μ .
Subtracting and adding the term cR^2/(4μ) we get
v(r) = - c/4μ(R^2 - r^2) + [Q/π R^2 + c/8μ R^2] .
If Q≠ - cπ R^4/8μ, then v(R) ≠ 0. There is slip. If Q = - c π R^4/8μ, then v(R) = 0. There is “no-slip". Note that the quantities μ, R, c (the pressure drop) and Q can be easily measured.
If we are certain that the pipe is a straight pipe with constant circular cross-section and sufficiently long that the end effects are negligible, then we could use the above condition to conclude whether the fluid slips or adheres at the point of contact with the solid surface. However, an obstruction in the interior of the pipe would make such a conclusion invalid. In fact, abnormality in the pressure gradient versus flow rate relationship is what a cardiologist depends upon to recognize the presence of aneurysms in blood vessels.
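To make the test concrete, the slip criterion can be evaluated with a few lines of code; the following Python sketch uses invented values for μ, R, c and Q (they are illustrative, not measurements) and computes v(R) from the profile derived above:

import math

mu = 0.1        # viscosity [Pa s] (illustrative value)
R = 0.01        # pipe radius [m]
c = -200.0      # measured pressure gradient p' = c < 0 [Pa/m]
Q = 9.0e-6      # measured volumetric flow rate [m^3/s]

Q_noslip = -c * math.pi * R**4 / (8.0 * mu)              # flow rate consistent with "no-slip"
v_wall = Q / (math.pi * R**2) + c * R**2 / (8.0 * mu)    # v(R) from the velocity profile above

print(f"no-slip flow rate = {Q_noslip:.3e} m^3/s, measured Q = {Q:.3e} m^3/s")
print(f"wall velocity v(R) = {v_wall:.3e} m/s")
print("the fluid slips at the wall" if abs(v_wall) > 1e-12 else "the fluid adheres at the wall")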
§.§ Cylindrical Couette problem
Couette flow, i.e., the flow between concentric cylinders (with radii R_i (inner cylinder) and R_o (outer cylinder), 0<R_i<R_o) rotating so that the azimuthal velocities of their surfaces are Ω_i and Ω_o, is characterized by the following conditions:
v⃗ = ω(r) e⃗_ϕ and p= p(r) , R_i < r < R_o.
We again assume that b⃗= 0⃗.
In the classical approach, stemming from (<ref>), the governing equations for the Navier-Stokes fluid take the form
p'(r) = ρω^2(r)/r and μ/ϱ[ ω” + ω'/r - ω/r^2] = 0
with the solution given by
ω(r) = Cr + D/r,
p(r) = ϱ( C^2r^2/2 - D^2/2r^2 + 2CD ln r).
From the “no-slip" boundary conditions for the velocity, i.e.,
ω(R_i) = Ω_i and ω(R_o) = Ω_o,
we conclude that
C = Ω_o R_o - Ω_i R_i/R_o^2 - R_i^2 and D= R_iR_o(Ω_i R_o - Ω_o R_i)/R_o^2 - R_i^2.
Then
ω(r) = Ω_o R_o - Ω_i R_i/R_o^2 - R_i^2r + R_iR_o(Ω_i R_o - Ω_o R_i)/R_o^2 - R_i^21/r,
and we get the corresponding formula for p from (<ref>) and (<ref>).
In the alternate approach, we start with governing balance equations that take the form
ϱω^2(r)/r = p' and (r^2 T_rϕ)'/r^2 = 2/r T_rϕ + T_rϕ' = 0
together with the constitutive equation of the form
T_rϕ = μ[ ω' - ω/r].
Let us assume that we know the applied torques at the inner and outer cylinders, i.e.,
T_rϕ(R_i) = M_i and T_rϕ(R_o) = M_o.
Then it follows from the second equation
in (<ref>) and these boundary conditions that
T_rϕ(r) = M_o R_o^2/r^2 = M_i R^2_i/r^2.
This means that, in order to obtain a solution of the form (<ref>), the data R_i, R_o, M_i and M_o have to satisfy the following compatibility condition
M_o R_o^2 = M_i R_i^2.
Next, using (<ref>) we observe from
μ(ω/r)' = 1/rμ[ω' - ω/r] = M_o R_o^2/r^3
that
ω(r) = 1/μ[β r - M_oR_o^2/2r] .
To fix the coefficient β, we use the equation for p and assume that
p(R_i)=p_*,
where p_* is a pressure that could be measured by a pressure transducer at the surface of the inner cylinder. With C=β/μ and D= - M_iR_i^2/(2μ) (a consequence of (<ref>) and (<ref>)), the general formula for the pressure leads to the following quadratic equation for β, namely
R_i^2/(2μ^2)β^2 - M_i R_i^2 (ln R_i)/μ^2 β - M_i^2R_i^2/(8μ^2) = p_*/ϱ ⟹ R_i^2 β^2 - 2 M_i R_i^2 (ln R_i) β - M_i^2R_i^2/4 = 2μ^2 p_*/ϱ.
We do not provide the explicit formula for β, but note that, under certain conditions on the data, the above equation can have two, one or no real solutions. Assuming β has been fixed, and referring again to (<ref>), we observe that if 2β = M_i there is “no-slip" on the inner cylinder (otherwise the fluid slips there), while if 2β = M_o there is “no-slip" on the outer cylinder. Note that under certain conditions there are two values of the pressure that can accommodate both of these conditions.
Alternatively, we could fix β by prescribing the volumetric flow rate Q across a cross section. We show the calculation with the caveat that for the cylindrical Couette flow Q is not easy to measure.
Using (<ref>), it follows from
Q= ∫_R_i^R_oω(r) dr
that
ω(r) = 1/μ[2μ Q + M_oR_o^2 lnR_o/R_i/R_o^2 - R_i^2 r - M_oR_o^2/2r] .
Now, if M_o = 2(2μ Q + M_oR_o^2 ln(R_o/R_i))/(R_o^2 - R_i^2), then ω(R_o)= 0, i.e., there is “no-slip". If the condition is not fulfilled there is slip. Similarly, using also the compatibility condition (<ref>), if
M_i = 2(2μ Q + M_iR_i^2 ln(R_o/R_i))/(R_o^2 - R_i^2), then ω(R_i)=0 and the fluid adheres to the inner cylinder. Otherwise, there is slip.
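The analogous check for the Couette problem is equally mechanical; in the Python sketch below the values of μ, R_i, R_o, M_o and Q are again invented for illustration, β is obtained from the flow rate as above, and the wall values of ω are compared with the adherence conditions 2β = M_i and 2β = M_o:

import math

mu = 0.1                      # viscosity [Pa s] (illustrative)
R_i, R_o = 0.05, 0.08         # cylinder radii [m]
M_o = 2.0                     # shear stress T_rphi at the outer wall [Pa]
M_i = M_o * R_o**2 / R_i**2   # compatibility condition
Q = 1.0e-4                    # flow rate across a radial cross section [m^2/s]

beta = (2*mu*Q + M_o*R_o**2*math.log(R_o/R_i)) / (R_o**2 - R_i**2)

def omega(r):
    # azimuthal velocity profile derived above
    return (beta*r - M_o*R_o**2/(2.0*r)) / mu

print(f"omega(R_i) = {omega(R_i):.4f} m/s; adheres at inner wall: {math.isclose(2*beta, M_i)}")
print(f"omega(R_o) = {omega(R_o):.4f} m/s; adheres at outer wall: {math.isclose(2*beta, M_o)}")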
§.§ Plane Poiseuille flow
We consider the flow that takes place between two parallel plates located at y=0 and y=h and assume that b⃗= 0⃗ (no gravity). Furthermore, we assume that v⃗ = u(y)i⃗.
In the classical approach, the second and third equation of (<ref>) imply that p=p(x), while the first equation reduces to
-dp/dx + μd^2u/dy^2 = 0
which implies that
p(x) = cx + b and u(y) = c/2μ y^2 + dy + e ,
where b, c, d and e are constants, c being negative so that the fluid flows from a region of higher pressure to one that is of lower pressure. Requiring “no-slip" boundary conditions on the plates, i.e., u(0) = u(h) = 0 one concludes that
u(y) = c/2μ y(y-h) .
In the alternate (new) approach, we first observe that the assumption v⃗ = u(y)i⃗ and the constitutive equation (<ref>) imply that
= [ -p(x,y,z) τ(y) 0; τ(y) -p(x,y,z) 0; 0 0 -p(x,y,z) ] .
Then the second and third equation in (<ref>) lead to p = p(x), while the first equation of (<ref>) then gives
-dp/dx + dτ/dy = 0.
Consequently,
p(x) = cx + b and τ(y) = cy + d ,
where again b, c and d are arbitrary constants, c being negative. The constitutive equation τ = μ u' then leads to
u(y) = c/2μ y^2 + d/μ y + e .
Now, we impose two new conditions, namely
u(h) = u(0) (symmetric velocity profile) and ∫_0^h u(y) dy = Q.
Applying the first condition from (<ref>) on the formula given in (<ref>), we observe that
c/2μ h^2 + d/μ h + e = e ⟹ d = - ch/2.
Substituting this in (<ref>) and using the second condition in (<ref>) we obtain
Q = -c/12μ h^3 + eh ⟹ e = Q/h + ch^2/12μ .
Hence
u(y) = c/2μ y^2 - ch/2μ y + ch^2/12μ + Q/h .
This implies that
u(0) = u(h) = ch^2/12μ + Q/h .
We conclude that if Q≠ -ch^3/12μ, then u(0) ≠ 0 and u(h) ≠ 0 as well. That means that the fluid is slipping. If, however, Q = -ch^3/12μ, then u(0)=0, u(h)=0 and the solution takes the form (<ref>), as obtained above using the classical approach.
As an experiment between two parallel plates is not really feasible, the above calculation has been provided to show that one can obtain the solution to the flow problem without having to specify the “no-slip" boundary condition. If the fluid is flowing through a cylinder of rectangular cross-section due to a pressure gradient (see Fig. <ref>) and if b≫ h and ℓ≫ h, so that end effects can be neglected, we can assume that, except in regions very close to the vertical walls, the fluid is undergoing plane Poiseuille flow to a reasonable approximation; such an experiment can be performed and Q can be measured. The point is that this approach of not specifying the “no-slip" condition, but using alternative conditions instead, could be used in other problems wherein one could indeed carry out experiments and assess the validity of the “no-slip" condition.
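The algebra behind this criterion can also be verified symbolically; the following sketch, assuming SymPy is available, imposes the two conditions in (<ref>) on the general profile and recovers d, e and u(0):

import sympy as sp

y, h, mu, Q, c = sp.symbols('y h mu Q c', real=True)
d, e = sp.symbols('d e', real=True)

u = c/(2*mu)*y**2 + d/mu*y + e                        # general profile with unknown d, e
conds = [sp.Eq(u.subs(y, h), u.subs(y, 0)),           # symmetric velocity profile
         sp.Eq(sp.integrate(u, (y, 0, h)), Q)]        # prescribed flow rate
sol = sp.solve(conds, [d, e], dict=True)[0]

print(sol[d])                                                         # -> -c*h/2
print(sp.simplify(sol[e] - (Q/h + c*h**2/(12*mu))))                   # -> 0
print(sp.simplify(u.subs(sol).subs(y, 0) - (c*h**2/(12*mu) + Q/h)))   # -> 0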
§.§ Plane Couette flow
The flow takes place between two parallel plates located at y=0 and y=h. We shall assume that b⃗= 0⃗ (no gravity), v⃗ = u(y)i⃗ and that p is constant. It then follows from (<ref>) that ∇· T = (τ', 0, 0).
We first recall the classical approach. This is characterized by the fact that the upper plate moves with the constant velocity V. Under these circumstances, the problem (<ref>)-(<ref>) reduces to u”(y) = 0, which implies that u(y) = Cy+D. If u(0)= 0 and u(h)=V, then u(y) = V/hy.
In the alternate (new) approach we assume that a shear stress τ_app is applied on the fluid by the moving plate in contact with the fluid (this being equal and opposite to the shear stress exerted by the fluid on the plate), i.e.,
τ(h) = τ_app.
The governing system of equations (<ref>) and (<ref>) reduces to
τ = μ u' and τ' = 0 .
This together with the boundary condition (<ref>) yields
τ(y) = τ_app and u(y) = τ_app/μ y + D ,
where D is a constant.
At this juncture, let us assume that the volumetric flow rate Q can be measured and is given. Thus,
Q= ∫_0^h u(y) dy = τ_app/2μ h^2 + D h ,
which leads to
D = 1/h[Q - τ_app/2μ h^2].
Thus
u (y) = τ_app/μ y + 1/h[Q - τ_app/2μ h^2].
Notice that u(0) = 1/h[Q - τ_app/2μ h^2]. If [Q - τ_app/2μ h^2]≠ 0, then we can conclude that there has to be slip! If [Q - τ_app/2μ h^2] = 0, then there is “no-slip" at the lower plate.
One can do the same at the upper plate (located at y=h). As
u(h) = τ_app/μ h + 1/h[Q - τ_app/2μ h^2] = τ_app/2μ h + Q/h,
we observe that if the speed V of the upper plate associated with the applied shear stress τ_app is such that
τ_app/2μ h + Q/h≠ V ,
then we can conclude that there is slip!
As in the previous case, the main purpose of the calculation is to show that one does not need the "no-slip" boundary condition to obtain a solution to the flow problem.
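For completeness, the two checks for plane Couette flow can be scripted in the same way; the values of τ_app, Q, h, μ and V below are invented for illustration:

mu = 0.1          # viscosity [Pa s] (illustrative)
h = 0.002         # gap between the plates [m]
tau_app = 5.0     # shear stress applied by the upper plate [Pa]
Q = 1.2e-4        # measured flow rate per unit depth [m^2/s]
V = 0.12          # measured speed of the upper plate [m/s]

u_lower = (Q - tau_app*h**2/(2.0*mu)) / h      # u(0)
u_upper = tau_app*h/(2.0*mu) + Q/h             # u(h)

print(f"u(0) = {u_lower:.4e} m/s -> {'slip' if abs(u_lower) > 1e-12 else 'no slip'} at the lower plate")
print(f"u(h) = {u_upper:.4e} m/s vs V = {V} -> {'slip' if abs(u_upper - V) > 1e-12 else 'no slip'} at the upper plate")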
§.§ Flow down an inclined plane due to gravity
In the co-ordinate system associated with the inclined plane, the gravitational force takes the form ϱb⃗ = (ϱ g sinθ, -ϱ g cosθ, 0), where g is the acceleration due to gravity and θ is the angle of inclination.
In the classical approach, starting from the assumptions v⃗=u(y)i⃗ and p=p(y), one deduces from the governing equations (<ref>)-(<ref>) that the only non-diagonal element of the stress T is T_xy = T_yx = τ(y). The equations stemming from (<ref>) take the form
μ u” + ϱ g sinθ = 0 ,
-p' - ϱ g cosθ = 0 .
After integrating (<ref>) we obtain (ℓ is a constant)
μ u' = -ϱ g (sinθ) y + μℓ .
This implies that (m is yet another constant)
u(y) = -ϱ g (sinθ)/2μ y^2 + ℓ y + m .
At the free surface, one assumes that T n⃗ vanishes. Using (<ref>) and the above assumptions, with n⃗ = (0,1,0), we have T n⃗ = (τ(h), -p(h), 0) = (μ u'(h), -p(h), 0). Consequently, T n⃗=0⃗ implies that
u'(h) = 0 and p(h)=0 .
The first condition together with (<ref>) leads to ℓ= ϱ g (sinθ) h/μ. Hence,
(<ref>) takes the form
u(y) = ϱ g sinθ/μ (h -y/2) y + m .
The required “no-slip" condition on the bottom, i.e., u(0)=0 gives m=0 and thus
u(y) = ϱ g sinθ/μ (h -y/2) y .
Note that the condition p(h)=0 together with (<ref>) implies
p(y) = ϱ g cosθ (h-y) .
In the alternate (new) approach, we have v⃗ = u(y) i⃗, p=p(y) and τ = τ(y).
The balance of linear momentum (<ref>) implies
τ' + ϱ g sinθ = 0 ,
-p' - ϱ g cosθ = 0 .
After integrating (<ref>) we obtain (ℓ̃ is a constant)
τ(y) = -ϱ g (sinθ) y + μℓ̃ .
The constitutive equation (<ref>) implies that (m̃ is a constant)
u(y) = -ϱ g (sinθ)/2μ y^2 + ℓ̃y + m̃ .
At the free surface, we assume that T n⃗ = (τ(h), -p(h), 0) vanishes, i.e.,
τ(h) = 0 and p(h)=0 .
The first condition together with (<ref>) gives ℓ̃= ϱ g (sinθ) h/μ. Hence,
(<ref>) takes the form
u(y) = ϱ g sinθ/μ (h -y/2) y + m̃ .
Assuming that we know the volumetric flow rate Q, we can fix m̃. As Q= ∫_0^h u(y) dy, we conclude from the above formula for u that
Q = ∫_0^h [ ϱ g sinθ/μ (h -y/2) y + m̃] dy = ϱ g (sinθ) h^3/3 μ + m̃ h ,
which implies that
m̃ = Q/h - ϱ g (sinθ) h^2/3 μ .
Clearly, if Q≠ϱ g (sinθ) h^3/3 μ, then m̃≠ 0 and consequently u(0) ≠ 0! The fluid slips along the inclined plane. On the other hand, if Q = ϱ g (sinθ) h^3/3 μ, then m̃ = 0 and u(0)=0.
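The corresponding numerical check for the film flow, once more with invented data, reads as follows:

import math

rho = 1000.0                 # density [kg/m^3] (illustrative)
g = 9.81                     # gravitational acceleration [m/s^2]
theta = math.radians(10.0)   # inclination angle
mu = 0.5                     # viscosity [Pa s]
h = 0.003                    # film thickness [m]
Q = 4.0e-5                   # measured flow rate per unit width [m^2/s]

Q_noslip = rho*g*math.sin(theta)*h**3 / (3.0*mu)     # flow rate if the fluid adheres at the bottom
u0 = Q/h - rho*g*math.sin(theta)*h**2 / (3.0*mu)     # u(0), the velocity at the inclined plane

print(f"no-slip flow rate = {Q_noslip:.3e} m^2/s, measured Q = {Q:.3e} m^2/s")
print(f"u(0) = {u0:.3e} m/s -> {'the fluid slips along the incline' if abs(u0) > 1e-12 else 'no slip'}")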
§ CONCLUSION
In this paper, we have articulated a methodology for obtaining solutions to flow problems of the Navier-Stokes fluid without appealing to the “no-slip" boundary condition, and we have also advanced a procedure for testing the validity of the “no-slip" boundary condition based on easily measurable quantities. On knowing that the fluid slips at the solid boundary, we can make additional assumptions concerning the nature of the slip, say Navier's slip, and in fact determine the extent of the slip. We intend to address this in a forthcoming study.
In general, we show that, in the absence of certain knowledge concerning the boundary condition for the velocity at a solid boundary, it might be best to use the system of equations consisting of the balance laws as well as the constitutive relation simultaneously, as in the case of implicit constitutive relations.
10
Baundryetal2001
J. Baudry, E. Charlaix, A. Tonck and D. Mazuyer, Experimental evidence for a large slip effect at a nonwetting fluid-solid interface, Langmuir 17 (2001) 5232–5236.
blechta2020
J. Blechta, J. Málek and K. R. Rajagopal, On the classification of
incompressible fluids and a mathematical analysis of the equations that
govern their motion, SIAM J. Math. Anal. 52 (2020), no. 2,
1232–1289.
Churaevetal1984
N. V. Churaev, V. D. Sobolev and A. N. Somov, Slippage of liquids over lyophobic solid surfaces, J. Colloid Interface Sci. 97 (1984), 574–581.
Craigetal2001
V. S. Craig, C. Neto and D. R. M. Williams, Shear-dependent boundary slip in an aqueous Newtonian liquid, Phys. Rev. Lett. 87 (2001), 054504.
goldstein1938
S. Goldstein, Modern Developments in Fluid Dynamics, Vol. 2, Oxford University Press, Oxford (1938).
H1
S. G. Hatzikiriakos and J. M. Dealy, Wall slip of molten high density polyethylene. I. Sliding plate rheometer studies, J. Rheol. 35 (1991), 497–523.
H2
S. G. Hatzikiriakos and J. M. Dealy, Wall slip of molten high density polyethylenes. II. Capillary rheometer studies, J. Rheol. 36 (1992), 703–741.
MPrKRR
J. Málek, V. Průša and K. R. Rajagopal, Generalizations of the Navier-Stokes fluid from a new perspective,
Internat. J. Engrg. Sci. 48 (2010) 1907–1924.
Pitet2000
R. Pit, H. Hervet and L. Léger, Direct experimental evidence of slip in hexadecane: solid interface, Phys. Rev. Lett. 85 (2000), 980–983.
Raj.2003
K. R. Rajagopal, On implicit constitutive theories, Appl. Math. 48
(2003) 279-319.
Raj.2006
K. R. Rajagopal, On implicit constitutive theories for fluids, J. Fluid Mech. 550
(2006) 243–249.
Stokes1845
G. G. Stokes, On the theories of the internal friction of fluids in motion, and of the equilibrium and motion
of elastic solids, Trans. Cambridge Phil. Soc. 8 (1845), 287-305.
Vinogradova1999
O. I. Vinogradova, Slippage of water over hydrophobic surfaces, Int. J. Miner. Process. 56 (1999), 31–60.
|
http://arxiv.org/abs/2307.02778v1
|
20230706051633
|
Not gone with the Wind: Survival of High-Velocity Molecular Clouds in the Galactic center
|
[
"Mengfei Zhang",
"Miao Li"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
High-velocity atomic clouds in the Galactic center have attracted significant attention due to their enigmatic formation process, which is potentially linked to the starburst or supermassive black hole activities in the region. Further, the discovery of high-velocity molecular clouds (HVMCs) presents a greater puzzle, because they are much denser and more massive. If the HVMCs were accelerated by the strong activities in the Galactic center, they would be expected to be destroyed before reaching such high velocities. To shed light on this phenomenon, we perform three-dimensional numerical simulations to investigate the origin and hydrodynamic evolution of HVMCs during a starburst in the Galactic center. We find that the presence of a magnetic field provides effective protection and acceleration to molecular clouds (MCs) within the galactic winds. Consequently, the MCs can attain latitudes of approximately 1 kpc with velocities around 200 km s^-1, consistent with the observed characteristics of HVMCs. The consistency of our findings across a wide parameter space supports the conclusion that HVMCs can indeed withstand the starburst environment in the Galactic center, providing valuable insights into their survival mechanisms.
methods: numerical – Galaxy: centre – magnetohydrodynamics – ISM: clouds – galaxies: starburst
§ INTRODUCTION
Galactic feedback, especially the nuclear wind, is now commonly accepted as an important process affecting galactic evolution <cit.>, although it is currently rather weak in our Milky Way <cit.>.
Therefore, the Milky Way is expected to have been active in the past and to have quenched since then, which should have left some corresponding relics.
Over the past tens of years, these feedback relics possibly have been discovered at radio, X-ray and γ-ray band, such as the Galactic Center Lobe (GCL; ), the microwave haze <cit.>, the polarized lobes <cit.>, the Fermi bubbles <cit.>, the radio bubbles <cit.>, the X-ray chimneys <cit.> and the eROSITA bubbles <cit.>.
These structures have scales ranging from ∼100 pc to ∼10 kpc, indicating that they originated from a series of violent activities.
In addition, in the Galactic center, many high-velocity clouds (HVCs) were detected both above and below the Galactic plane <cit.>. In particular, two high-velocity molecular clouds (HVMCs) were also discovered inside the HVCs <cit.>.
The altitudes of the two MCs are 0.6 and 0.9 kpc, respectively.
Their velocities along the z-axis are ∼ 180 and 150 km s^-1, while the radial velocities are ∼ 240 and 300 km s^-1 based on a biconical model <cit.>.
Their molecular masses are both ∼ 380 M_⊙, and their atomic masses are 220 and 800 M_⊙.
The HVMCs show good coincidence with some of the aforementioned relics, so they possibly originate from a similar process, e.g., acceleration by the Galactic nuclear wind.
Although these relics and HVCs/HVMCs are commonly suggested to be produced by the feedback activity, the detailed mechanism is still the subject of intense debate.
Several models have been proposed to explain their formation, some of which focus on one structure <cit.>, while others attempt to simultaneously explain multiple structures <cit.>.
Most of these models exhibit self-consistency, and some following simulations have provided further validation of their viability <cit.>.
However, simulating the acceleration of HVMCs still presents a challenge for their formation models.
Compared to atomic clouds, molecular clouds are denser and cooler, making it more difficult to accelerate them to high velocity without disruption <cit.>.
Some simulations for extra-galactic interaction between clouds and nuclear winds show that clouds can be protected by magnetic field <cit.>, cooling <cit.> and thermal conduction <cit.>, which confirms that cool clouds can survive acceleration by a hot wind.
Nevertheless, these simulations usually involve a constant hot wind, which is completely different from the unpredictable nuclear wind produced by starburst or AGN.
Moreover, most of them focus on high-latitude atomic or even ionized clouds, so they cannot clearly explain the formation of HVMCs at ∼ 1 kpc in our Milky Way.
It is therefore necessary to perform robust simulations to see whether the HVMCs observed in the Galactic center can be reproduced.
The formation of HVMCs is closely linked to the other feedback relics, and could potentially be used to distinguish among different models for their origin.
While the activity of active galactic nuclei (AGN) has the capability to accelerate molecular clouds (MCs) to high velocities, it is often so powerful that the clouds usually diffuse into atomic/ionized form, unless there are periodic bursts, such as those arising from accretion onto the supermassive black hole (SMBH) Sgr A* <cit.>.
These bursts should be weaker than normal AGN, but still release comparable amounts of energy, allowing the MCs to be efficiently accelerated without being quickly destroyed.
Relatively speaking, a starburst is a more feasible explanation for the formation of HVMCs, as the Galactic center exhibited a higher star formation rate about 30 million years ago <cit.> and the molecular outflow is universal in active star-forming galaxies <cit.>.
In fact, although supernova feedback is important in galaxy formation <cit.>, its working mechanism is not fully understood, which makes it difficult to understand the role of HVMCs and also limits cosmological simulations <cit.>.
Currently, it is known that randomly distributed SNe in the disk drive only inefficient galactic winds, because most supernova remnants lose their energy radiatively before breaking out of the disc <cit.>, making it difficult to push the HVMCs to high latitude in such a galactic wind.
Nevertheless, a starburst in the Galactic center can produce much stronger galactic wind and more efficiently accelerate the HVMCs.
It is expected that the starburst ended recently, but left these feedback relics and HVMCs.
It is difficult to tell which model is correct, because the hydrodynamical evolution of AGN and starburst activity can be similar at large scale.
Their energy input rates can be similar; as a result, the winds driven by these activities can reach comparable velocities at high latitude.
However, there should be noticeable differences at smaller scale (≤ 1 kpc), such as the acceleration process of HVMCs and the morphology of relics, because the starburst can happen more randomly in a much larger region than the AGN activity.
Moreover, the metallicity of HVMCs is possibly different for AGN and starburst models, since starburst can produce more heavy elements.
Although <cit.> indeed found different metallicity distributions of HVCs in the Galactic center, they explain that HVCs originate in the Milky Way's disk and halo.
These models can be further examined through simulations.
In this paper, we investigate whether HVMCs observed in our Milky Way can be accelerated to high latitudes by a starburst. To this end, we perform a detailed simulation of the process.
We start by simulating a series of random core-collapse supernova explosions in the Galactic center with a frequency estimated based on a past star formation rate <cit.>. We set a molecular cloud above the explosion region to study how the cloud is accelerated by the outflow wind and whether it can survive until it reaches 1 kpc, a position similar to the clouds detected by <cit.>. The explosion region is believed to be adjacent to the central molecular zone (CMZ), where more giant molecular clouds steadily exist.
Next, we check the density, temperature, and velocity of the clouds obtained from the simulations and modify initial conditions to study the influence of different parameters.
We will try to identify various HVMCs candidates obtained from the simulation by comparing with the observation and study their properties in detail.
Finally, we investigate the mixture of clouds and the ejecta of supernovae to disentangle the metallicity in HVMCs.
This paper will describe the simulation setup in Section <ref> and show the results in Section <ref>.
The formation of the HVMCs, their metallicity and their relation with feedback relics will be discussed in Section <ref>.
Section <ref> is a summary.
§ SIMULATION
To perform the simulations, we utilize the publicly available, modular magnetohydrodynamic (MHD) code PLUTO[http://plutocode.ph.unito.it/] <cit.>.
This grid-based MHD code employs a second-order Runge–Kutta time integrator and a Harten-Lax-van Leer Riemann solver for middle contact discontinuities, making it well-suited for simulating the interaction between the SN shock and the molecular clouds.
§.§ Basic configuration
The simulation is based on a three-dimensional (3D) MHD cartesian frame with a grid of 200 × 200 × 2000, equivalent to a physical volume of 100 × 100 × 1000 pc^3 and a linear resolution of 0.5 pc pixel^-1.
We set the z-axis to be perpendicular to the Galactic disk (north as positive), the y-axis to run along decreasing Galactic longitude, and the x-axis to be parallel to the line-of-sight (the observer at the negative side).
We adopted an outflow boundary condition for all directions, which means that some of the clouds' material may flow outside of the simulation box.
The simulation is governed by the ideal MHD conservation equations,
∂ρ/∂t + ∇·(ρv) = 0 ,
∂(ρv)/∂t + ∇·[ρvv + 1p]^T = -ρ∇Φ,
∂E_t/∂t + ∇·[(ρv^2/2 + ρϵ + p + ρΦ) v - (v×B)×B/4π]
= -∂(ρΦ)/∂t,
∂B/∂t - ∇×(v×B) = 0,
where ρ is the mass density, p the thermal pressure, v the velocity, B the magnetic field, 1 the dyadic tensor, Φ the gravitational potential, and E_t the total energy density, defined as:
E_t = ρϵ + (ρv)^2/2ρ + B^2/8π,
where ϵ is the internal energy.
We use an ideal-gas equation of state, i.e., ρϵ = p/ (Γ -1), in which the ratio of specific heats Γ = 5/3.
To accurately model the gravitational potential in the simulation volume, we assume that it is static and fully determined by the SMBH, the nuclear star cluster (NSC), and the nuclear disk (ND).
A point mass of 4×10^6 M_⊙ is taken to represent the SMBH.
For the NSC and the ND, we adopt a spherical distribution following
<cit.>.
To incorporate radiative cooling in the simulation, we use a piece-wise cooling function with a lower limit of the cooling temperature set to 100 K.
We assume a solar abundance (H abundance X_⊙=0.711, He abundance Y_⊙=0.2741, metallicity Z_⊙=0.0149) for the ISM and the initial MC.
The multiphase gas in the Galactic center includes hot ionized (∼ 10^6 K) <cit.>, warm ionized (10^4 to 10^5 K) <cit.> and cool atomic (10^3 to 10^4 K) gas <cit.>, etc., and gas cooler than 100 K is usually taken to be molecular.
In the simulation, temperatures below 100 K are typically not due to cooling, but rather due to adiabatic expansion.
§.§ Supernova explosion and molecular clouds
The initial conditions for our simulations are based on both observations and analytical models. Observationally, the HVMCs typically exhibit densities ranging from 10 to 300 cm^-3 and outflow velocities between 200 and 300 km s^-1 <cit.>. However, to account for the significant gas loss that occurs during their propagation, we assume that the initial densities of the MCs must be higher. In addition, we need to consider other parameters such as the supernova explosion frequency and the initial latitude of the MCs to ensure that they reach the observed velocities without being completely destroyed. Therefore, we perform a systematic exploration of the parameter space to identify the most plausible initial conditions for our simulations.
Here, we introduce the cloud crushing time,
t_cc = r_mc/v_sn√(ρ_mc/ρ_sn),
to quantify the timescale of cloud crushing <cit.>, in which r_ mc is the radius of the initial molecular cloud, ρ_ mc the density of the cloud, v_ sn the wind velocity produced by supernovae, and ρ_ sn the wind density.
Based on general understanding, a cloud should begin to crush when the evolution time is longer than t_ cc, and should totally crush after a period of 2t_ cc. However, this estimation does not take into account the effects of the magnetic field and cooling mechanisms, which can play an important role in the cloud's evolution.
In a cylindrical region with a radius of 35 pc and a height of 10 pc, the fiducial SN birth rate is set to be 10 kyr^-1 <cit.>, which is estimated by assuming an SFR of 1 M_⊙ yr^-1, a <cit.> initial mass function (IMF) and a minimum mass of 8 M_⊙ for the progenitor star of a core-collapse SN.
The center of the cylindrical region is set to be located at the western 100 pc of Sgr A*.
<cit.> and <cit.> estimated a current SFR of 0.1 M_⊙ yr^-1 inside the CMZ, while <cit.> found that star formation in the ND (which has a similar radial extent as the CMZ) has been relatively active in the past 30 Myr, with an SFR of 0.2-0.8 M_⊙ yr^-1.
Our assumed SFR of 1 M_⊙ yr^-1 is compatible with a local starburst, which may be the case if SN events have been episodic and clustered on a ≲ Myr timescale.
This SFR is actually larger than the typical value in such a small region, so we also test a run with lower SN birth rate of 5 kyr^-1.
We have neglected Type Ia SNe, which have a birth rate of ≲0.05 kyr^-1 according to the enclosed stellar mass in the ND/NSC <cit.>.
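The conversion from SFR to SN rate can be reproduced with a short numerical estimate; the sketch below integrates a Kroupa-type IMF (the slopes, break masses and the 0.01-120 M_⊙ mass range are our assumptions, not values quoted in this paper) and counts stars above 8 M_⊙ per unit stellar mass formed:

import numpy as np

def kroupa_imf(m):
    # Unnormalized dN/dm with the usual slopes, kept continuous at the break masses
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.08, (m / 0.08)**-0.3,
           np.where(m < 0.5, (m / 0.08)**-1.3,
                    (0.5 / 0.08)**-1.3 * (m / 0.5)**-2.3))

m = np.logspace(np.log10(0.01), np.log10(120.0), 200000)   # stellar masses [Msun]
xi = kroupa_imf(m)

mass_formed = np.trapz(m * xi, m)                           # mass formed per unit normalization
n_ccsn = np.trapz(np.where(m >= 8.0, xi, 0.0), m)           # number of stars with M >= 8 Msun

sfr = 1.0                                                   # assumed SFR [Msun/yr]
rate = n_ccsn / mass_formed * sfr                           # core-collapse SNe per year
print(f"{rate * 1e3:.1f} SNe per kyr for SFR = {sfr} Msun/yr")   # roughly 10 per kyr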
The SNe are set to randomly explode in the cylindrical region, and we use same random seed in all runs.
The density of the MCs follows an inverse square law, n_mc = n_0/r^2, in which n_0 is the central density, r the radius.
In the fiducial simulation, n_0 = 1500 H cm^-3, the maximum radius of the initial MC is 10 pc, and the height of the MC from the Galactic plane is 50 pc.
Based on these settings (r_mc = 10 pc, v_sn = 1000 km s^-1, n_mc = 15∼50 H cm^-3, n_sn = 0.01 H cm^-3), we estimate t_cc ∼ 1-2 Myr, so the cloud would totally crush after 4 Myr in the classical analysis.
However, in our preliminary tests, we find the cloud can survive beyond 7 Myr by including a vertical magnetic field and the cooling effect. In this scenario, after around 7 Myr, the cloud will run outside of the simulation box.
Therefore, the simulation results are presented up until around 7 Myr.
In addition, once injected, the ejecta will eventually partially mix with the molecular clouds, and change their metallicity.
To study the mixture, we introduce two tracer parameters, Q_1 and Q_2, which are both evaluated at each pixel in the simulation and obey a simple conservation law:
∂ (ρ Q_i)/∂ t + ∇· (ρ Q_i v) = 0.
Q_1 has a value of 1 for pure SN ejecta and 0 for the unpolluted molecular clouds and ISM, while
Q_2 has a value of 1 for pure molecular clouds and 0 for the unpolluted SN ejecta and ISM.
The values in between indicate a mixed gas.
These tracer parameters allow us to track the mixing process over time and analyze the distribution of metals in the simulated system.
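As a toy illustration of how such passive tracers behave, the one-dimensional first-order upwind sketch below (a deliberately simplified stand-in for the PLUTO machinery, with made-up initial data) advects ρ and ρQ with a common velocity and shows that the mixing fraction Q remains bounded between 0 and 1:

import numpy as np

nx, L, v, cfl = 400, 1.0, 1.0, 0.5
dx = L / nx
dt = cfl * dx / abs(v)
x = (np.arange(nx) + 0.5) * dx

rho = np.where((x > 0.1) & (x < 0.3), 10.0, 1.0)   # a dense "cloud" in a light medium
Q = np.where((x > 0.1) & (x < 0.3), 0.0, 1.0)      # tracer: 1 for ambient/ejecta gas, 0 for cloud gas
rhoQ = rho * Q

for _ in range(200):                                # periodic boundaries via np.roll, v > 0 upwinding
    rho = rho - dt/dx * (v*rho - np.roll(v*rho, 1))
    rhoQ = rhoQ - dt/dx * (v*rhoQ - np.roll(v*rhoQ, 1))

Q_new = rhoQ / rho
print("tracer stays in [0, 1]:", bool((Q_new >= -1e-12).all() and (Q_new <= 1.0 + 1e-12).all()))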
§.§ The ISM and the magnetic field
We initialize our simulation with a uniform distribution of ISM density and temperature, with values of 0.01 H cm^-3 and 10^6 K, respectively, over the entire simulation box. Although thermal pressure is expected to be higher at lower latitudes due to rough hydrostatic equilibrium against gravity, our preliminary tests suggest that this effect is unimportant since the shock wave from the supernovae breaks this equilibrium early on.
Moreover, the stellar wind in the Galactic center is also strong and can unremittingly break this equilibrium.
The distribution of magnetic fields in the Galactic center remains a challenging problem, particularly in the central tens of parsecs <cit.>, with many different components, influencing the strength and direction of the magnetic field.
There is actually a general model for the whole Milky Way <cit.>, in which the magnetic field is parallel to the Galactic plane at lower latitudes and gradually tends to be perpendicular at higher latitudes, but this is only an approximation in the Galactic center.
Therefore, in this work we test different runs with parallel, perpendicular and no magnetic field, respectively.
The magnetic field strength ranges from ∼ 1 mG in the central tens of parsecs <cit.> to a few μG at 1 kpc above the Galactic plane <cit.>.
For simplicity, we adopt a homogeneous magnetic strength of 10 μG over the whole simulation box.
The initial parameters are summarized in Table <ref>.
§ RESULTS
In this section, we present the simulation results.
We first describe in detail the evolution of the MCs in the vertical magnetic field in the fiducial run (Section <ref>) .
We then examine the role of the magnetic field in the two additional runs, one with horizontal magnetic field (Section <ref>) and the other with no magnetic field (Section <ref>), to illustrate how the change affects the formation of the HVMCs.
Finally, we study the influence of the cloud density and the supernovae explosion frequency (Section <ref>).
To quantitatively compare with the observation, we here parameterize the main features of the observed MCs, MW-C1 and MW-C2 <cit.> .
The altitudes of the two MCs are 0.6 and 0.9 kpc, respectively, so we choose ∼ 1 kpc as the standard position to guarantee the simulated clouds can indeed reach the height.
Their velocities along the z-axis are ∼ 180 and 150 km s^-1, while the radial velocities are ∼ 240 and 300 km s^-1 based on a biconical model <cit.>.
Our simulation focuses on the propagation vertical to the Galactic plane, so we take 200^+100_-50 km s^-1 as the typical value.
The molecular masses of MW-C1 and MW-C2 are both ∼ 380 M_⊙, but their atomic masses are 220 and 800 M_⊙.
Thus we pay more attention to matching the molecular mass, while the atomic mass can vary over a large range.
With a diameter of ∼30 pc, their mean molecular number densities are 130 and 190 H_2 cm^-3, and the mean atomic number densities are 1 and 3 H cm^-3.
In this work, we take clouds denser than 10 H cm^-3 as MCs, and clouds with densities between 1 and 10 H cm^-3 as atomic clouds.
In addition, there are also some qualitative features which are worth reproducing.
Surrounding the HVMCs, there are always some atomic clouds with lower density and larger volume, which were usually taken as HVCs before the discovery of HVMCs.
The number of detected HVCs is much larger than HVMCs, and most of HVCs are uniformly distributed above 250 pc <cit.>.
There are possibly more HVMCs hidden in the HVCs, so more high-resolution and high-sensitivity molecular observations are necessary.
§.§ The run for the fiducial set
The column density and velocity evolution of f100n1500v are shown in Figure <ref>, in which the clouds reach 1 kpc at 7 Myr.
In the following, we refer to the time at which the results best match the observations as the fiducial time.
At the early stage, the supernova shock wave would blow the initial MC to be a thin filament, because the central density of the cloud was much higher than the boundary.
The filamentary structures have also been investigated by <cit.>, who claim the filaments are only formed in magnetized environment and the cloud will crush to small clumps without magnetic field, consistent with our results.
When the peripheral low-density material was blown to higher latitude, the central dense core was being slowly accelerated.
Some pioneer high-velocity clumps broke away from the main cloud at 3 Myr, and run outside of the simulation box at 4 Myr.
At this stage, the main cloud became more irregular, but kept as one cluster.
After 7 Myr, the cloud would reach 1 kpc, a position consistent with the observation.
During the propagation of the cloud, the supernovae shock was always being reflected by the cloud and gradually produced stronger reverse shock.
This process leads to the obvious dividing line both for the density and velocity at 7 Myr.
The reverse shock could roughly balance the forward shock, as a result, the cloud acceleration rate largely decreased.
We also show the temperature and magnetic field evolution in Figure <ref>.
The outflow wind interacts with the MCs, heating the surrounding ISM and compressing the magnetic field, while the central cores of the clouds still contain cool gas and a weak magnetic field at the early stage.
The shock wave from the supernovae can sweep the whole simulation box at ∼ 1 Myr and heat the ISM to high temperature.
However, with the receding of a part of the outflow wind at higher latitude, the magnetic field becomes much weaker.
Figure <ref> & <ref> also show the starburst wind is not constant, especially at low latitude, because we adopt the random supernovae explosions in the simulations.
The ever-changing wind will significantly influence the evolution of the initial cloud.
However, the variation of the starburst wind is small at high latitude, where it can be taken as a constant wind.
To study whether the clouds at 7 Myr can still be taken as MCs with a velocity of ∼ 200 km s^-1, we show the density-velocity and density-temperature maps in Figures <ref> and <ref>.
At this moment, the simulation box contains three components: the clouds, ISM-dominated and SNR-dominated region, respectively corresponding to the lower right, lower left and central part of Figure <ref>, and the lower right part, the central and the upper left band of Figure <ref>.
Figure <ref> is similar to the Figure 8 of <cit.>, but we replace their constant wind with the simulated starburst wind.
As a result, Figure <ref> contains the SNR-dominated region, i.e, the upper left band, which is absent in their work.
The clouds selected based on criteria of n≥ 1 H cm^-3 and T≤ 10^4 K have a total mass of ∼ 1500 M⊙, while those selected based on criteria of n≥ 10 H cm^-3, T≤ 200 K, and z ≥800 pc are taken as molecular clouds and have a molecular mass of ∼ 850 M⊙.
However, these clouds cover a region larger than the MW-C1 and MW-C1, and we should compare parameters at same scale.
If we choose the densest central clouds (diameter ∼30 pc, i.e., 60 cells) as the counterpart, the mass can better match the observation.
The clustering of the clouds is also considered in the estimation, in which some cells with appropriate density and temperature will still be excluded, if there is not any cloud cell within the surrounding 0.5 pc.
We also estimate the present mass-weighted mean velocity of ∼ 190 km s^-1 for all clouds (n≥ 1 H cm^-3 and T≤ 10^4 K), while the mean velocity over the past 7 Myr is ∼ 130 km s^-1, both a little lower than the observation.
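A sketch of how such diagnostics can be read off gridded output is given below; the array names, the random mock data and the uniform-cell-mass assumption are ours and only illustrate the masking and mass-weighting, not the actual analysis pipeline:

import numpy as np

rng = np.random.default_rng(0)                      # mock cell data standing in for simulation output
n = 10.0**rng.uniform(-2, 3, 100000)                # number density [H cm^-3]
T = 10.0**rng.uniform(1, 7, 100000)                 # temperature [K]
vz = rng.uniform(0.0, 400.0, 100000)                # vertical velocity [km/s]
z = rng.uniform(0.0, 1000.0, 100000)                # height above the plane [pc]
cell_mass = n                                       # proportional to m_H * n * V_cell for equal cells

cold = (n >= 1.0) & (T <= 1.0e4)                    # atomic + molecular cloud cells
mol = (n >= 10.0) & (T <= 200.0) & (z >= 800.0)     # "HVMC" cells

M_cold, M_mol = cell_mass[cold].sum(), cell_mass[mol].sum()
v_mean = np.average(vz[cold], weights=cell_mass[cold])   # mass-weighted mean velocity
print(M_cold, M_mol, v_mean)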
In the vertical magnetic field, the clouds can propagate to 1 kpc without destruction, and still keep a considerable mass even larger than the observed HVMCs.
However, the mean velocity is a little smaller than the typical value.
To increase the velocity, a straightforward method is to increase the supernovae explosion frequency, but the frequency used in our work is already a little higher than the standard value.
In addition, it is unexpected that including the horizontal magnetic field can also increase the velocity, which will be illustrated in the next section.
Assuming a lower MCs or ISM density is also practical, so we test a case with a lower density of the initial MC in Section <ref>.
In summary, the fiducial run can indeed explain the acceleration of MCs at high latitude, while some features cannot be reproduced perfectly.
§.§ The run with horizontal magnetic field
We show the column density evolution of f100n1500h in Figure <ref>, while the density-velocity distribution at 5 Myr is shown in Figure <ref>.
Similar to f100n1500v, the MC was blown to be a thin filament initially, but gradually some gas was stripped.
At 2 Myr, a pioneer high-velocity clump separated from the main cloud, but run outside the simulation box at 3 Myr.
With the gas stripping, the MC showed a more irregular shape and finally crushed to several clumps.
These clumps have lower densities and higher velocities, but can be still taken as molecular clouds.
Especially, there is the second high-velocity clump separating from the main cloud after 4 Myr and reaching ∼ 1 kpc after 5 Myr, with a mass-weighted mean velocity of ∼ 340 km s^-1, a total mass of ∼ 700 M_⊙ (n≥ 1 H cm^-3) and a molecular mass of ∼ 100 M_⊙ (n≥ 10 H cm^-3).
The velocity is higher, but the masses are both lower than the observation's.
By comparing with f100n1500v, we find a horizontal magnetic field can stimulate the acceleration and the crushing of the MCs, which is possibly caused by the magnetic tension force vertical to the Galactic plane, i.e., the magnetic draping, a ubiquitous mechanism already found in the launching of clouds <cit.>.
The outflow wind can compress the MCs and the surrounding ISM, then amplify the local magnetic field, i.e., the magnetic tension.
The magnetic field can help to efficiently accelerate the MCs, while some MCs material will flow along the magnetic field, even run outside of the simulation box.
As a result, the MCs can be pushed to high velocity at high latitude, but lost much mass.
In addition, if the magnetic field includes more horizontal components, the clouds can be further dispersed at large scale, which can produce some smaller clouds than those in f100n1500v.
These clouds may be more similar to the observed MW-C1 and MW-C2.
In fact, the magnetic field in the Galactic center is complicated, while the vertical component is more important <cit.>.
At present, there is no standard magnetic field model, so we test the two runs to study the influence of the magnetic field direction on the simulation.
In terms of the two runs, a mixed magnetic field would likely better explain the observed properties of the HVMCs.
§.§ The run without magnetic filed
We show the column density evolution of f100n1500n in Figure <ref>, and the density-velocity distribution after 5 Myr in Figure <ref>.
There is no separation of large clumps. Instead, lots of small clumps gradually diffuse from the main cloud, consistent with the simulation results of <cit.>.
The main cloud is slower and will be depleted after 5 Myr, roughly consistent with the crushing-time estimate; as a result, the MCs will not reach 1 kpc.
In other words, in comparison with f100n1500v and f100n1500h, the magnetic field can indeed protect the clouds well.
However, Figure <ref> illustrates the densest regions are significantly denser, and Figure <ref> also shows there are more high-density clouds (n ≥ 1000 H_2 cm^-3) than the runs with magnetic field, which indicates the magnetic field stimulates the destruction of the high-density clouds.
This effect may be attributed to the increased turbulence resulting from the presence of the magnetic field, facilitating a more efficient mixing of the MCs and interstellar medium (ISM).
As a consequence, high-density clouds share material with low-density regions.
Additionally, the clumps grow larger and exhibit prolonged survival but possess lower densities.
Consequently, the local density of the clouds is diminished in the presence of a magnetic field.
In conclusion, the magnetic field plays a crucial role in the formation of HVMCs.
However, it should be noted that the magnetic field does not always increase the density of MCs and can also disperse some of the densest MCs at smaller scales.
§.§ The run with lower density and lower explosion frequency
We show the results of f100n1000v in Figure <ref> and Figure <ref>.
The initial cloud was also blown to a filament, but a little wider than previous runs.
A large amount of gas was stripped at 2 Myr and dissipated at 3 Myr.
Then the main cloud was divided into two clouds, of which the faster one almost reached 1 kpc at 5 Myr, while the other gradually disappeared.
After 5 Myr, the simulation box has a total mass of ∼ 1100 M_⊙ (n≥ 1 H cm^-3, T≤ 10^4 K) and a molecular mass of ∼ 500 M_⊙ (n≥ 10 H cm^-3, T≤ 200 K, z ≥800 pc), roughly consistent with the MW-C2.
The mass-weighted mean velocity is ∼ 290 km s^-1, also similar to the observation.
However, the diameter of the whole cloud is larger than the observation, so the density is lower.
By comparing with f100n1500v, we can estimate a central density between 1000 and 1500 cm^-3 for the initial cloud.
Meanwhile, if the magnetic field includes more horizontal components, the clouds can be further dispersed to some small clouds.
In other words, if f100n1500v uses a lower central density and more horizontal magnetic field, it can better match the observation.
However, the primary focus of this study is to investigate whether the MCs can be accelerated to high velocities at high latitudes, and the current findings adequately address this inquiry.
Moreover, it is important to note that the parameters for MW-C1 and MW-C2 are only approximations derived from a simplified biconical wind model, and the completeness of the HVMCs sample remains uncertain.
There are only two detected HVMCs in the Galactic center, so the main feature of HVMCs is actually still ambiguous.
As a result, conducting an exhaustive search of the parameter space is unnecessary at this stage.
The results of f200n1000v are shown in Figure <ref> and Figure <ref>.
The gas were gradually stripped, but the cloud can still survive to reach 1 kpc at 8 Myr.
At 8 Myr, the simulation box has a total clouds mass of ∼ 640 M_⊙ ( n≥ 1 H cm^-3, T≤ 10^4 K), a molecular mass of ∼ 400 M_⊙ (n≥ 10 H cm^-3, T≤ 200 K, z ≥800 pc) and a mass-weighted mean velocity is ∼ 180 km s^-1, roughly consistent with MW-C1, which indicates the cloud can also well survive, even if the explosion frequency of the supernovae is lower than 10 kyr^-1.
In fact, the explosion frequency should vary with the evolution of the cloud, and the features of resultant clouds are also dependent on the variation.
We summarize all results in Table <ref> and visualize it in Figure <ref>, which will be further discussed in Section <ref>.
The criterion (n≥ 10 H cm^-3, T≤ 200 K, z ≥800 pc) used to choose the molecular components is not always reasonable, and some atomic hydrogen can also survive under such a criterion.
We therefore also show the results with a stricter criterion (n≥ 100 H cm^-3, T≤ 150 K, z ≥800 pc) for an error estimation.
It is actually difficult to accurately estimate the realistic velocity of the clouds based on the current observation.
We take the velocity along the sightlines as the lower limit, and the outflow velocity as the standard velocity.
The outflow velocity is estimated based on a biconical model, which is also an important reference for our simulations, so we use the outflow velocity to directly compare with the simulations.
§ DISCUSSION
In the preceding sections, we have presented 3D simulations that illustrate the long-term hydrodynamic evolution of MCs propelled by subsequent supernova explosions. These simulations incorporate simplified, yet sufficiently realistic physical conditions of both the MCs and the surrounding environment. The first three simulation runs, which represent the evolution with vertical, horizontal, and no magnetic field, exhibit varying degrees of success and shortcomings in replicating the primary observed characteristics of MW-C1 and MW-C2. The last two runs show the simulations work well in a wide parameter space. In this section, we analyze the outcomes of these simulations and discuss their implications for our comprehension of the enigmatic ecosystem in the Galactic center.
§.§ Formation and evolution of the HVMCs
Table <ref> demonstrates that the total mass of the four runs with magnetic field aligns with the observed HVMCs, while f100n1500h exhibits a lower molecular mass and higher velocity.
Figure <ref> clearly indicates that f100n1000v provides the closest match to the two HVMCs, though the other two runs also show rough consistency with the observations.
However, the key point we want to make is that the HVMCs can indeed be accelerated to high velocities without disruption, which is also reflected by the other three runs.
We can assume the position of the central point shown in Figure <ref> can be interpolated accordingly, if we change one of the parameters, such as the direction of the magnetic field, the supernovae explosion frequency and the density of the initial cloud, based on which we can roughly estimate the dependence of their positions on these parameters.
For example, by comparing f100n1500v with f100n1500h and drawing a line between the two central points, we can expect a point will be located between the two MCs, when we only modify the direction of the magnetic field.
Similarly, we can get a cloud with higher density and velocity than the two MCs by properly increasing the supernova frequency or decreasing the initial cloud density.
All four runs with magnetic field can reproduce the HVMCs, so the HVMCs can be indeed formed by the acceleration of the starburst in the Galactic center.
On the other hand, these results indicate the magnetic field is important and the MCs can well survive the shock of supernovae even at a scale of ∼ 1 kpc.
<cit.> claim that cold gas with temperatures of 10^2 ∼ 10^4 K cannot survive a hot Galactic wind, but they neglect the magnetic field and focus on the process at larger scales, so this does not conflict with our results.
Of course, in our simulations, there are also some features inconsistent with the observations, so we will try to clarify them in this section.
To better study the evolution of the HVMCs, we show the mass evolution of all runs in Figure <ref>.
The criterion for distinguishing between various components follows the description presented in Section <ref>.
There are three kinds of mass, the total gas mass (atoms + molecules), the total molecular mass and the mass of molecular gas with a latitude higher than 800 pc.
For simplicity, we take the last one as the molecular mass of the HVMCs.
This analysis of the mass evolution of the HVMCs shows that the total gas and molecular mass of all five runs gradually decrease over time due to ionization, stripping by the hot wind and outflows from the simulation box.
However, f100n1500h and f100n1000v show a rapid decline after 3.5 Myr and 5.5 Myr, respectively.
For f100n1500h, this is caused by the dissipation of the pioneer high-velocity clump, which quickly diffuses and runs out of the left and right edges of the simulation box along the horizontal magnetic field.
Similar to f100n1500h, f100n1000v also has a high-velocity clump running out of the simulation box, but from the upper edge after 5.5 Myr.
As for the total molecular clouds, they are stripped and dissociated rapidly at the beginning, and maintain a steady decrement.
Finally, f100n1500h and f100n1000v lost most of their molecular gas after 6 Myr, while a large amount still survives in f100n1500v, f100n1500n and f200n1000v.
In f100n1500n, the clouds, almost totally crushed after 5 Myr, cannot approach 800 pc, so they cannot form the HVMCs.
In f100n1500v, the total gas mass and HVMC mass are much higher than those of MW-C2 after 7 Myr, while in f200n1000v, they are comparable to MW-C1 at 8 Myr.
At this stage, the total molecular mass is totally composed of the HVMC mass, so all of the molecular components have propagated beyond 800 pc.
In f100n1500v, the clouds have lower-velocity and larger volume than the observed HVMCs, but the mean density is similar.
Therefore, if the clouds crushed as some higher-velocity small clouds similar to the observed HVMCs, this run can better match the observation.
It happens that the clouds will diffuse to be some small clumps with a mass-weighted mean velocity of ∼ 340 km s^-1 in f100n1500h, though the velocity becomes a little higher than the observations.
Therefore, it is natural to expect that a magnetic field including both vertical and horizontal components will help to produce better-matched HVMCs in the simulation. Such a configuration is actually more reasonable for the real magnetic field in the Galactic center.
A general model for the whole Milky Way also shows the magnetic field is parallel to the Galactic plane at lower latitude and gradually tend to be perpendicular at higher latitude <cit.>, so the expectation is sensible.
In addition, if a higher resolution (4 times) is applied in the simulation, the clouds will also crush to be smaller clumps <cit.>, of which velocity and total mass are similar to those in the lower resolution.
Therefore, it will be more consistent with the observations, since MW-C1 and MW-C2 are both smaller than the clouds produced in f100n1500v, f100n1000v and f100n1500h.
In other words, the resolution used in our work is adequate to explain the formation of HVMCs, if we do not take the volume of the HVMCs as an essential feature.
Of course, using a low resolution, the simulations cannot accurately describe the instability and the mixing between the cold gas and the hot wind, which may stimulate the crushing of clouds, but the advection of hot high-enthalpy gas into the mixing layer actually can result in growth and acceleration of the cold phase <cit.>.
The observations show many HVCs distributed over a large latitude from ∼ 100 pc to ∼ 10 kpc <cit.>, though most of the HVCs are located in the lower 2 kpc.
In our simulation, we only consider the starburst happening in a small region and include only one initial cloud, which limits the number of HVCs formed in the simulation box. However, the main focus of our work is to investigate the formation mechanism of HVMCs, rather than reproducing the exact number and distribution of observed HVCs. The fact that we can reproduce the key features of HVMCs observed in the Milky Way, such as their high velocity and high density, suggests that our proposed formation mechanism is plausible and can contribute to the understanding of the origin of HVCs in general. Further studies including more initial clouds and considering the starburst happening over a larger region would be needed to fully reproduce the observed distribution of HVCs.
The MCs in the run without magnetic field will be crushed in a short term, so the magnetic field is essential for the formation of the HVMCs.
The magnetic field can wrap and protect the MCs, a mechanism known as magnetic draping, which is significant over a large range of scales, from the small scale of comets to the large scale of galaxy clusters <cit.>. Therefore, it possibly contributes to the survival of our HVMCs.
Nevertheless, Figure <ref> shows that the magnetic field surrounding the cold clouds is chaotic and does not wrap the clouds well, and our zoom-in check shows the same result, which is possibly caused by the low resolution; the wrapping may only be obvious at much smaller scales.
<cit.> try to study the survival of HVCs in the Galactic halo, and claim that magnetic fields suppress hydrodynamic instabilities and the growth of small-scale structures, which is also responsible for the protection of the HVMCs in our simulations.
In addition, the direction of the magnetic field can also influence the evolution of the HVMCs, which can be read from Figure <ref>, <ref> and <ref>.
In a vertical magnetic field, the clouds can keep high density and propagate to high latitude.
<cit.> also conclude that the vertical magnetic field can well protect a cold cloud, but the cloud they used actually has a temperature of 10^4 K and a density of 0.1 cm^-3, totally different from the parameters used in our simulations.
In a horizontal magnetic field, the clouds will lose an amount of mass, but still can propagate to high latitude without crushing.
If there is no magnetic field, the clouds cannot propagate to high latitude.
The importance of direction is also discussed by <cit.> & <cit.>, though the properties of clouds, winds, magnetic field and ISM they used are different from ours.
Additionally, our simulations consistently demonstrate that the reverse shock generated by the interaction between the clouds and the Galactic wind effectively balances the forward shock at later stages. As the clouds propagate, the forward shock of the Galactic wind encounters resistance from the clouds, leading to the gradual formation of stronger reverse shocks. This phenomenon is clearly observed in the density-velocity distribution plots presented in Figure <ref> and <ref>. It is expected that at this late stage, the clouds have attained their maximum velocity within the framework of our model, and further acceleration becomes inefficient. Furthermore, we note that the star formation rate (SFR) employed in our model represents an upper limit within reasonable estimations, ensuring that the supernova explosion frequency is also maximized. Among the runs, f100n1500h stands out with the highest velocity exceeding 400 km s^-1, although it should be noted that the assumption of a complete horizontal magnetic field in the Galactic center is not physically realistic. Thus, if our model accurately captures the physics, we predict that the maximum velocity attainable by the HVMCs would be approximately 400 km s^-1.
Overall, the simulation results provide a promising framework for explaining the formation of HVMCs and their connection to HVCs. The HVMCs can indeed originate from a starburst in the Galactic center, which is reasonable in a large parameter space.
The magnetic field can protect the MCs and contribute to their acceleration, but the acceleration of MCs is limited at high latitude.
However, there are still many uncertainties and complexities involved in the process, such as the role of magnetic fields, the effects of different initial conditions, and the possible interactions with other structures in the Galactic center. Therefore, further investigations are needed to refine and extend the current model, and to test its validity against more detailed observations and simulations.
§.§ The metallicity of HVCs
The formation of HVMCs is tightly associated with the HVCs', but the origin of HVCs is also ambiguous.
The HVCs are usually defined as interstellar gas clouds moving at velocities substantially different (by up to several hundred km s^-1) from the rotation of the disk of the Milky Way, and they are distributed throughout the Galactic halo.
Most of them have lower metallicity than what we find in the disk, so they may come from the Galactic halo or intergalactic medium.
However, some of them, especially in the Fermi bubbles, have much higher metallicity, so they may be ejected from the Galactic disk.
The HVCs in the Fermi bubbles are usually called as FB HVCs, which will be primarily discussed in this section.
It has been suggested that the HVCs are composed of diffuse inflowing gas and collimated outflowing material, which are likely manifestations of a galaxy-wide gas cycle triggered by stellar feedback, known as the galactic fountain <cit.>.
The feedback and the interaction with surrounding galaxies both influence the material cycle of the Milky Way, in which most of the FB HVCs should be regarded as part of the collimated outflow <cit.>, because stellar activity in the Galactic center is stronger than in the disk.
However, <cit.> found the FB HVCs have a wide range of metallicities from ≤ 0.2 of solar to ∼ 3.2 Z_⊙, thus the gas from the halo may also mix with the local ISM and ejecta from the disk.
The supersolar metallicity of ∼ 3.2 Z_⊙ implies that the HVCs are initially metal-rich, or there is a metal-enrichment process during the acceleration of the HVCs, since the Galactic ISM metallicity is usually ∼ 1 solar <cit.>.
Therefore, it is natural to assume that the FB HVCs with high metallicity are driven by many sequential supernova explosions, which can simultaneously accelerate the clouds and supply heavy elements, a process that may also occur in other galaxies <cit.>.
The SMBH activity may also drive the HVCs, but a metal-enrichment process, i.e., the supernovae explosions, is always necessary.
The origin of HVMCs is likely analogous to FB HVCs, but this has yet to be confirmed due to the lack of information about their metallicity. To investigate this further, we examined the ratio of ejecta mass to cloud mass in our simulation, as shown in Figure <ref>. The ratio generally increases over time for all runs, but there is a peak at 5 Myr for f2001000v, which may be due to the low-metallicity cloud material flowing out of the simulation box.
In f100n1500n, the clouds contain more ejecta material since they are slow, resulting in more efficient mixing. Assuming an initial cloud metallicity of 1 Z_⊙ and a supernova ejecta metallicity of 6 Z_⊙, a standard ratio of 0.1 would yield a final cloud metallicity of 1.5 Z_⊙, still lower than the observed 3.2 Z_⊙ in some FB HVCs <cit.>.
This suggests that the initial clouds were possibly already metal-rich before being driven to become HVCs. While SMBH activity may also drive HVCs, a metal-enrichment process such as supernova explosions is possibly necessary to explain the high metallicity of some FB HVCs.
If the model is correct, the role of HVCs in the galaxy-wide gas cycle can be understood. The low-metallicity HVCs originating from the halo or intergalactic medium are pulled by the gravitational potential of the Milky Way and surrounding galaxies, while high-metallicity HVCs are driven by galactic fountains that are energized by supernovae explosions in our Milky Way or the SMBH in the Galactic center. The FB HVCs consist of both types of HVCs, but the HVMCs embedded in FB HVCs should be driven by the fountains, which could be further confirmed by future metallicity analysis based on new ultraviolet absorption observations.
§.§ The relation between HVMCs and feedback relics
It is interesting to ask whether the HVMCs have a causal relation with the radio bubbles <cit.> and X-ray chimneys <cit.> found on smaller scales, or the Fermi bubbles <cit.> and eROSITA bubbles <cit.> found on much larger scales.
We note that the age of the HVMCs inferred from our simulations is a few Myr, roughly consistent with the dynamical timescale of a few Myr for both of the radio bubbles and the Fermi bubbles originally suggested by <cit.> and <cit.>, respectively.
However, their timescales have not actually been well constrained; the radio bubbles may be younger <cit.> and the Fermi bubbles may be much older <cit.>.
In particular, <cit.>'s estimation was based on the assumption of a constant expansion velocity of the bubbles, which is implausible, hence a shorter timescale is expected.
In the context of the supernova-based model for the origin of the radio bubbles/chimneys <cit.>, the radio bubbles would be a dynamically younger and independent structure simply evolving in the interior of the Fermi/eROSITA bubbles, which themselves were formed by older activities in the Galactic center.
However, the HVMCs should also originate from a similar activity, which implies there are three independent activities, respectively correlated with the radio bubbles/X-ray chimneys, the HVMCs and the Fermi bubbles/eROSITA bubbles.
The difference is that, in our simulations, the HVMCs can hardly propagate to much higher latitudes, because their acceleration rate drops significantly at high latitude.
If the three independent activities are not related to each other, we have to invoke three models to explain the structures at the three scales separately, which leads to an inelegant physical picture.
Alternatively, as suggested by <cit.>, the X-ray chimney/the radio bubbles may be a channel that transports energy from the Galactic center to the high-latitude region currently occupied by the Fermi bubbles, and the HVMCs are the manifestation of the transportation process, which is a more elegant unified model.
In fact, the HVCs can spread from ∼ 100 pc to ∼ 10 kpc <cit.>, though most of the FB HVCs are located in the lower 2 kpc, which may be the clue connecting the feedback relics at different scales.
In this case, the channel should have existed for tens of Myr, so that star formation in the Galactic center can be sufficient to supply the total energy content of the Fermi bubbles, ∼ 10^56 erg <cit.>.
However, such a picture contradicts the capped morphology of the radio bubbles (the southern bubble is not obviously capped in X-rays; ), which, according to our simulations, is naturally explained as the expanding shell of a newly born outflow.
This picture may be reconciled if star formation in the Galactic center has been episodic on a timescale of ∼10 Myr <cit.>; the X-ray chimney/radio bubbles are then (re)established and the HVCs/HVMCs (re)accelerated by consecutive generations of mini-starbursts and the collapses in between.
Of course, over such a long interval, the activity of Sgr A* can also play an important role in contributing to the formation of these relics, especially in view of the fact it was likely much more active in the recent past <cit.>.
In a hybrid scenario, Sgr A*, with supernovae and even stellar winds, can simultaneously sustain the channel and transport energy to larger scales, implying X-ray emission beyond the edge of the radio bubbles, which is also suggested by <cit.>.
For example, an episode of AGN activity produces the large-scale structure and triggers the surrounding starburst; the newly formed massive stars then drive strong stellar winds and explode as supernovae to produce the small-scale structure.
Possibly, the stellar winds and supernova shock waves can also trigger tidal disruption events around the central SMBH, producing an even smaller-scale structure.
In conclusion, our findings suggest the existence of a potentially stable channel in the Galactic center, driven by a combination of diverse activities, which episodically accelerates gas clouds and transports energy to higher latitudes.
The HVMCs/FB HVCs are also an ingredient of the channel, but the HVMCs usually stay at low latitude because of the higher probability of being crushed at higher latitude. This pattern offers a comprehensive explanation for the interrelation between the various feedback remnants without requiring new models.
§ SUMMARY
To investigate the formation of HVMCs in our Galactic center, we perform simulations utilizing a starburst model, in which HVMCs originate from low-latitude molecular clouds accelerated by subsequent supernova explosions. Previous studies have raised concerns about the destruction of molecular clouds by the violent activity in the Galactic center, making it challenging for them to reach higher latitudes and velocities without disruption. However, our simulation results demonstrate that this problem can be resolved within a wide parameter space, given the appropriate local environment.
The main findings are summarized as follows:
* The HVMCs can indeed be formed in a starburst in the Galactic center.
* The magnetic field can protect the molecular clouds.
* The magnetic pressure, enhanced by the compression of shock wave, can contribute to accelerating the clouds.
* The acceleration rate of HVMCs largely decreases at high latitude, because the reverse shock, generated by the interaction between the shock wave and the molecular clouds, gradually balances the forward shock from the supernovae. Therefore, we predict that the largest velocity the HVMCs can reach is ∼ 400 km s^-1.
* The mixing between the clouds and the supernova ejecta is more efficient at low latitude, and this process can significantly affect the metallicity of HVCs.
* HVMCs/FB HVCs potentially serve as ingredients in a channel sustained by diverse activities in the Galactic center, intermittently accelerating gas clouds and transporting energy to higher latitudes.
Due to the limited size of the simulation box, the subsequent evolution of HVMCs beyond 1 kpc latitude remains uncertain. Furthermore, the small box size restricts us to initializing only one cloud, resulting in inconsistent HVC number density and distribution compared to observations. Future efforts involve expanding the simulation box, simplifying supernova explosion settings, and implementing adaptive mesh refinement to provide a more comprehensive understanding of the phenomenon.
§ ACKNOWLEDGEMENTS
We acknowledge the cosmology simulation database (CSD) in the National Basic Science Data Center (NBSDC) and its funds the NBSDC-DB-10. We acknowledge the support from the National Key Research and Development Program of China (2022YFA1602903), from the National Science Foundation of China (12147103, 12273010), and from the Fundamental Research Funds for the Central Universities(226-2022-00216).
§ DATA AVAILABILITY
The simulation data underlying this article may be shared upon reasonable request to the corresponding author.
mnras
§ COOLING FUNCTION
The cooling process can significantly influence the evolution of HVMCs, but an accurate tabulated cooling function would consume far more computational resources.
Therefore, in the simulations we adopt a piecewise cooling function (see Figure <ref>) that roughly approximates the tabulated one.
|
http://arxiv.org/abs/2307.00424v1
|
20230701204312
|
Adaptive Algorithms for Relaxed Pareto Set Identification
|
[
"Cyrille Kone",
"Emilie Kaufmann",
"Laura Richert"
] |
stat.ML
|
[
"stat.ML",
"cs.LG",
"68T05"
] |
Adaptive Algorithms for Relaxed Pareto Set Identification
==========================================================
In this paper we revisit the fixed-confidence identification of the Pareto optimal set in a multi-objective multi-armed bandit model. As the sample complexity to identify the exact Pareto set can be very large, a relaxation allowing to output some additional near-optimal arms has been studied. In this work we also tackle alternative relaxations that allow instead to identify a relevant subset of the Pareto set. Notably, we propose a single sampling strategy, called Adaptive Pareto Exploration, that can be used in conjunction with different stopping rules to take into account different relaxations of the Pareto Set Identification problem. We analyze the sample complexity of these different combinations, quantifying in particular the reduction in sample complexity that occurs when one seeks to identify at most k Pareto optimal arms. We showcase the good practical performance of Adaptive Pareto Exploration on a real-world scenario, in which we adaptively explore several vaccination strategies against Covid-19 in order to find the optimal ones when multiple immunogenicity criteria are taken into account.
§ INTRODUCTION
In a multi-armed bandit model, an agent sequentially collects samples from several unknown distributions, called arms, in order to learn about these distributions (pure exploration), possibly under the constraint to maximize the sample collected, viewed as rewards (regret minimization). These objectives have been extensively studied for different types of univariate arms distributions <cit.>. In this paper, we consider the less common setting in which arms are multi-variate distributions. We are interested in the Pareto Set Identification (PSI) problem. In this pure exploration problem, the agent seeks to identify the arms that are (Pareto) optimal, i.e. such that their expected values for all objectives are not uniformly worse than those of another arm.
We formalize this as a fixed-confidence identification problem: in each round t the agent selects an arm A_t using an adaptive sampling rule and observes a sample _t ∈^D from the associated distribution. It further uses an adaptive stopping rule τ to decide when to stop sampling and output a set of arms Ŝ_τ which is her guess for (an approximation of) the true Pareto set ^⋆. Given a risk parameter δ∈ (0,1), this guess should be correct with high probability, e.g. satisfy (Ŝ_τ = ^⋆)≥ 1 - δ for exact Pareto set identification, while requiring a small sample complexity τ. This generalizes the well-studied fixed-confidence Best Arm Identification (BAI) problem <cit.> to multiple objectives.
Our motivation to study multi-objective adaptive identification stems from the design of adaptive early-phase clinical trials. In phase I/II trials, the effects of a molecule in humans are explored, and several biological endpoints may be assessed at the same time as indicative markers of efficacy. In particular, in the context of vaccine development, early-phase trials usually assess multiple immunogenicity endpoints (i.e. various markers of the effects of the vaccine on the immune system, such as different facets of antibody responses or other immune parameters). In the absence of a known correlate of protection during early clinical development, these endpoints may not have a clear a priori hierarchy, may not all be correlated, which makes an examination of the Pareto set of different vaccinal strategies particularly relevant. In addition, given the availability of various vaccine platforms (such as mRNA vaccines, viral-vector vaccines, protein vaccines), as exemplified by Covid-19 vaccines, there may be a need to adaptively screen the various resulting vaccine strategies to select the most promising ones.
Apart from clinical trials, active Pareto Set Identification can be meaningful in many real-word contexts, and we refer the reader to the various examples given by <cit.>, such as hardware or software design.
For many applications, the sample complexity of exact PSI can be prohibitive, either when there are many close to optimal arms or when the Pareto set is very large, and different relaxations have been considered in the literature <cit.>. Going back to our application, in an adaptive trial that aims at pre-selecting a certain number of treatments or vaccine strategies for further investigations in clinical trials, practical constraints (the cost and feasibility of the trials) impose a constraint on the maximal number of interesting arms that can be identified. This motivates the introduction of a new setting where the agent is asked to identify at most k Pareto optimal arms. Interestingly the sampling rule that we propose for this setting can be used to solve (some generalizations of) other relaxations considered in the literature.
Related work The work most closely related to ours is that of Auer et al. <cit.>, who propose a relaxation, which we refer to as ε_1-PSI: their algorithm returns a set Ŝ that contains w.h.p. all the Pareto optimal arms and possibly some sub-optimal arms, which, when increased by ε_1 coordinate-wise, become Pareto optimal. For arms that have sub-Gaussian marginals, they provide an instance-dependent sample complexity bound scaling with some notion of sub-optimality gap for each arm. The work of Zuluaga et al. <cit.> studies a structured variant of fixed-confidence PSI in which the means are regular functions of the arms' descriptors. They use Gaussian process modeling and obtain worst-case sample complexity bounds. In particular <cit.> considers the identification of an ε-cover of the Pareto set, which is a representative subset of the (ε)-Pareto set that will be related to our (ε_1,ε_2)-PSI criterion. The algorithms of <cit.> and those of <cit.> in the unstructured setting[PAL relies on confidence intervals that follow from Gaussian process regression, but can also be instantiated with simpler un-structured confidence intervals as those used in our work and in Auer's] have the same flavor: they sample uniformly from a set of active arms and remove arms that have been found sub-optimal (or not representative). <cit.> further adds an acceptance mechanism to stop sampling some of the arms that have been found (nearly-)optimal and are guaranteed not to dominate an arm of the active set.
In this paper, we propose instead a more adaptive exploration strategy, which departs from such accept/reject mechanisms and is suited for different types of relaxation, including our novel k-relaxation.
Adaptive Pareto Exploration (APE) leverages confidence intervals on the differences of arms' coordinates in order to identify a single arm to explore, in the spirit of the LUCB <cit.> or UGapEc <cit.> algorithms for Top-m identification in (one-dimensional) bandit models. These algorithms have been found to be preferable in practice to their competitors based on uniform sampling and eliminations <cit.>, an observation that carries over to APE. Besides the multi-dimensional observations, we emphasize that a major challenge of the PSI problem with respect to, e.g., Top-m identification is that the number of arms to identify is not known in advance. Moreover, when relaxations are considered, there are multiple correct answers. In the one-dimensional setting, finding optimal algorithms in the presence of multiple correct answers is notoriously hard, as discussed by the authors of <cit.>, and their lower-bound based approach becomes impractical in our multi-dimensional setting. Finally, we remark that the k-relaxation can be viewed as a multi-objective extension of the problem of identifying any k-sized subset of the m best arms in a standard bandit model <cit.>.
Beyond Pareto set identification, other interesting multi-objective bandit identification problems have been studied in the literature. For example <cit.> propose an algorithm to identify some particular arms in the Pareto set through a scalarization technique <cit.>. The idea is to turn the multi-objective pure-exploration problem into a single-objective one (unique optimal arm) by using a real-valued preference function which is only maximized by Pareto optimal arms (see e.g <cit.> for some examples of these functions).
In practice, a family of those functions can be used to identify many arms of the Pareto set but it is not always possible to identify the entire Pareto set using this technique (see e.g <cit.> for weighted sum with a family of weights vectors). In a different direction, the authors of <cit.> introduce the feasible arm identification problem, in which the goal is to identify the set of arms whose mean vectors belong to a known polyhedron P⊂^D.
In a follow up work <cit.>, they propose a fixed-confidence algorithm for finding feasible arms that further maximize a given weighted sum of the objectives.
In clinical trials, this could be used to find treatments maximizing efficacy (or a weighted sum of different efficacy indicators), under the constraint that the toxicity remains below a threshold. However, in the presence of multiple indicators of biological efficacy, choosing the weights may be difficult, and an examination of the Pareto set could be more suitable.
Finally, some papers consider extensions of the Pareto optimality condition. The authors of <cit.> tackle the identification of the set of non-dominated arms for any partial order defined by a polyhedral ordering cone in ℝ^D (the usual Pareto dominance corresponds to using the cone defined by the positive orthant ℝ^D_+),
and they provide worst-case sample complexity bounds in the PAC setting.
The work of <cit.> studies the identification of the set of non-dominated elements in a partially ordered set under the dueling bandit setting, in which the observations consists in pairwise comparison between arms.
Outline and contributions
First, we formalize in Section <ref> different relaxations of the PSI problem: ε_1-PSI, as introduced by <cit.>, ε_1,ε_2-PSI, of which a particular case was studied by <cit.> and ε_1-PSI-k, a novel relaxation that takes as input an upper bound k on the maximum number of ε_1-optimal arms that can be returned.
Then, we introduce in Section <ref> Adaptive Pareto Exploration, a simple, adaptive sampling rule which can simultaneously tackle all three relaxations when coupled with an appropriate stopping rule that we define for each of them. In Section <ref>, we prove high-probability upper bounds on the sample complexity of APE under the different stopping rules. For ε_1-PSI, our bound slightly improves upon the state of the art. Our strongest result is the bound for ε_1-PSI-k, which leads to a new notion of sub-optimality gap, quantifying the reduction in sample complexity that is obtained. Then, Section <ref> presents the results of a numerical study on synthetic datasets, one of them being inspired by a Covid-19 vaccine clinical trial. It showcases the good empirical performance of APE compared to existing algorithms, and illustrates the impact of the different relaxations.
§ PROBLEM SETTING
In this section, we introduce the Pareto Set Identification (PSI) problem and its relaxations.
Fix K, D ∈ ℕ^⋆.
Let ν_1, …, ν_K be distributions over ℝ^D with means μ_1, …, μ_K ∈ ℝ^D. Let 𝒜 := [K] := { 1, …, K} denote the set of arms. Let ν := (ν_1, …, ν_K) and 𝒳 := (μ_1, …, μ_K).
We use boldfaced symbols for elements of ℝ^D.
Letting 𝐗 ∈ ℝ^D and u ∈ ℝ, for any d ∈ {1, …, D}, X^d denotes the d-th coordinate of 𝐗 and 𝐗 + u := (X^1+u, …, X^D+u). In the sequel, we will assume that ν_1, …, ν_K have 1-subgaussian marginals [A random variable X is σ-subgaussian if for any λ ∈ ℝ, 𝔼[exp(λ(X-𝔼[X]))] ≤ exp(λ^2σ^2/2).].
Given two arms i, j ∈ 𝒜, i is weakly (Pareto) dominated by j (denoted by μ_i ≤ μ_j) if for any d ∈ {1, …, D}, μ_i^d ≤ μ_j^d. The arm i is (Pareto) dominated by j (μ_i ≼ μ_j or i ≼ j) if i is weakly dominated by j and there exists d ∈ {1, …, D} such that μ_i^d < μ_j^d. The arm i is strictly (Pareto) dominated by j (μ_i ≺ μ_j or i ≺ j) if for any d ∈ {1, …, D}, μ_i^d < μ_j^d.
For ε ∈ ℝ_+^D, the ε-Pareto set ^⋆_ε(𝒳) is the set of ε-Pareto optimal arms, that is:
^⋆_ε(𝒳) := { i ∈ 𝒜 such that ∄ j ∈ 𝒜 : μ_i + ε ≺ μ_j }.
In particular, _0^⋆(𝒳) is called the Pareto set and we will simply write ^⋆(𝒳) to denote ^⋆_0(𝒳). When it is clear from the context, we write ^⋆ (or ^⋆_ε) to denote ^⋆(𝒳) (or ^⋆_ε(𝒳)). The goal of the learner is to identify the Pareto set. By abuse of notation we write ^⋆_ε when ε ∈ ℝ^+ to denote ^⋆_(ε, …, ε).
In each round t = 1, 2, …, the agent chooses an arm A_t and observes an independent draw 𝐗_t ∼ ν_A_t with 𝔼(𝐗_A_t) = μ_A_t. We denote by ℙ_ν the law of the stochastic process (𝐗_t)_t≥1 and by 𝔼_ν the expectation under ℙ_ν. Let ℱ_t := σ(A_1, 𝐗_1, …, A_t, 𝐗_t) be the σ-algebra representing the history of the process. An algorithm for PSI consists in: i) a sampling rule which determines which arm to sample at time t based on the history up to time t-1, ii) a stopping rule τ which is a stopping time w.r.t. the filtration (ℱ_t)_t≥1, and iii) a recommendation rule which is an ℱ_τ-measurable random set Ŝ_τ representing the guess of the learner. The goal of the learner is to make a correct guess with high probability, using as few samples τ as possible. Before formalizing this, we introduce the different notions of correctness considered in this work, depending on parameters ε_1 ≥ 0, ε_2 ≥ 0 and k ∈ [K]. Our first criterion is the one considered by <cit.>.
Ŝ⊂ is correct for _1-PSI if ^⋆⊂Ŝ⊂^⋆__1.
To introduce our second criterion, we need the following definition.
Let _1, _2 ≥ 0. A subset S⊂ is an (_1, _2)-cover of the Pareto set if : S⊂^⋆__1 and for any i∉ S either i∉^⋆ or ∃ j ∈ S such that _i ≺_j + _2.
The ε-accurate set of <cit.> is a particular case of (ε_1, ε_2)-cover for which ε_1 = ε_2 = ε. Allowing ε_1 ≠ ε_2 generalizes the notion of ε-correct set and can be useful, e.g., in scenarios where we want to identify the exact Pareto set (setting ε_1=0) but allow some optimal arms to be discarded if they are too close (as parameterized by ε_2) to another optimal arm already returned. We note however that the sparse cover of <cit.> is not an (ε_1,ε_2)-cover but a "small" subset of nearly optimal arms that represents the Pareto set well. Identifying a sparse cover from samples requires in particular identifying ^⋆_ε_1, hence it cannot be seen as a relaxation of ε_1-PSI.
Ŝ⊂ is correct for (_1, _2)-PSI if it is an (_1, _2)-cover of the Pareto set.
Ŝ⊂ is correct for _1-PSI-k if either i) |Ŝ| = k and Ŝ⊂^⋆__1 or ii) |Ŝ| <k and ^⋆⊂Ŝ⊂^⋆__1 holds.
Given a specified objective (_1-PSI, (_1, _2)-PSI or _1-PSI-k), and a target risk parameter δ∈ (0,1), the goal of the agent is to build a δ-correct algorithm, that is to guarantee that with probability larger than 1-δ, her guess Ŝ_τ is correct for the given objective, while minimizing the number of samples τ needed to make the guess, called the sample complexity.
We now introduce two important quantities to characterize the (Pareto) optimality or sub-optimality of the arms. For any two arms i, j, we let
m(i,j) := min_1≤ d≤ D(μ_j^d - μ_i^d), and M(i,j) := max_1≤ d≤ D (μ_i^d - μ_j^d),
which have the following interpretation. If i ≼ j, m(i,j) is the minimal quantity α ≥ 0 that should be added component-wise to μ_i so that μ_i + α ⊀ μ_j, with α := (α, …, α). Moreover, m(i,j) > 0 if and only if i ≺ j. Then, for any arms i,j, if i ⊀ j, M(i,j) is the minimum quantity α' such that μ_i ≤ μ_j + α', with α' := (α', …, α'). We remark that M(i,j) < 0 if and only if i ≺ j. Our algorithms, presented in the next section, rely on confidence intervals on these quantities.
§ ADAPTIVE PARETO EXPLORATION
We describe in this section our sampling rule, Adaptive Pareto Exploration, and present three stopping and recommendation rules to which it can be combined to solve each of the proposed relaxation.
Let T_k(t) = ∑_s=1^t 𝟙(A_s=k) be the number of times arm k has been pulled by the end of round t and X̄_k(t) := T_k(t)^-1∑_s=1^T_k(t) 𝐗_k,s the empirical mean of this arm at time t, where 𝐗_k,s denotes the s-th observation drawn from ν_k. For any arms i,j ∈ 𝒜, we let
m(i,j,t) := min_d (X̄_j^d(t) - X̄_i^d(t)) and M(i,j,t) := max_d (X̄^d_i(t) - X̄^d_j(t)).
The empirical Pareto set is defined as
S(t) := {i ∈ 𝒜 : ∄ j ∈ 𝒜 : X̄_i(t) ≺ X̄_j(t) }
= {i ∈ 𝒜 : ∀ j ∈ 𝒜∖{i}, M(i,j,t) > 0 } .
§.§ Generic algorithm(s)
Adaptive Pareto Exploration relies on a lower/upper confidence bound approach, similar to UGapEc <cit.> lil'UCB <cit.> or LUCB <cit.>. The idea is to identify at any round two contentious arms and sample both or one of them. To define those, we suppose that there exists confidence intervals [L_i,j^d(t,δ),U_i,j^d(t,δ)] on the difference in expected values for each pair of arms (i,j) and each objective d ∈ D, such that introducing
ℰ_t := ⋂_i=1^K ⋂_j≠ i⋂_d=1^D {L^d_i,j(t, δ) ≤ μ_i^d - μ_j^d ≤ U^d_i,j(t, δ)}, and ℰ = ⋂_t =1^∞ ℰ_t,
we have ℙ(ℰ) ≥ 1-δ. Concrete choices of these confidence intervals will be discussed in Section <ref>.
To ease the notation, we drop the dependency in δ in the confidence intervals and further define
M^-(i,j,t) := max_d L_i,j^d(t) and M^+(i,j,t) := max_d U^d_i,j(t),
m^-(i,j,t) := -M^+(i,j,t) and m^+(i,j,t) := -M^-(i,j,t).
lemmaineqGene For any round t≥ 1, if ℰ_t holds, then for any i,j ∈ 𝒜, M^-(i,j,t) ≤ M(i,j) ≤ M^+(i,j,t) and m^-(i,j,t) ≤ m(i,j) ≤ m^+(i,j,t).
Noting that ^⋆_ε_1 = { i ∈ 𝒜 : ∀ j ≠ i, M(i,j)+ε_1 > 0}, we define the following set of arms that are likely to be ε_1-Pareto optimal:
OPT^ε_1(t) := {i ∈ 𝒜 : ∀ j ∈ 𝒜∖{i}, M^-(i,j,t) + ε_1 > 0 }.
Sampling rule In round t, Adaptive Pareto Exploration samples a_t, the least pulled arm among b_t and c_t
given by
b_t := argmax_{i ∈ 𝒜∖OPT^ε_1(t)} min_{j ≠ i} M^+(i,j,t),
c_t := argmin_{j ≠ b_t} M^-(b_t, j, t).
Indeed, b_t can be seen as the arm which is optimistically the most likely to be (nearly) Pareto optimal among the arms that are not yet identified as (nearly) optimal with high probability, and c_t can be seen as the arm most likely to dominate b_t.
In particular, we show in Appendix <ref> that for D=1, this sampling rule takes a simple form that is close (but not identical) to LUCB and UGapEc.
Stopping and recommendation rule(s) Depending on the objective, Adaptive Pareto Exploration can be plugged in with different stopping rules, that are summarized in Table <ref> with their associated recommendations. To define those, we define for all i ∈, ε_1,ε_2≥ 0,
g_i^ε_2(t) := max_{j ≠ i} ( m^-(i,j,t) + ε_2 𝟙{j ∈ OPT^ε_1(t)} ) and
h_i^ε_1(t) := min_{j≠ i} M^-(i,j,t) + ε_1.
and let g_i(t):= g^0_i(t). Introducing
Z_1^_1(t) := min_i ∈ S(t) h^_1_i(t), and Z_2^_1(t):= min_i ∈ S(t)^∁max(g_i(t), h^_1_i(t)),
for ε_1-PSI, our stopping rule is τ_ε_1 := inf{t ≥ K : Z_1^_1(t) > 0 ∧ Z_2^_1(t) > 0 } and the associated recommendation is (τ_ε_1) where
(t) :=
S(t) ∪{ i ∈ S(t)^∁: ∄ j≠ i: ^-(i,j,t)> 0}
consists of the current empirical Pareto set plus some additional arms that have not yet been formally identified as sub-optimal. Those arms should be (_1)-Pareto optimal.
For (ε_1,ε_2)-PSI we define a similar stopping rule τ__1,_2 where the stopping statistics are respectively replaced with
Z_1^_1, _2(t) := min_i ∈ S(t)max(g_i^_2(t), h^_1_i(t)) and Z_2^_1, _2(t):= min_i ∈ S(t)^∁max(g_i^_2(t), h^_1_i(t))
with the convention min_∅ = +∞, and the recommendation is OPT^ε_1(τ_ε_1,ε_2).
To tackle the ε_1-PSI-k relaxation, we propose to couple τ_ε_1 with an additional stopping condition checking whether
OPT^ε_1(t) already contains k arms. That is, we stop at τ_ε_1^k := min(τ_ε_1, τ^k) where
τ^k := inf{t ≥ K : |OPT^ε_1(t)| ≥ k } with associated recommendation OPT^ε_1(τ^k). Depending on the reason for stopping (τ_ε_1 or τ^k), we follow the corresponding recommendation.
lemmacorrectnessGeneric Assume holds. For _1-PSI (resp. (_1,_2)-PSI , _1-PSI-k), Adaptive Pareto Exploration combined with the stopping rule τ__1 (resp. τ__1,_2, resp. τ__1^k) outputs a correct subset.
We decoupled the presentation of the sampling rule to that of the “sequential testing” aspect (stopping and recommendation). We could even go further and observe that multiple tests could actually be run in parallel, for free. If we collect samples with APE (which only depends on _1), whenever one of the three stopping conditions given in Table <ref> triggers, for any values of ε_2 or k, we can decide to stop
and make the corresponding recommendation or continue and wait for another “more interesting” stopping condition to be satisfied. If holds, a recommendation made at any such time will be correct for the objective associated to the stopping criterion (third column in Table <ref>).
§.§ Our instantiation
We propose to instantiate the algorithms with confidence interval on the difference of pair of arms.
For any pair i,j∈, we define a function β_i,j such that for any d∈ [D], U^d_i,j(t) = _i^d(t) - ^d_j(t) + β_i,j(t) and L^d_i,j(t) = _i^d(t) - ^d_j(t) - β_i,j(t). We take from <cit.> the following confidence bonus for time-uniform concentration:
β_i,j(t):= 2√((C^g(log(K_1/δ)/2) + ∑_a∈{ i ,j}log(4 + log(T_a(t)))) (∑_a∈{ i ,j}1/T_a(t))),
where K_1 := K(K-1)D/2 and C^g(x) ≈ x + log(x) is a calibration function. They result in the simple expressions M^±(i,j,t) = M(i,j,t) ± β_i,j(t) and m^±(i,j,t) = m(i,j,t) ± β_i,j(t). As an example, we state in Algorithm <ref> the pseudo-code of APE combined with the stopping rule suited for the k-relaxation of ε_1-PSI, which we refer to as ε_1-APE-k.
In Appendix <ref>, we also study a different instantiation based on confidence bounds of the form U_i,j(t) = U_i(t) - L_j(t), where [L_i(t), U_i(t)] is a confidence interval on μ_i. This is the approach followed by LUCB for D=1 and by prior work on Pareto identification <cit.>. In practice we advocate the use of the pairwise confidence intervals defined above, even if our current analysis does not allow us to quantify their improvement. For the LUCB-like instantiation, we further derive in Appendix <ref> an upper bound on the expected stopping time of APE for the different stopping rules.
§ THEORETICAL ANALYSIS
In this section, we state our main theorem on the sample complexity of our algorithms and give a sketch of its proof. First let us introduce some quantities that are needed to state the theorem.
The sample complexity of the algorithm proposed by <cit.> for (ε_1)-Pareto set identification scales as a sum over the arms i of 1/(Δ_i∨ε_1)^2, where Δ_i is called the sub-optimality gap of arm i and is defined as follows. For a sub-optimal arm i ∉ ^⋆(𝒳),
Δ_i := max_{j ∈ ^⋆} m(i,j),
which is the smallest quantity that should be added component-wise to μ_i to make i appear Pareto optimal w.r.t. {μ_i : i ∈ 𝒜}.
For a Pareto optimal arm i ∈ ^⋆(𝒳), the definition is more involved:
Δ_i := min_{j ∈ 𝒜∖{i}} Δ_j if ^⋆ = {i},
Δ_i := min(δ_i^+, δ_i^-) otherwise,
where
δ_i^+ := min_{j∈^⋆∖{i}} min(M(i,j), M(j,i)) and δ_i^- := min_{j∈ 𝒜∖^⋆}{(M(j,i))^+ + Δ_j}.
For x ∈ ℝ, (x)^+ := max(x, 0).
We also introduce some additional notation needed to express the contribution of the k-relaxation. Let 1 ≤ k ≤ K.
For any arm i, let ω_i = min_{j ≠ i} M(i,j) and define
ω^k := max^k_{i ∈ 𝒜} ω_i, ^⋆, k := argmax^{1… k}_{i ∈ 𝒜} ω_i,
with the k-th max and first-to-k-th argmax operators. Observe that ω^k > 0 if and only if |^⋆(𝒳)| ≥ k.
theoremmainTheorem
Fix a risk parameter δ ∈ (0, 1) and ε_1 ≥ 0, let k ≤ K and let ν be a bandit with 1-subgaussian marginals. With probability at least 1 - δ,
ε_1-APE-k recommends a correct set for the ε_1-PSI-k objective and stops after at most
∑_{a ∈ 𝒜} (88/Δ̃_a^2) log( (2K(K-1)D/δ) log(12e/Δ̃_a) )
samples, where for each a ∈ 𝒜, Δ̃_a := max(Δ_a, ε_1, ω^k).
First, when k=K, observing that _1-APE-K provides a δ-correct algorithm for ε_1-PSI, our bound improves the result of <cit.> for the _1-PSI problem in terms of constant multiplicative factors and loglogΔ^-1 terms instead of logΔ^-2 and nearly matches the lower bound for the _1-PSI problem (Theorem 17 in <cit.>).
It also shows the impact of the k-relaxation on the sample complexity. In particular, we can remark that for any arm i ∈ ^⋆∖^⋆,k, max(Δ_i, ω^k) = ω^k. Intuitively, this says that we should not pay more than the cost of identifying the k-th optimal arm, where arms are ordered by the ω_i's. A similar result has been obtained for the identification of any k-sized subset of the m best arms <cit.>. But those authors established the relaxation only for the m best arms, while our result shows that even the sub-optimal arms can be sampled less.
In Appendix <ref>, we prove a lower bound showing that in some scenarios, _1-APE-k is optimal for the k-relaxation (up to Dlog(K) and constant multiplicative terms).
In Appendix <ref>, we prove that <ref> without the ω^k terms also holds for (ε_1, ε_2)-APE. This does not justify the reduction in sample complexity observed in our experiments when setting ε_2>0 in (ε_1,ε_2)-PSI, but it at least guarantees that the ε_2-relaxation does not make things worse.
Furthermore, since our algorithm allows ε_1=0, it is also an algorithm for BAI when D=1 and ε_1=0. We prove in Appendix <ref> that in this case the gaps Δ_i match the classical gaps in BAI (<cit.>), and we derive its sample complexity from <ref>, showing that it is similar in theory to LUCB, UGap and LUCB++ <cit.> but has better empirical performance.
Sketch of proof
Using Proposition 24 of <cit.> we prove that the choice of β_i,j in (<ref>) yields () ≥ 1-δ. Combining this result with <ref> proves that _1-APE-k is correct with probability at least 1-δ.
The idea of the remaining proof is to show that under the event , for our different stopping rules, if has not stopped at the end of round t, then a_t has not been explored enough.
lemmasamplwm Let k≤ K. If _t holds and t<τ__1^k then ω^k ≤ 2 β_a_t, a_t(t).
lemmasamplold If _t holds and t<min(τ__1, τ__1^k, τ__1, _2) then Δ_a_t≤ 2 β_a_t, a_t(t).
The following lemma holds for each of the stopping times τ__1, τ__1, _2 and τ__1^k.
lemmasamplComplexEps
If the algorithm has not stopped at the end of round t then _1 ≤ 2 β_a_t, a_t(t).
Combining these lemmas we prove that
τ_ε_1^k 𝟙{ℰ} ≤ ∑_{a ∈ 𝒜} inf{ n ≥ 2 : Δ̃_a > 2 β^n},
where β^n is the expression of β_i,j(t) when T_i(t)=T_j(t) = n. A careful upper-bounding of the RHS of (<ref>) completes the proof of <ref>.
§ EXPERIMENTS
We evaluate the performance of Adaptive Pareto Exploration on a real-world scenario and on synthetic random Bernoulli instances.
For a fair comparison, Algorithm 1 of <cit.> and APE are both run with our confidence bonuses β_i,j(t) on pairs of arms, which considerably improve upon single-arm confidence bonuses[In their experiments, <cit.> already proposed the heuristic use of confidence bonuses of this form]. As anytime confidence bounds are known to be conservative, we use K_1=1 in (<ref>) instead of its theoretical value coming from a union bound. Still, in all our experiments, the empirical error probability was (significantly) smaller than the target δ=0.1.
Real-world dataset COV-BOOST <cit.> is a phase 2 trial that was conducted on 2883 participants to measure the immunogenicity of different Covid-19 vaccines as a third dose (booster) in various combinations of initially received vaccines (first two doses). This resulted in a total of 20 vaccination strategies being assessed, each of them defined by the vaccines received as first, second and third dose. The authors have reported the average responses induced by each candidate strategy on cohorts of participants, measuring several immunogenicity markers.
From this study, we extract and process the average response of each strategy to 3 specific immunogenicity indicators: two markers of antibody response and one of the cellular response. The outcomes are assumed to have a log-normal distribution <cit.>. We use the average (log) outcomes and their variances to simulate a multivariate Gaussian bandit with K=20, D=3.
We give in Appendix <ref> some additional details about the processing of the data, and report the means and variances of each arm. In Appendix <ref> we further explain how APE can be simply adapted when the marginal distributions of the arms have different variances.
In this experiment, we set ε_1=0, δ=0.1 and compare Algorithm 1 of <cit.> to 0-APE-k (called APE-k in the sequel) for different values of k. The empirical distributions of the sample complexity of the algorithms, computed over 2000 independent runs, are reported in <ref>. The results are shown in log-scale (the axis shows the log of the sample complexity) to fit in the same figure. As |^⋆|=2, we first observe that, without the relaxation (i.e. for k>3), APE outperforms its state-of-the-art competitor. Moreover, for k=1 or k=2, the sample complexity of APE-k is significantly reduced. For k=2, when the stopping time τ^k is reached, some sub-optimal arms have possibly not yet been identified as such, while for k=3, even if the optimal arms have been identified, the remaining arms have to be sampled enough to ensure that they are sub-optimal before stopping. This explains the gap in sample complexity between k=2 and k=3.
In Appendix <ref>, we compare APE-k to an adaptation of the algorithm of Auer et al. for the k-relaxation, showing that APE is always preferable.
Experiments on random instances To further illustrate the impact of the k-relaxation and to strengthen the comparison with the algorithm of Auer et al., we ran the previous algorithms on 2000 randomly generated multi-variate Bernoulli instances, with K=5 arms and different values of the dimension D. We set δ=0.1 and ε_1=0.005 (to have reasonable running times). The averaged sample complexities are reported in <ref>.
We observe that APE (with k=K) uses 20 to 25% fewer samples than this baseline and tends to be more efficient as the dimension increases (and likely the size of the Pareto set, since the instances are randomly generated). We also note that identifying a k-sized subset of the Pareto set requires considerably fewer samples than exact PSI. In Appendix <ref> we also provide examples of instances for which APE takes up to 3 times fewer samples than the baseline.
To illustrate the impact of the ε_2 relaxation, setting ε_1=0 we report the sample complexity of APE associated with the stopping time τ_0, ε_2 for 20 equally spaced values of ε_2 ∈ [0.01,0.05], averaged over 2000 random Bernoulli instances. <ref> shows the decrease of the average sample complexity as ε_2 increases (left) and the average ratio of the size of the returned set to the size of the Pareto set (right). Note that for ε_1=0, we have OPT^0(τ_0, ε_2) ⊂ ^⋆. The reported average sample complexity decreases by up to 86% for the instance with K=5, D=2, while the returned set contains more than 90% of the Pareto optimal arms. In Appendix <ref>, we further illustrate the behavior of APE with the ε_2 relaxation on a fixed instance in dimension 2.
§ CONCLUSION AND PERSPECTIVE
We proposed and analyzed APE, an adaptive sampling rule for multi-variate bandit models that, when coupled with different stopping rules, can
tackle different relaxations of the fixed-confidence Pareto Set Identification problem. Our experiments revealed the good performance of the resulting algorithms compared to the state-of-the-art PSI algorithm as well as the great reductions in sample complexity brought by the relaxations.
In future work, we intend to make our algorithms more practical for possible applications to clinical trials. For this purpose, as measuring efficacy takes time, we will investigate its adaptation to a batch setting, following, e.g. the work of <cit.> for BAI. We will also investigate the use of APE beyond the fixed-confidence setting, to the possibly more realistic fixed-budget <cit.> or anytime exploration <cit.> settings. To the best of our knowledge, no algorithms exists in the literature for PSI in such settings. Finally, following the works of <cit.>, we defined the ε_1,ε_2 relaxations with scalar values, so that the same slack applies to all components.
Although we could easily modify our algorithms to tackle vectorial values ε_1, ε_2, so far we could only prove a dependence on min_d ε_1^d in the sample complexity. We intend to study the right quantification of the sample complexity when ε_1 and ε_2 are vectorial.
Cyrille Kone is funded by an Inria/Inserm PhD grant. Emilie Kaufmann acknowledges the support of the French National Research Agency under the BOLD project (ANR-19-CE23-0026-04).
plain
§ OUTLINE AND NOTATION
In this section, we provide an outline of the supplemental material and define some additional notation. <ref> proves the correctness of ours algorithms and some concentration lemmas. In <ref>, we prove <ref> and the lemmas used in its proof.
In <ref> we analyze the correctness and sample complexity of associated to the stopping time τ__1, _2. <ref> describes our worst-case lower bound and in <ref> we relate our algorithm to other algorithms for BAI. In <ref> we derive an upper-bound on the expectation of the sample complexity of _1-APE-k with a LUCB1-like instantiation and
in <ref> we recall or prove some technical lemmas that are used in the main proofs. Finally,
in <ref> we give further details about the experiments together with additional experimental results.
§ CORRECTNESS FOR DIFFERENT STOPPING RULES
In this section, we gather and prove results that are related to the correctness of our algorithms, either for their generic form (<ref> and <ref>) or some specific calibration. We recall the definition of the events
ℰ_t = ⋂_i=1^K ⋂_j≠ i⋂_d=1^D {L^d_i,j(t) ≤ μ_i^d - μ_j^d ≤ U^d_i,j(t)} and ℰ = ⋂_t =1^∞ ℰ_t .
§.§ Proof of <ref>
*
This result simply follows from the definition of _t. Since
_t := ⋂_i=1^K ⋂_j≠ i⋂_d=1^D {L^d_i,j(t) ≤μ_i^d - μ_j^d ≤ U^d_i,j(t)},
if _t holds, then for any i,j
^-(i,j,t):=max_d L^d_i,j(t)≤(i,j):= max_d (μ_i^d - μ_j^d) ≤max_d U^d_i,j(t):= ^+(i,j,t),
and the second point follows by noting that (i,j) =-(i,j) and ^+(i,j,t) :=-^-(i,j,t); ^-(i,j,t) :=-^+(i,j,t) for any pair of arms.
We remark that when the algorithm uses confidence bonus of form (^d_i(t) - ^d_j(t)) ±β_i,j(t),
^+(i,j,t) := max_d U^d_i,j(t) = max_d (^d_i(t) - ^d_j(t)) + β_i,j(t) = (i,j,t) + β_i,j(t),
^-(i,j,t) := max_d L^d_i,j(t) = max_d (^d_i(t) - ^d_j(t)) - β_i,j(t) = (i,j,t) - β_i,j(t),
and the previous lemma implies that on _t,
|(i,j) - (i,j,t)|≤β_i,j(t) and |(i,j) - (i,j,t)|≤β_i,j(t),
which is extensively used in our sample complexity analyses.
§.§ Proof of <ref>
*
We show the correctness of _1-PSI-k (for any k) and we derive the correctness for _1-PSI which is equivalent to _1-PSI-K. The correctness of (_1, _2)-PSI is shown separately in <ref> (see <ref>).
Assume holds. Let t=τ__1^k and i∈^_1(t). Since i ∈^_1(t), for any j≠ i,
(i,j) + _1 ≥^-(i,j,t) + _1 > 0,
that is i ∈^⋆__1. Therefore, on the event , ^_1(t)⊂^⋆__1.
Thus, if the stopping has occurred because |^_1(t) |≥ k, since in this case (t) ⊂^_1(t) ⊂^⋆__1, all the recommended arms will be (_1)-Pareto optimal. On the contrary, if |^_1(t)| < k, then from the definition of τ__1^k it holds that
Z_1^_1(t) > 0 and Z_2^_1(t) >0,
and the recommended set is then
(t) :=
S(t) ∪{ i ∈ S(t)^∁: ∄ j≠ i: ^-(i,j,t)> 0}.
For any i∈(t)^∁, by the definition of the recommended set and since Z_2(t)>0,
∃ j ∈ such that (i,j) ≥^-(i,j,t)>0,
so i is a sub-optimal arm. Therefore,
^⋆⊂(t).
Moreover, for any i ∈(t) ∩ S(t), since Z_1^_1(t)>0 we have h_i^_1(t) >0, that is
min_j ∈∖{ i}(i,j) + _1 ≥min_j ∈∖{ i}^-(i,j,t) + _1 > 0.
If i ∈(t) ∩ S(t)^∁, by definition of (t), we have g_i(t)<0. However, since Z_2^_1(t)>0, max(g_i(t), h_i^_1(t))>0 so we also have h_i^_1(t)>0 and (<ref>) applies. Thus, for any i∈(t),
min_j ∈∖{ i}(i,j) + _1 > 0,
that is i∈^⋆__1, so ^⋆⊂(t) ⊂^⋆__1. Finally we can conclude that _1-APE-k and _1-APE output a correct subset on .
§.§ Calibration of the confidence intervals
In Section <ref> we proposed to instantiate the algorithms with confidence intervals on the differences between pairs of arms.
U^d_i,j(t) = _i^d(t) - ^d_j(t) + β_i,j(t) and L^d_i,j(t) = _i^d(t) - ^d_j(t) - β_i,j(t) .
We prove below that is indeed a high-probability event for a suitable choice of β_i,j(t).
Let ν be a bandit with 1-subgaussian marginals. For the confidence intervals defined in (<ref>), with
β_i,j(t)= 2√((C^g(log(K_1/δ)/2) + ∑_a∈{ i ,j}log(4 + log(T_a(t)))) (∑_a∈{ i ,j}1/T_a(t))).
the event
ℰ = ⋂_t =1^∞ ℰ_t with ℰ_t = ⋂_i=1^K ⋂_j≠ i⋂_d=1^D {L^d_i,j(t, δ) ≤ μ_i^d - μ_j^d ≤ U^d_i,j(t, δ)}
is such that ℙ(ℰ) ≥ 1-δ.
By observing that for any pair of arm β_i,j = β_j,i, _t can be rewritten as
_t = ⋂_{i,j}∈Γ⋂_d=1^D {L^d_i,j(t, δ) ≤μ_i^d - μ_j^d ≤ U^d_i,j(t, δ)},
= ⋂_{i,j}∈Γ⋂_d=1^D {| (^d_i(t) - _j^d(t)) -(μ_i^d - μ_j^d) |≤β_i,j(t)},
where Γ:= 2[K] is the set of pair of 2 elements of [K], which satisfies |Γ| = K(K-1)/2. Therefore, using a union bound,
(^∁) = (∃ t≥ 1: _t^∁ holds ),
= (∃ t≥ 1, d∈ [D], {i,j}∈Γ: | (^d_i(t) - _j^d(t)) - (μ_i^d - μ_j^d) | > β_i,j(t)),
≤ ∑_{i,j}∈Γ∑_d=1^D (∃ t≥ 1: | (^d_i(t) - _j^d(t)) - (μ_i^d - μ_j^d) | > β_i,j(t)),
≤ ∑_{i,j}∈Γ∑_d=1^D δ/K_1 (by Proposition 24 of <cit.> which we recall below in <ref>),
= δ,
since K_1:= K(K-1)D/2 and |Γ| = K(K-1)/2.
Let δ ∈ (0,1). Let X_1, X_2, … be i.i.d. centered 1-subgaussian random variables and Y_1, Y_2, … be i.i.d. centered 1-subgaussian random variables.
With probability at least 1-δ, for all p,q ≥ 1,
|1/p∑_s=1^p X_s - 1/q∑_s=1^q Y_s |≤ 2√((C^g(log(1/δ)/2) + loglog(e^4 p) + loglog(e^4q)) (1/p + 1/q))
where C^g(x) ≈ x + log(x).
§ SAMPLE COMPLEXITY ANALYSIS
In this section we prove <ref> which is restated below.
*
The correctness follows from <ref> and the fact that ()≥ 1- δ (<ref>).
The upper-bound on the sample complexity is a direct consequence of <ref>, <ref>, <ref> which are proved later in this section.
Indeed, using these lemmas we have that, if _1--k has not stopped during round t i.e t<τ__1^k and the event _t holds, then
* ω^k≤ 2β_a_t, a_t(t) (<ref>),
* Δ_a_t≤ 2β_a_t, a_t(t)(<ref>),
* _1 ≤ 2β_a_t, a_t(t) (<ref>)
hold simultaneously. Then, if we do not count the first K rounds due to initialization, and letting Δ̃_a:=max(ω^k, _1, Δ_a),
τ_ε_1^k 𝟙{ℰ} - 1 ≤ ∑_t=1^∞ 𝟙{ℰ_t} 𝟙{τ_ε_1^k>t},
≤ ∑_t=1^∞ 𝟙{max(ω^k, ε_1, Δ_a_t) ≤ 2 β_a_t, a_t(t)}
= ∑_t=1^∞ 𝟙{Δ̃_a_t ≤ 2 β_a_t, a_t(t)}
= ∑_t=1^∞∑_a=1^K 𝟙{a_t = a} 𝟙{Δ̃_a ≤ 2 β_a, a(t)}
= ∑_a=1^K ∑_t=1^∞ 𝟙{a_t = a} 𝟙{Δ̃_a ≤ 2 β_a, a(t)}
≤ ∑_a=1^K inf{n ≥ 2: Δ̃_a > 2 β^n},
where β^n is the expression of β_i,j(t) when T_i(t)=T_j(t) = n, that is
β^n= 2√((C^g(log(K_1/δ)/2) +2 log(4 + log(n)))2/n) .
Then, an inversion result given in <ref> yields
inf{s≥ 2 : 2β^s < Δ̃_a }≤88/Δ̃_a^2log(2K(K-1)D/δlog(12e/Δ̃_a)).
Therefore,
τ__1^k {}≤∑_a∈88/Δ̃_a^2log(2K(K-1)D/δlog(12e/Δ̃_a)).
We will now prove the lemmas involved in the proof of the main theorem. Two of them (<ref>, <ref>) rely on the following result, which is an important consequence of the definition of the APE sampling rule.
If t<min(τ__1, τ__1^k, τ__1, _2) then for any j∈, (b_t,j,t)≤β_b_t, j(t).
The proof is split into two steps.
*
Step 1 If t<min(τ__1^k, τ__1) then for any j∈, (b_t,j,t)≤β_b_t, j(t).
First, note that t<min(τ__1^k, τ__1) implies that Z_1^_1(t) ≤ 0 or Z_2^_1(t) ≤ 0.
By definition of b_t and noting that (i,j) = -(i,j), we have
b_t ∈_i∈^_1(t)^∁max_j≠ i (i,j,t) - β_i,j(t).
so that if there exists j such that (b_t,j,t)>β_b_t, j(t), then
max_j≠ b_t (b_t, j,t) - β_b_t, j(t) >0,
therefore,
∀ i ∈^_1(t)^∁ , max_j≠ i(i,j,t) - β_i,j(t) >0 i.e g_i(t) >0.
Furthermore, for any i ∈^_1(t), h^_1_i(t) >0. Putting things together, if there exists j such that (b_t, j,t)>β_b_t, j(t) then, Z^_1_1(t) > 0 and Z^_1_2(t)> 0.
*
Step 2 If t<τ__1, _2 then for any j∈, (b_t,j,t)≤β_b_t, j(t).
Recall that by definition t<τ__1, _2 implies that
Z_1^_1, _2(t) ≤ 0 or Z_2^_1, _2(t) ≤ 0. Using (<ref>), if there exists j such that (b_t,j,t)>β_b_t, j(t), then
max_j≠ b_t (b_t, j,t) - β_b_t, j(t) >0.
Combining this with
g_i^_2(t) := max_j ∈∖{i}(i,j,t) -β_i,j(t) + _2 {j ∈^_1(t)},
yields
∀ i ∈^_1(t)^∁ , 0<max_j≠ i(i,j,t) - β_i,j(t)≤ g^_2_i(t).
Furthermore, since we have
∀ i ∈^_1(t) , h^_1_i(t) >0,
the initial assumption would yield that for any arm i, max(h^_1_i(t), g^_2_i(t))>0, so Z_1^_1, _2(t)>0 and Z_2^_1, _2(t)>0. Therefore, if t<min(τ__1, τ__1^k, τ__1, _2) then
for any i∈, (b_t,j,t)≤β_b_t, j(t).
§.§ Proof of Lemma <ref>
*
First, note that if k>|^⋆|, then the lemma holds trivially since ω_k <0. In the sequel, we assume _t holds and k≤|^⋆|. If t < τ__1^k then it holds that |^_1(t)| < k. So ^⋆, k∩^_1(t)^∁≠∅. Let i∈^⋆, k∩^_1(t)^∁, we have
ω^k ≤ ω_i = min_j ∈∖{i}(i,j),
(a)≤ min_j ∈∖{i}(i,j, t) + β_i,j(t),
(b)≤ min_j ∈∖{b_t}(b_t,j, t) + β_b_t, j(t),
≤ (b_t,c_t, t) + β_b_t, c_t(t) ,
(c)≤ 2β_b_t, c_t(t),
≤ 2β_a_t, a_t(t),
where (a) uses that ℰ_t holds and <ref>, (b) uses the definition of b_t and (c) follows from the definition of c_t and the fact that b_t ∉ OPT^ε_1(t), which yields M(b_t, c_t, t) ≤ β_b_t, c_t(t). The last inequality follows since a_t is the least sampled among b_t, c_t and β is decreasing.
§.§ Proof of Lemma <ref>
*
Before proving the <ref>, we state the following lemma which is used to derive an upper bound on the gap of an optimal arm. Its proof is postponed to the end of the section.
For any Pareto optimal arm i, Δ_i ≤min_j≠ i(i,j).
Assume that _t holds. We consider four different cases depending on whether b_t and c_t are optimal or sub-optimal.
*
Case 1.1b_t is a Pareto optimal arm.
From the definition of the gap of an optimal arm and using <ref> it follows
Δ_b_t≤(b_t, c_t) which on _t and using <ref> yields
Δ_b_t + _1 ≤(b_t,c_t, t) + β_b_t, c_t(t) + _1
then, noting that there exists j ∈∖{ b_t} such that (b_t, j,t) + _1 ≤β_b_t, j(t), by definition of c_t, we have
(b_t, c_t, t)+ _1 ≤β_b_t, c_t(t),
therefore,
Δ_b_t + _1 ≤ 2β_b_t, c_t(t).
*
Case 1.2b_t is a sub-optimal arm. By definition of c_t and using =-, we have
c_t ∈_j∈∖{ b_t}(b_t, j, t) + β_b_t, j(t),
then, from the definition of the gap of a sub-optimal arm and since _t holds, we know that there exists an arm b_t^⋆ such that
Δ_b_t = (b_t, b_t^⋆) ≤ (b_t, b_t^⋆, t) + β_b_t, b_t^⋆(t),
(a)≤ (b_t, c_t, t) + β_b_t, c_t(t),
(b)≤ 2β_b_t, c_t(t).
where (a) uses the definition of c_t and (b) uses Lemma <ref>.
*
Case 2.1 c_t is a Pareto optimal arm. If b_t is also an optimal arm, it follows that Δ_c_t≤(b_t, c_t) which on _t yields Δ_c_t≤(b_t, c_t, t) + β_b_t, c_t(t), then, similarly to case 1.1, we have (b_t, j,t) + _1 ≤β_b_t, j(t) so
Δ_c_t + _1 ≤ 2 β_b_t, c_t(t).
Now, assume b_t is a sub-optimal arm. Then, by definition, Δ_c_t≤(b_t, c_t)^+ + Δ_b_t. Using a similar reasoning to case 1.2, it holds that Δ_b_t≤(b_t, c_t, t) + β_b_t, c_t(t), so
Δ_c_t ≤ (b_t, c_t)^+ + Δ_b_t,
≤ ((b_t, c_t, t) + β_b_t, c_t(t))^+ + (b_t, c_t,t)+ β_b_t, c_t(t),
= (-(b_t, c_t, t) + β_b_t, c_t(t) )^+ + (b_t, c_t,t)+ β_b_t, c_t(t),
(a)≤ max(2β_b_t, c_t(t), (b_t, c_t,t)+ β_b_t, c_t(t))
(b)≤ 2β_b_t, c_t(t).
where (a) follows from (x-y)^+ + (x + y) ≤max(x+y, 2x) and (b) follows from (b_t, c_t, t)≤β_b_t, c_t(t) (<ref>).
*
Case 2.2 c_t is a sub-optimal arm. We know that there exists an arm c_t^⋆ such that Δ_c_t = (c_t, c_t^⋆). If c_t^⋆ = b_t then, since (j,i) ≤(i,j) (follows from the definition), we have
Δ_c_t =(c_t, c_t^⋆) = (c_t, b_t),
≤ (b_t, c_t),
(a)≤ (b_t, c_t, t) + β_b_t, c_t(t),
(b)≤ 2β_b_t, c_t(t),
where (a) follows from _t and (b) has been already justified in the case 1.1. If b_t≠ c_t^⋆, then by definition of c_t, we have
(b_t, c_t, t) + β_b_t, c_t(t) ≥(b_t, c_t^⋆, t) + β_b_t, c_t^⋆(t),
which implies that there exists d∈ [D] such that
_c_t^d(t) - _b_t^d(t) + β_b_t, c_t(t) ≥_c_t^⋆^d(t) - _b_t^d(t) + β_b_t, c_t^⋆(t) _t≥μ_c_t^⋆^d - μ_b_t^d,
then recalling that β_i,j = β_j,i,
μ_c_t^d - μ_b_t^d + 2β_b_t, c_t(t) _t≥ (_c_t^d(t) - _b_t^d(t) - β_b_t, c_t(t)) + 2β_b_t, c_t(t) ≥μ_c_t^⋆^d - μ_b_t^d.
Put together, there exists d∈ [D] such that
μ_c_t^⋆^d - μ_c_t^d ≤ 2β_b_t, c_t(t),
so
Δ_c_t = min_d (μ_c_t^⋆^d - μ_c_t^d) ≤ 2β_b_t, c_t(t),
Putting the four case together, we have proved that if t<min(τ__1^k, τ__1, _2) then both
Δ_b_t≤ 2 β_b_t, c_t(t) and Δ_c_t≤ 2β_b_t, c_t(t)
holds. Further noting that a_t is the least sampled among among b_t, c_t and β is non-increasing, β_b_t, c_t(t) ≤β_a_t, a_t(t), (<ref>) yields
Δ_a_t≤ 2β_a_t, a_t(t),
which achieves the proof.
The following lemma holds for each of the stopping times τ__1, τ__1, _2 and τ__1^k.
§.§ Proof of <ref>
*
By <ref>, we have (b_t, c_t,t) ≤β_b_t, c_t(t) or equivalently
(b_t,c_t,t) ≥ - β_b_t, c_t(t) .
Then, knowing that b_t ∉^_1(t), there exists an arm j such that _1 + (b_t,j,t) ≤β_b_t, j(t). Using further the definition of c_t, it follows that _1 + M(b_t,c_t,t) ≤β_b_t, c_t(t). Combining this with inequality (<ref>) and noting that a_t is the least sampled among b_t, c_t yields
β_a_t, a_t(t) ≥β_b_t, c_t(t) ≥_1/2.
§.§ Auxiliary results
We state the following lemma which is used to prove <ref>.
For any sub-optimal arm a, there exists a Pareto optimal arm a^⋆ such that _a ≺_a^⋆ and Δ_a = (a, a^⋆)>0. Moreover, For any i∈∖^⋆, j ∈^⋆,
* max_j∈^⋆(i,j) = max_j∈(i,j),
* If i ∈_a∈∖{j}(j,a) then j is the unique arm such that _i ≺_j
Suppose that there are p<n dominated vectors. Without loss of generality, we may assume they are _1, …, _p. Let i_1≤ p. Suppose that no Pareto-optimal arm dominates _i_1. Since _i_1 is not optimal, by the latter assumption, there exists i_2 ≤ p such that _i_1≺_i_2. If _i_2 is dominated by a Pareto optimal arm, this arm also dominates _i_1 (strict dominance is transitive) which contradicts the initial assumption. If not, there exits i_3 ≤ p such that _i_1≺_i_2≺_i_3. Again we can use the same reasoning as before for i_3. In any case we should stop in at most p steps, otherwise we would have _i_1≺_i_2≺…≺_i_p and _i_p should be dominated by a Pareto-optimal arm, otherwise it would be itself Pareto-optimal, which is not the case. Therefore, for any a∈∖^⋆, there exists a^⋆∈^⋆ such that a^⋆≺ a and Δ_a = (a, a^⋆)>0.
Letting i be a sub-optimal arm, since for any a∈∖^⋆, there exists a^⋆∈^⋆ such that a≺ a^⋆, it follows that
∀ d ∈ [D], μ_a^d - μ_i^d < μ_a^⋆^d - μ_i^d,
which leads to (i, a) ≤(i, a^⋆), so
max_j∈(i,j) = max_j∈^⋆(i,j) > 0,
which achieves the proof of the first point i). For the second point, let q ∈∖^⋆ and q'≠ q such that q≺ q' and
q ∈_a∈∖{j}(j,a).
By direct algebra, since q≺ q', we have
(j, q') < (j, q),
which is impossible if q'≠ j (because q belongs to the argmin). Therefore, if
q ∈_a∈∖{j}(j,a)
is a sub-optimal arm, then j is the only arm such that q≺ j (i.e _q ≺_j).
We now prove <ref> which follows from the previous lemma.
If _j≠ i(i,j) ⊂^⋆, then the lemma follows from the definition of the gap of an optimal arm recalled in Section 4. If min_j≠ i(i,j) = (i, a), a∉^⋆, then,
from <ref>, i is the unique arm which dominates a so Δ_a = (a, i) and using the definition of the gap of an optimal arm,
Δ_i ≤ (a, i)^+ + Δ_a,
= 0 + (a, i) ≤(i,a),
where we have used the fact that (p,q)≤(q, p) for any pair of arms p,q (which follows from the definition). Therefore, for an optimal arm i, we always have
Δ_i ≤min_j≠ i(i,j).
§ ALGORITHM FOR FINDING AN (_1,_2)-COVER
In this section, we analyse the sample complexity of when it is associated to the stopping time τ__1, _2 for identifying an (_1, _2)-cover of the Pareto set. The sampling rule remains unchanged and we prove that the algorithm does not require more samples to find an (_1, _2)-cover than to solve the _1-PSI problem.
We recall the stopping time τ__1, _2.
Stopping rule Let _1, _2≥ 0 and 0<δ<1. Then, by ignoring the first K rounds of initialization,
τ__1, _2:= inf{ t∈^⋆: Z_1^_1, _2(t) > 0 Z_2^_1, _2(t) > 0},
where,
Z_1^_1, _2(t) := min_i ∈ S(t)max(g_i^_2(t), h_i^_1(t))
Z_2^_1, _2(t) := min_i ∈ S(t)^∁max(g_i^_2(t), h_i^_1(t)),
and
g_i^_2(t) := max_j ∈∖{i}^-(i,j,t) + _2 {j ∈^_1(t)}
h_i^_1(t) := min_j∈∖{i}^-(i,j,t)+_1
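For concreteness, the sketch below shows how these stopping statistics could be evaluated in NumPy. It is only an illustration: the callable lcb(i, j) stands for the lower-confidence quantity written with a superscript minus in the displays above (its exact definition in terms of empirical gaps and bonuses is not restated here), opt_eps1 stands for ^_1(t), and all function and variable names are ours.

import numpy as np

def stopping_statistics(lcb, K, opt_eps1, eps1, eps2):
    # lcb(i, j): the lower-confidence quantity of the displays above (assumed given)
    g = np.empty(K)
    h = np.empty(K)
    for i in range(K):
        others = [j for j in range(K) if j != i]
        g[i] = max(lcb(i, j) + eps2 * (j in opt_eps1) for j in others)
        h[i] = min(lcb(i, j) for j in others) + eps1
    inside = [i for i in range(K) if i in opt_eps1]
    outside = [i for i in range(K) if i not in opt_eps1]
    # the minimum over an empty set is taken as +infinity by convention
    z1 = min((max(g[i], h[i]) for i in inside), default=np.inf)
    z2 = min((max(g[i], h[i]) for i in outside), default=np.inf)
    return z1, z2  # the rule stops as soon as both are positive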
Recommendation rule When it is associated to the stopping time τ__1, _2, recommends
(τ__1, _2) := ^_1(τ__1, _2),
which can be understood as follows. When τ__1, _2 is reached, the arms that are not yet identified as (nearly) optimal are either _2-dominated by an arm in ^_1(τ__1, _2) or sub-optimal, which is proven formally in <ref>.
Fix δ∈ (0, 1), _1,_2 ≥0 then (_1,_2)-recommends an (_1, _2)-cover of the Pareto set on the event .
Assume holds. Let t=τ__1, _2 and i∈^_1(t). Since i∈^_1(t), for any j≠ i, (i,j) + _1≥^-(i,j,t) + _1> 0 that is i ∈^⋆__1. Therefore, on the event , ^_1(t)⊂^⋆__1. When the stopping time τ_δ^_1, _2 is reached, Z^_1, _2_1(t) > 0 and Z^_1, _2_2(t) >0.
Under this condition,
^_1(t)≠∅.
Indeed, since Z^_1, _2_1(t)>0 and Z^_1, _2_2(t)>0, if ^_1(t) = ∅ then, by the stopping rule, for any arm i, we would have h_i^_1(t)<0 and g_i^_2(t)>0. That is, for any arm i ∈,
∃ j≠ i such that (i,j)>^-(i,j,t)>0,
so every arm would be strictly dominated, which is impossible since the Pareto set cannot be empty. Then, ^_1(t)≠∅ and for any i∈(t)^∁ = ^_1(t)^∁, by the stopping rule it holds that max(g_i^_2(t), h_i^_1(t)) > 0. Further noting that for such an arm i∈^_1(t)^∁, h_i^_1(t)<0, we thus have g_i^_2(t)>0, that is
^-(i,j,t) + _2 {j ∈(t)} >0,
which on the event yields
(i,j) + _2 {j ∈(t)} >0.
Therefore, for such arm i, either
* ∃ j ∈ such that (i,j)>0 that is _i ≺_j or
* ∃ j ∈(t)
such that (i,j) + _2 > 0 that is _i ≺_j + _2 with _2:= (_2, …, _2).
Put together, (t) ⊂^⋆__1 and for any i∉(t), either i ∉^⋆ (i is a sub-optimal arm) or there exists j∈(t) such that _i ≺_j + _2. Thus (t) is an (_1, _2)-cover of the Pareto set and (_1, _2)-is correct for (_1, _2)-cover identification.
Put together, the two lemmas restated below are used to prove, identically to <ref>, the main theorem of this section.
*
The following lemma holds for each of the stopping times τ__1, τ__1, _2 and τ__1^k.
*
Fix δ∈ (0, 1), _1, _2 ≥ 0. Then (_1, _2)-outputs an (_1, _2)-cover of the Pareto set with probability at least 1-δ using at most
∑_a ∈88/(Δ_a^)^2log(2K(K-1)D/δlog(12e/Δ^_a))
samples, where for all a∈, Δ_a^ := max(Δ_a, _1).
This is the first problem-dependent sample complexity upper-bound for the (_1, _2)-cover of the Pareto set. In particular, this result holds for the -accurate Pareto set identification <cit.> which corresponds to the particular case _1 = _2 = of the Pareto set cover. Therefore, (, )-could be compared to -PAL for -accurate Pareto set, which however relies on a different Gaussian process modelling assumption.
While this sample complexity upper bound does not clearly show the dependence on _2, we note that for some problems, we have a nearly matching lower bound that does not depend on _2. In particular, consider the case D=1, _1 = 0, _2 >0 and assume there is a unique best arm (classical assumption) a_⋆. For this setting, an algorithm for (_1, _2)-cover identification is required to output a set Ŝ such that Ŝ⊂^⋆ = { a_⋆} and for any i≠ a_⋆ either μ_i < μ_a_⋆ or μ_i ≤μ_a_⋆ + _2, which trivially holds as long as Ŝ⊂^⋆. Therefore, this problem is equivalent to (exact) Best Arm Identification.
Almost matching lower bounds for BAI are known and do not depend on _2 (<cit.>).
This observation can be generalized to any configuration where there is a unique (Pareto) optimal arm. Letting D≥ 1, _1=0, _2>0 and ν a bandit with one Pareto optimal arm a_⋆, any algorithm for (_1, _2)-covering is required to output a set Ŝ⊂^⋆ = { a_⋆}, and for any i≠ a_⋆ either _i ≺_a_⋆ or _i ≺_a_⋆ + _2, which trivially holds as long as Ŝ⊂^⋆ = { a_⋆}. So, on these instances, (0, _2)-covering is equivalent to 0-PSI and the nearly matching lower bound of <cit.> for 0-PSI does not depend on _2 (Theorem 17 therein).
In our experiments (see <ref>), we will see that in configurations with multiple Pareto optimal arms, the parameter ε_2 can still help to empirically reduce the sample complexity. Quantifying its precise impact on the sample complexity is left as future work.
§ LOWER BOUND
In this section, we give a gap-dependent lower-bound for the k-relaxation in some configurations. We use the change of distribution lemma of <cit.> (lemma 1).
There exists a bandit instance ν with |^⋆| = p≥ 3 such that for k ∈{ p, p-1, p-2} any δ-correct algorithm for 0-PSI-k verifies
_ν(τ_δ) ≥1/Dlog(1/δ) ∑_a=1^K1/(Δ_a^k)^2,
where Δ_a^k := Δ_a + ω^k and τ_δ is the stopping time of the algorithm.
Let p = K = |^⋆|.
w.l.o.g assume ^⋆ = {1, …, p} and ^⋆,k = {1, …, k}.
Let _0 ∈^D and for (p-2)≤ i≤ p, define
μ_i^d := -2^p-iω if d = 1
-2^p-iω else if d = 2
-μ_0^d else. ,
for 1≤ i≤ p-3,
μ_i^d := -(4 +2i) ω if d = 1
- (4 +2i)ω else if d = 2
-μ_0^d else.
Let ν be a bandit where each arm i is a multivariate Gaussian
with mean _i and covariance matrix I_D i.e ν_i ∼(_i, I_D) (with I_D the identity matrix in dimension D). By direct calculation, for 1≤ i,j≤ p -3,
(i,j) = (j,i) = 2ω| i - j| ,
and for p-2≤ i, j≤ p,
(i, j) = (j,i) = 2^pω| 2^-i - 2^-j|,
for i≤ p-3 and (p-2)≤ j≤ p,
(i, j) = (j, i) = (4 + 2i - 2^p-j)ω≥ 2ω.
Therefore, computing ω_i and δ_i^+ for any i∈ [p] yields
δ_i^+ := min_j∈ [p]\{i}min((i,j), (j,i)),
= ω if i=p,
2^p-i-1ω if i ∈{p-2, p-1}
2ω else,
additionally, for any i≤ p,
ω_i := min_j≠ i(i,j),
= ω if i=p,
2^p-i-1ω if i ∈{p-2, p-1}
2ω else.
Thus,
ω^(p) =ω^(p-1) = ω and ω^(p-2) = 2ω.
Let γ>0. For any optimal arm i, since (i, i+1) = (i+1, i) = δ_i^+, the vector
_i + δ_i^+ + γ
Pareto dominates _i+1 or _i-1 and _i - δ_i^+ - γ≺_i+1 or _i-1. Moreover, it is easy to observe that for k ∈{p-2, p-1, p} and any i ∈ [p],
_i + δ_i^+ + ω^(k) + γ
Pareto dominates 1 (if k∈{p,p-1}) or 2 (if k=p-2) other optimal arms. Letting k∈{p-2, p-1, p}, for any i∈ [p], we define the alternative bandit ν^(i) which is also Gaussian with the same covariance matrix I_D and means given by
^(i)_j = _j if j≠ i
_j - δ_i^+ - ω^(k) - γ if j = i and _ν( j ∈Ŝ) ≥1/2
_j + δ_i^+ +ω^(k) + γ if j = i and _ν( j ∈Ŝ) <1/2.
Therefore, since is δ-correct, and by what precedes,
* if _ν(i ∈Ŝ) ≥1/2 then _ν^(i)(i ∈Ŝ) ≤δ and
* if _ν(i ∈Ŝ) <1/2 then _ν^(i)(i∈Ŝ) ≥ 1-δ.
The first point follows simply from the definition of δ_i^+ and the fact that by design (i,j) = (i, j) for i,j∈ [p]. For the second point, if k∈{ p, p -1}, then in the bandit ν^(i) at least one arm of ^⋆(ν) is no longer optimal, so |^⋆(ν^(i)) |≤ p-1 ≤ k and _ν^(i)(i∈Ŝ)≥ 1-δ. If k=p-2, since two arms of ^⋆(ν) are now dominated, we have |^⋆(ν^(i))|≤ p-2 = k, hence _ν^(i)(i∈Ŝ)≥ 1-δ.
Letting denote the KL divergence and using lemma 1 of <cit.>, on _τ-measurable event
E_i = { i ∈Ŝ} if _ν(i∈Ŝ)≥1/2,
{i ∉Ŝ} if _ν(i∈Ŝ)<1/2,
for which _ν(E_i) ≥1/2 and _ν^(i)(E_i)≤δ, it comes that
∑_a∈_ν(T_a(τ_δ))(ν_a, ν_a^(i)) ≥ d(_ν(E_i), _ν^(i)(E_i)),
hence
_ν(T_i(τ_δ))(ν_i, ν_i^(i)) ≥ d(_ν(E_i), _ν^(i)(E_i)),
where d(x,y) = xlog(x/y) + (1-x)log((1-x)/(1-y)) is the binary relative entropy. Since _ν(E_i)≥1/2 and _ν^(i)(E_i) ≤δ,
(<ref>) yields (see <cit.>),
_ν(T_i(τ_δ)) ≥ 1/(ν_i, ν_i^(i))1/2( log(1/2δ) + log(1/2(1-δ)))
= 1/2(ν_i, ν_i^(i))log( 1/δ(1-δ))
≥ 1/2(ν_i, ν_i^(i))log(1/δ).
By direct algebra, we compute (independent marginals since the covariance is diagonal I_D),
(ν_i, ν_i^(i)) = 1/2_i -δ_i^+ - ω^(k) - γ -_i_2^2 = D/2(δ_i^+ + ω^(k) + γ )^2.
Noting that on this instance all the arms are optimal, we have for any arm i, Δ_i = δ_i^+. Finally, letting γ⟶ 0 proves that for any arm i,
_ν(T_i(τ_δ)) ≥1/D(Δ_i^k)^2log(1/δ),
further noting that (τ_δ) = ∑_i=1^K (T_i(τ_δ))
achieves the proof. We have chosen a diagonal covariance matrix for simplicity; we believe that carefully choosing correlated objectives as in <cit.> could give a tighter lower bound, especially regarding the dependence on the dimension D.
§ BEST ARM IDENTIFICATION
In this section, we discuss the sample complexity and the performance of associated to the stopping rule τ_0^1 for BAI. Noting that when D=1, the Pareto set is just the argmax over the means, BAI and PSI are the same for uni-dimensional bandits. For this setting we should expect algorithms for PSI to be competitive with existing algorithms for BAI. We will show that it is actually the case for . Let D=1 and ν be a one-dimensional K-armed bandit. Letting a_⋆ denote the unique optimal arm of the bandit ν, i.e ^⋆ ={a_⋆}, one can easily check that the gaps defined for PSI matches the common notion of gaps for BAI. Indeed, for any a≠ a_⋆,
Δ_a := max_j ∈^⋆(a, j),
= (a, a_⋆)
= μ_a_⋆ - μ_a,
and
Δ_a_⋆ = min_j≠ a_⋆{(j,a^⋆)^+ + Δ_j } = min_j≠ a_⋆Δ_j,
which matches the definition of the gap in the one-dimensional bandit setting (<cit.>). Therefore, the sample complexity of for BAI can be deduced from <ref>.
Let δ∈ (0, 1), K≥ 2 and ν a K-armed bandit with a unique best arm a_⋆ and 1-subgaussian distributions. associated to the stopping time τ_0^1 identifies the best arm a_⋆ with probability at least 1-δ using at most the following number of samples
∑_a=1^K 88/Δ_a^2log(2K(K-1)/δlog(12e/Δ_a)).
In particular, the k-relaxation is not meaningful in this setting. Under the unique optimal arm assumption, the algorithm will always stop when the best arm has been identified. We also remark that, from the definition of the ω_i's,
ω_1 = min_j≠ a_⋆(a_⋆, j) = min_j≠ a_⋆Δ_j = Δ_a_⋆ and ∀ i≠ a_⋆, ω_i < 0,
so for any k≤ K, max(ω_k, Δ_a) = Δ_a.
<ref> could be slightly improved. On the event we consider that for any pair of arms the difference of their empirical means does not deviate too much from its actual value. For BAI, since we know that there is a unique optimal arm (enforced by assumption), it is sufficient to control the difference between the best arm and any other arm; therefore we could replace the K(K-1)/2 term due to the union bound in the confidence bonus by K-1, which would be reflected in the sample complexity by replacing K(K-1) with 2(K-1). However, this cannot be done in general for PSI since we do not know in advance the number of optimal arms.
When D=1, reduces to sampling, at each round t, the least sampled among
b_t := _i {min_j≠ iU_i,j(t)},
c_t := _j≠ b_t L_b_t, j(t),
where U_i,j(t) := _i(t) - _j(t) + β_i,j(t) and L_i,j(t):= _i(t) - _j(t) - β_i,j(t) are upper and lower bounds on the difference μ_i - μ_j. To be in the same setting as LUCB and UGapEc, which use confidence intervals on single arms, we would have β_i,j(t) := β_i(t) + β_j(t), where β_i's are confidence bonuses on single arms such that L_i(t) := _i(t) - β_i(t) and U_i(t):= _i(t) + β_i(t) are lower and upper confidence bounds on μ_i. Then (<ref>) and (<ref>) rewrite as
b_t := _i {U_i(t) -max_j≠ iL_j(t)},
c_t := _j≠ b_t U_j(t),
This resembles the sampling rule of UGap, which defines
b_t^UGap := _i{L_i(t)-max_j≠ i U_j(t) },
c_t^UGap := _j≠ b_t U_i(t),
and also pulls the least sampled so far. We note that a variant of our algorithm in which both b_t, c_t would be sampled (in the spirit of LUCB <cit.>) could also be analyzed using the same arguments employed in the proof of <ref>.
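A minimal sketch of this D=1 sampling step is given below, using the single-arm rewriting just described (per-arm bonuses β_i with β_i,j = β_i + β_j). It is an illustration only; the array names and the tie-breaking are ours.

import numpy as np

def ape_round_1d(mu_hat, counts, beta):
    # mu_hat[i]: empirical mean of arm i, counts[i]: number of pulls, beta[i]: per-arm bonus
    K = len(mu_hat)
    U = lambda i, j: mu_hat[i] - mu_hat[j] + beta[i] + beta[j]  # upper bound on mu_i - mu_j
    L = lambda i, j: mu_hat[i] - mu_hat[j] - beta[i] - beta[j]  # lower bound on mu_i - mu_j
    b = max(range(K), key=lambda i: min(U(i, j) for j in range(K) if j != i))
    c = min((j for j in range(K) if j != b), key=lambda j: L(b, j))
    # pull the least sampled of the two candidate arms
    return b if counts[b] <= counts[c] else c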
Note that when _1=0, for any i∈ S(t)^∁, g_i(t) > h_i^0(t). Indeed, by definition,
h_i^0(t) = min_j≠ i ((i,j,t) - β_i,j(t) ) = min_j≠ i (-(i,j,t) - β_i,j(t))
and since i∈ S(t)^∁, there exists i^⋆ such that (i,i^⋆,t)>0 (i.e _i(t) ≺_i^⋆(t)) and so
-(i,i^⋆,t) - β_i,i^⋆(t) < (i,i^⋆,t) - β_i,i^⋆(t).
Therefore,
min_j≠ i (-(i,j,t) - β_i,j(t)):= h_i^0(t) < max_j≠ i ((i,j,t) - β_i,j(t)):= g_i(t).
Thus for _1=0,
Z^0_2(t) = min_i∈ S(t)^∁ g_i(t).
In the sequel, for this section, we remove the dependence on _1 and write Z_i(t) instead of Z_i^0(t) for i=1 and i=2.
In particular, when D=1, _1=0, the stopping time τ_0 can be simplified to
τ_0 = inf{t∈^⋆ : Z_1(t)>0},
which is a consequence of the following lemma.
For D=1, _1=0,
inf{t∈^⋆ : Z_1(t)>0} = inf{t∈^⋆ : Z_1(t)>0 Z_2(t)>0} .
Let S(t) = {â_t}. Using the definition of h_i^0, g_i and (<ref>), Z_1(t) and Z_2(t) simplifies to
Z_1(t) = min_i≠â_t{_â_t(t) - _i(t) - β_â_t, i(t)},
Z_2(t) = min_i ≠â_t{max_j≠ i [ _j(t) - _i(t) - β_i,j(t)]}.
We have :
Z_1(t) > 0 ∀ i≠â_t, _â_t(t) - _i(t) - β_â_t, i(t)>0,
∀ i≠â_t, max_j≠ i[ _j(t) - _i(t) - β_j, i(t)]>0,
Z_2(t) >0.
Thus, Z_1(t) > 0 (Z_1(t)>0 Z_2(t)>0) and the reverse holds trivially. So
Z_1(t) >0 (Z_1(t)>0 Z_2(t)>0).
Letting â_t denote the empirical best arm after t rounds, the stopping rule of APE (with the instantiation proposed in Section <ref> based on confidence intervals on pairs of arms) reduces to
τ_0 = inf{t ∈^⋆ : ∀ i ≠â_t, (μ̂_â_t(t) - μ̂_i(t))^2/2(1/T_â_t(t) + 1/T_i(t))≥ 2C^g(log(K_1/δ)/2) + 2∑_a ∈{â_t,i}log(4+log( T_a(t)))}
which is very close to a Generalized Likelihood Ratio (GLR) stopping rule assuming Gaussian distributions with variance 1 for the rewards (which is known to also be correct for sub-Gaussian rewards) <cit.>. This modified stopping rule, compared to those of LUCB1 and UGapEc, can partially explain the empirical improvement observed in Section <ref>.
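The sketch below spells this stopping check out in Python, using the approximation C^g(x) ≈ x + log(x) adopted elsewhere in the paper; the constant written K_1 in the display is passed as an argument since its exact value is not restated here, and the function names are ours.

import numpy as np

def cg(x):
    # approximation C^g(x) ~ x + log(x) used in the paper
    return x + np.log(x)

def glr_like_stop(mu_hat, counts, delta, K1):
    a_hat = int(np.argmax(mu_hat))
    base = 2.0 * cg(np.log(K1 / delta) / 2.0)
    for i in range(len(mu_hat)):
        if i == a_hat:
            continue
        stat = (mu_hat[a_hat] - mu_hat[i]) ** 2 / (2.0 * (1.0 / counts[a_hat] + 1.0 / counts[i]))
        threshold = base + 2.0 * sum(np.log(4.0 + np.log(counts[a])) for a in (a_hat, i))
        if stat < threshold:
            return False  # some arm is not yet separated from the empirical best
    return True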
§ LUCB1-LIKE INSTANTIATION OF
In this section we derive an upper bound on the expectation of the sample complexity τ__1^k when is run with confidence bonuses similar to LUCB1 <cit.>. This is different from <ref>, for which the sample complexity is bounded only on the high-probability event, but, as for many algorithms in pure exploration <cit.>, we do not control what happens on ^∁. Therefore, our goal here is to upper-bound (τ) instead of ({}τ), which we did in <ref>. To adapt the strategy employed in <cit.>, we use similar confidence bonuses; thus we define for any arm i,
β_i(t) = √(2/T_i(t)log(5KDt^4/2δ)),
and for any pair i,j∈, β_i,j(t) = β_i(t) + β_j(t). Recalling the definition of and _t introduced in Section 3.1,
_t := ⋂_i=1^K ⋂_j≠ i⋂_d=1^D {L^d_i,j(t, δ) ≤μ_i^d - μ_j^d ≤ U^d_i,j(t, δ)}, and = ⋂_t =1^∞_t,
the lemma hereafter shows that with the choice of β_i's in (<ref>) and for 1-subgaussian marginals, () ≥ 1-δ.
It holds that () ≥ 1 - δ.
Letting
_t := ⋂_i=1^K⋂_d=1^D |_i^d(t) - μ_i^d|≤β_i(t),
we have _t ⊂. Indeed, on _t, for any i,j∈ and d≤ D,
_i^d(t) - _j^d(t) - β_i(t) - β_j(t) ≤μ_i^d - μ_j^d ≤_i^d(t) - _j^d(t) + β_i(t) + β_j(t),
which combined with β_i,j(t) = β_i(t) + β_j(t) yields _t ⊂_t so
(^∁) ≤∑_t=1^∞(_t^∁) ≤∑_t=1^∞(_t^∁).
Applying Hoeffding's inequality to the 1-subgaussian marginals yields
(_t^∁) ≤ ∑_i=1^K ∑_d=1^D (|_i^d(t) - μ_i^d| > β_i(t)),
≤ ∑_i=1^K ∑_d=1^D ∑_s=1^t (|^d_i, s - μ_i^d| > β^t, s) where β^t, s = √(2/slog(5KDt^4/2δ)),
≤ ∑_i=1^K ∑_d=1^D ∑_s=1^t 4δ/5KDt^4,
= 4δ/5t^3.
Finally,
(^∁) ≤ ∑_t=1^∞(_t^∁),
≤ 4δ/5∑_t=1^∞1/t^3,
≤ δ.
We can now state the main theorem of this section.
Let _1 ≥ 0, k≤ K and ν a bandit with 1-subgaussian marginals.
run with the β_i's of (<ref>) and associated to the stopping time τ__1^k outputs a valid set and its expected sample complexity is upper bounded as follows :
_ν(τ__1^k) ≤64√(e) Hlog( 5KD/2δ) +
256√(e) Hlog(256H) + 8π^2/15 + 1,
with H:= ∑_a=1^Kmax(Δ_a, _1, ω_k)^-2.
The correctness follows from <ref> combined with <ref>. It remains to upper-bound (τ__1^k). Note that this proof technique has already been used in <cit.> for LUCB-like algorithms.
Let n≥ 1 to be specified later and
(n) = ⋂_t∈ [1/2 n, n]_t.
Remark that
if (n) ∩{τ__1^k > n} holds, then ∑_t=1^n {{τ__1^k> t}∩(n)} = n.
We will show that for some choice of n, the RHS of (<ref>) will be strictly less than n so the LHS does not hold. We proceed by upper-bounding the RHS
∑_t=1^n {{τ__1^k > t}∩(n)} ≤ n/2 + ∑_t=n/2^n {{τ__1^k > t}∩(n)},
≤ n/2 + ∑_t=n/2^n {{τ__1^k > t}_t }.
From <ref>, <ref> and <ref>, we have that for any t∈ [n/2 , n],
the event {τ__1^k > t}∩_t implies max(Δ_a_t, _1, ω_k) ≤ 2β_a_t, a_t(t),
with β_a_t, a_t(t) = 2β_a_t(t). Therefore,
using this result back in (<ref>) and letting c_δ := (5KD/(2δ))^1/4, Δ̃_a := max(Δ_a, _1, ω_k) yields
∑_t=1^n {{τ__1^k > t}∩(n)}
≤ n/2 + ∑_t=n/2^n{Δ̃_a_t≤ 4β_a_t(t) }
≤ n/2 + ∑_t=n/2^n∑_a=1^K{ (a_t = a) Δ̃_a ≤ 4β_a(t) },
≤ n/2 + ∑_a=1^K∑_t=1/2 n^n {{a_t = a}{T_a(t) ≤128/Δ̃_a^2log(c_δ t)}}
≤ n/2 + ∑_a=1^K ∑_t=n/2^n {{a_t = a}{T_a(t) ≤128/Δ̃_a^2log(c_δ n)}}
≤ n/2 + ∑_a=1^K 128/Δ̃_a^2log(c_δ n)
≤ n/2 + 128 H log(c_δ n),
where H:= ∑_aΔ̃_a^-2. Then, choosing n such that
n/2 + 128 H log(c_δ n) < n,
that is
n > T^⋆ := inf{ s∈^⋆ : 128 H log(c_δ s)/s< 1/2},
would yield
∑_t=1^n{{τ__1^k > t}∩(n)} < n,
so
(n) ∩{τ__1^k > n} = ∅,
which means
{τ__1^k > n}⊂(n)^∁.
Therefore, for any n>T^⋆,
{τ__1^k > n}⊂(n)^∁.
Thus,
_ν(τ__1^k) = _ν(τ__1^k {τ__1^k ≤ T^⋆} + τ__1^k {τ__1^k > T^⋆})
≤ T^⋆ + _ν(τ__1^k {τ__1^k> T^⋆})
≤ T^⋆ + ∑_n=T^⋆ + 1^∞_ν(τ__1^k > n)
≤ T^⋆ + ∑_n=T^⋆ + 1^∞_ν((n)^∁),
using (<ref>) and union bound yields,
((n)^∁) ≤ ∑_t=n/2^n 4δ/5t^3,
≤ 4δ/5(1/2)n/(1/2)^3n^3,
= 16δ/51/n^2.
Then,
_ν(τ__1^k) ≤ T^⋆ + 16δ/5π^2/6
≤ T^⋆ + 8π^2/15.
Upper-bounding T^⋆ will conclude the proof.
It holds that
T^⋆ -1 ≤1/c_δexp(-W_-1( -1/256c_δ H)) ≤256√(e) Hlog(256c_δ H ).
Finally,
_ν(τ_δ) ≤ 256√(e) Hlog(256(5KD/(2δ))^1/4 H) + 8π^2/15 + 1,
≤ 64√(e) Hlog( 5KD/2δ) +
256√(e) Hlog(256H) + 8π^2/15 + 1,
which achieves the proof.
The same technique could be applied to upper-bound _ν(τ__1, _2).
Now we prove <ref>.
We have
128 Hlog(c_δ s)/s < 1/2 log(c_δ s)/ s < 1/256H
then, using <ref> yields
(<ref>)
s > 0 if 1/256H <c_δ / e
0<s≤1/c_δ or s ≥ N^⋆ else,
with
N^⋆ = 1/c_δexp(-W_-1( -1/256c_δ H )).
Therefore,
T^⋆ = inf{s ∈^⋆ : 128Hlog(c_δ s)/s < 1/2}≤
1 if 1/256H <c_δ / e
N^⋆ else.
Using <ref>, to upper bound N^⋆ yields
T^⋆ - 1 ≤256√(e) Hlog_+( 256c_δ H ),
where log_+(x) = max(0, log(x)).
§ TECHNICAL LEMMAS
Let a, b>0. If b < a/e then
log(ax)/x < b 0<x ≤1/a or x ≥1/aexp(-W_-1(-b/a)).
Moreover, if b≥ a/e, then for any x>0, log(ax)/x ≤ b.
We have
log(ax)/x < b -1/axlog(1/ax) < b/a
y log(y) > - b/a, y = 1/ax
1/ax≥ 1 or -b/a<ylog(y) < 0
since -b/a>-1/e and the negative branch W_-1 of the Lambert function is decreasing on [-1/e, 0],
log(ax)/x < b 0<x ≤1/a or W_-1(ylog(y)) ≤ W_-1(-b/a)
0<x ≤1/a or log(y) ≤ W_-1(-b/a)
0<x ≤1/a or ax ≥exp(-W_-1(-b/a))
0<x ≤1/a or x ≥1/aexp(-W_-1(-b/a)).
The second part of the lemma follows directly from log(x) ≤ x/e.
The following lemma is taken from <cit.>
For any x ∈ [0, -e^-1],
-log(-x) + log(-log(-x)) ≤ - W_-1(x) ≤ -log(-x) + log(-log(-x)) + min{1/2, 1/√(-xlog(-x))}
Let 0<a<1/e. It holds that
exp(-W_-1(-a)) ≤e^1/2/alog(1/a).
We recall the following lemma which is taken from <cit.>.
Using <ref> yields,
- W_-1(-a) ≤ -log(a) + log(-log(a)) + 1/2,
and taking exp on both sides gives the result.
Let Δ^2>0. Then, for t≥ 2,
t ≥1/Δ^2log(2log(3e^2/2Δ^2)) loglog(e^4t)/t < Δ^2.
We note that if Δ^2 ≥e/3, then the result follows trivially since it can be easily checked that for t≥ 2,
loglog(e^4t) ≤e/3t.
Therefore, in the sequel, we assume Δ^2 < e/3.
Let
t_Δ := 1/Δ^2log(2log(3e^2/2Δ^2)),
and
g(t) = t - 1/Δ^2log(log(e^4t)).
Then,
g'(t) = 1 - 1/Δ^2 t log(e^4t),
and g'(t) ≥ 0 for t such that Δ^2 tlog(e^4t)≥ 1. Using the Lambert function W_0, which is increasing on [0, ∞),
Δ^2 tlog(e^4t) ≥ 1 e^4tlog(e^4t) ≥e^4/Δ^2
log(e^4t) ≥ W_0(e^4/Δ^2)
t≥ t^0:=1/e^4exp(W_0(e^4/Δ^2))
and by definition of W_0, we have
W_0(x)exp(W_0(x)) = x,
so
exp(W_0(e^4/Δ^2)) = e^4/Δ^21/W_0(e^4Δ^-2),
and therefore,
t^0 = 1/Δ^21/W_0(e^4Δ^-2).
We will show that t_Δ > t^0. Indeed, since W_0 is increasing,
1/Δ^2 > 3/e W_0(e^4/Δ^2) ≥ W_0(3e^3) = 3
1/Δ^21/W_0(e^4Δ^-2)≤1/31/Δ^2 ,
that is
t^0 ≤1/31/Δ^2 .
On the other side,
1/Δ^2 > 3/e log(2log(3e^2/2Δ^2)) > log(2log(9e/2))
t_Δ≥1/Δ^2log(2log(9e/2))> 1/31/Δ^2
Therefore,
t^0 ≤1/31/Δ^2 <log(2log(9e/2))/Δ^2≤ t_Δ.
Thus, we have shown that t^0 ≤ t_Δ and for any t≥ t_Δ, g'(t) ≥ 0 so
∀ t ≥ t_Δ, g(t) ≥ g(t_Δ).
Showing that g(t_Δ) > 0, will conclude the proof.
Letting a = 3e^2/2, we have
g(t_Δ) > 0 1/Δ^2log(2log(a/Δ^2)) - 1/Δ^2log(log(e^4t_Δ)) > 0
log(2log(a/Δ^2)) - log(log(e^4t_Δ)) > 0
2log(a/Δ^2) - log(e^4t_Δ) > 0
log(a/Δ^2) - log(e^4t_ΔΔ^2 /a) >0
a/Δ^2 - e^4/aΔ^2 t_Δ > 0
a/Δ^2 - e^4/alog(2log(a/Δ^2))> 0,
then, observing that for x≥ 12,
log(2log(x)) ≤x/e^2,
and since
a/Δ^2 > (3e^2/2) × (3/e) > 12,
we have
log(2log(a/Δ^2)) ≤1/e^2a/Δ^2
so, using (<ref>) yields that the LHS of (<ref>) is larger than
3/2e^2 1/Δ^2 - e^2 1/Δ^2,
which is always positive.
Therefore,
∀ t ≥ t_Δ, g(t_Δ) > 0 ,
that is
∀ t ≥ t_Δ, loglog(e^4t)/t < Δ^2.
Let δ∈ (0, 1), Δ> 0 and c>0. Let f : t ↦√(g(δ) + cloglog(e^4t)/t)
where g is a non-negative function. Then, for any α∈ (0,1) and t≥ 2,
t ≥1/Δ^2( 1/α g(δ) + c/1-αlog_+(2log(c/(1-α)Δ^2))) f(t) < Δ.
Letting t≥ 2, we have
t ≥ t_1:= 1/α1/Δ^2 g(δ) g(δ)/t≤αΔ^2.
Furthermore, using <ref> yields
t ≥ t_2:= c/(1- α)Δ^2log_+(2log(3e^2/2(1-α)Δ^2)) loglog(e^4t)/t≤ (1- α)Δ^2/c.
Combining (<ref>) and (<ref>) yields for t≥ 2,
t ≥max(t_1, t_2) f(t)^2 < Δ^2,
so
t ≥ t_1 + t_2≥max(t_1, t_2) f(t) < Δ.
Let Δ > 0 and δ∈ (0, 1). Let
f(t) := 4√(2 C^g(log(1/δ)/2) + 4loglog(e^4t)/t).
Then
inf{n≥ 2 : f(n) < Δ}≤88/Δ^2log(4/δlog(12e/Δ)).
We have
f(t) = √(32 C^g(log(1/δ)/2) + 64loglog(e^4t)/t).
Therefore, letting
g(δ):= 32 C^g(log(1/δ)/2) and c= 64
and further using <ref> yields for any α∈ (0, 1) and t≥ 2,
t≥ t_α f(t) < Δ,
where
t_α := 1/Δ^2(32/αC^g(log(1/δ)/2) + 64/1-αlog(2log(96e^2(1-α)^-1Δ^-2))).
Since C^g(x)≈ x + log(x) <cit.>, and log(x) ≤ x/e we have
Δ^2 t_α≤(16 + 16/e)1/αlog(1/δ) + 64/1-αlog(2log(96e^2(1-α)^-1Δ^-2)).
Taking α = α^⋆ such that
(16 + 16/e)1/α^⋆ = 64/1-α^⋆,
that is setting
α^⋆ = 1+e/1+5e,
yields
Δ^2 t_α^⋆≤64/1 - α^⋆log( 2/δlog( 96e^2/(1-α^⋆) Δ^2)).
By numerical evaluation,
64/1 - α^⋆≈ 86 < 88 and 96/1-α^⋆≈ 130 < 12^2,
so
Δ^2 t_α^⋆ < 88log( 4/δlog( 12e/Δ)).
Therefore,
putting these results together, for t≥ 2
t≥ t_⋆ := 88/Δ^2log( 4/δlog( 12e/Δ)) f(t) < Δ
which yields
inf{n≥ 2: f(n) < Δ}≤max(2, t_⋆).
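As a numerical sanity check (not taken from the paper), one can scan for the smallest n with f(n) < Δ and compare it with the closed-form bound above; the values of δ and Δ below are arbitrary and C^g is again replaced by the approximation x + log(x).

import numpy as np

def cg(x):
    return x + np.log(x)  # C^g(x) ~ x + log(x)

def f(t, delta):
    return 4.0 * np.sqrt((2.0 * cg(np.log(1.0 / delta) / 2.0)
                          + 4.0 * np.log(np.log(np.exp(4.0) * t))) / t)

def first_n_below(delta, gap, n_max=10**6):
    # smallest n >= 2 such that f(n) < gap, found by brute-force scan
    for n in range(2, n_max):
        if f(n, delta) < gap:
            return n
    return None

delta, gap = 0.01, 0.5
scanned = first_n_below(delta, gap)
closed_form = 88.0 / gap**2 * np.log(4.0 / delta * np.log(12.0 * np.e / gap))
print(scanned, closed_form)  # the scanned value should not exceed the closed-form bound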
§ IMPLEMENTATION AND ADDITIONAL EXPERIMENTS
In this section, we give additional details about the experiments and additional experimental results.
§.§ Implementation
Setup We have implemented the algorithms mainly in interfaced with through the package. The experiments are run on an ARM64 8GB RAM/8 core/256GB disk storage computer.
For the function C^g we have used the approximation C^g(x) ≈ x + log(x) which is usually admitted <cit.>.
For the experiments on the real-world scenario, we generate a certain number of seeds (usually 2000) and use a different seed for each run on the same bandit. This procedure is identical for every experiment where we report the average sample complexity on the same bandit. To assess the robustness of our algorithm, the experiments on the synthetic dataset consisted in sampling bandit means uniformly at random for each configuration. For each sampled bandit, the compared algorithms are run once on the same instance and we record their empirical sample complexity. Finally, we report the average sample complexity across all the bandits of the same configuration.
Adaptation to bandits with marginals of different scaling We have presented the algorithm and the results specialized to the case where all the marginals are 1-subgaussian. Indeed, our results can be straightforwardly extended to the case where the marginals are instead all σ-subgaussian. Furthermore, there is a simple way to adapt the algorithm to the case where the marginals have different known subgaussianity parameters (i.e. different scalings), provided they are the same for every arm. The idea is to rescale each observation with the subgaussianity parameters. Let σ:= (σ_1, …, σ_D), σ_i>0. Assuming that the marginal distributions of each arm are respectively σ_1, …σ_D-subgaussian, each observation _A_t, s from arm A_t will be rescaled component-wise to X_A_t,s^d / σ_d before being given to the algorithm. It is easy to see that this rescaling does not change the Pareto set since all the means are divided by the same values coordinate-wise.
Furthermore, by defining
^σ(i,j) := max_d (μ_i^d - μ_j^d/σ_d),
and ^σ(i,j) =: - ^σ(i,j), all the results proved for 1-subgaussian distributions still hold using ^σ and ^σ in the definition of the gaps (Section 4).
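A minimal sketch of this rescaling and of the rescaled dominance margin is given below; the variable names are ours and the margin follows the display above.

import numpy as np

def rescale(observation, sigma):
    # component-wise rescaling X^d / sigma_d of a reward vector before it is fed
    # to the algorithm; the Pareto set is unchanged since every mean is divided
    # coordinate-wise by the same positive constants
    return np.asarray(observation, dtype=float) / np.asarray(sigma, dtype=float)

def m_sigma(mu_i, mu_j, sigma):
    # rescaled margin max_d (mu_i^d - mu_j^d) / sigma_d
    return float(np.max((np.asarray(mu_i) - np.asarray(mu_j)) / np.asarray(sigma)))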
§.§ Data processing
Dataset The dataset is extracted from <cit.> and some processing steps are applied to compute the covariance matrix of the distribution. First, as observed in <cit.>, the 3 immunogenicity indicators extracted are weakly correlated; therefore, we assume the covariance matrix to be diagonal. To compute the variance of the marginals, we use the log-normal assumption made for the data reported in <cit.>.
Using this log-normal assumption, the authors have provided, for each arm and each indicator, the geometrical mean, the sample size and a 95% confidence interval on the geometrical mean based on the central limit theorem.
For each of the K=20 arms (combinations of three doses), we use this information to compute the sample variance of each immunogenicity indicator.
Moreover, we compute the arithmetic average of the log outcomes which is obtained by taking the log (base e) of the geometrical empirical mean:
x̅ = log(x̅_geometrical),
= log( (∏_i=1^n x_i)^1/n),
= n^-1∑_i=1^n log(x_i),
where x_1, …, x_n are the observations which are assumed to be log-normal.
x̅ represents by assumption the empirical mean of a Gaussian distribution, which we use as a proxy for its true, unknown mean. From this, we built a bandit model where each arm is a 3-dimensional Gaussian distribution with independent coordinates, whose means are given by the corresponding mean estimates (reported in <ref>) and in which the variance of each indicator is the pooled variance over the different arms (given in <ref>).
Sampling an arm in this bandit simulates the measurement of the (log of the) 3 immunogenicity criteria in consideration on a new patient.
The 20 arms are classified into two groups. Each three/four-letter acronym denotes a vaccine candidate. Prime BNT/BNT corresponds to giving BNT as first and second dose, and similarly for Prime ChAd/ChAd. For example, ChAd in the group Prime BNT/BNT means giving BNT as first and second dose and ChAd as third dose (booster).
§.§ Additional experiments
§.§.§ Additional experiments for _1-APE-k
In this section we show that for some instances our algorithm can require up to 3 times fewer samples compared to . This is due to the strategy of , which continues sampling arms identified as optimal until they are shown not to dominate any arm in the active set. For example, on <ref>, the optimal arm 2 is "easy" to identify as such. However, since it slightly dominates the sub-optimal arm 1, should continue sampling arm 2 until arm 1 is removed from the active set (likely this will happen when the algorithm "sees" that arm 1 is dominated by arm 3). We would expect our adaptive sampling rule to avoid this behaviour.
<ref> shows that APE takes nearly half the average sample complexity of on this instance. In particular, <ref> shows the average number of pulls taken by divided by the average number of pulls taken by 0-APE-K for each arm. We can observe that the major difference in sample complexity is due to arm 2 being pulled nearly 6 times more by w.r.t .
By increasing the number of arms and the dimension we can generate instances similar to <ref> where the gap between our algorithm and is even larger. We chose a specific instance where K=12, D=10 and there are 11 optimal arms.
On this instance (<ref>), we can see that our algorithm uses 3 times less samples than .
Finally, combining these additional experiments with the results of <ref>, we observe that on average _1-APE-k performs nearly 20% better than , but there are some instances where the gap can be even larger. Of course, this also means that there should exist instances in which the improvement is smaller than 20% to compensate for instances like <ref>. But we note that instances like <ref> are very unlikely to be generated randomly, so there should only be a few of them among the 2000 instances used in <ref>.
§.§.§ (_1, _2)-APE
We investigate the empirical behavior of (_1, _2)-APE for identifying an (_1, _2)-cover. We set _1=0, δ=0.01 and we test different values of _2 ∈{0, 0.05, 0.1, 0.2, 0.25}. We average the results over 2000 independent trials with different seeds on the same instance (<ref>). We use multi-variate Bernoulli with independent marginals. The instance of <ref> is a toy example where (_1, _2)-covering can be meaningful and reduce the sample complexity. The 3 Pareto optimal vectors are chosen by hand and the last 2 vectors are randomly uniformly generated.
We can observe on <ref> that the sample complexity decreases as _2 increases.
This is further confirmed in <ref> which shows the empirical sample complexity and the average size of the recommended cover versus _2 for 50 equally-spaced values of _2 between 0 and 1/2. The drops observed in <ref> correspond to the values of _2 for which removes an optimal arm from the cover to save some samples (<ref>). A major decrease in the sample complexity corresponds more or less to an arm being removed from the recommended set. We observe in <ref> the histogram of occurrence of each arm in the recommended set for 3 values of _2 corresponding more or less to the middle of each plateau. We can see that for _2 = 0.15, arm 0 is always recommended, but the others are recommended on half of the runs. For _2 =0.4, the algorithm nearly always recommend arm 0, which as the largest ω_i term (i.e the easiest to identify as optimal).
The plateau in the sample complexity for large values of _2 (>0.3) is explained by the fact that the algorithm needs to identify at least one optimal arm (which is reflected in the size of the returned set <ref>).
Indeed, for _1=0 fixed, an algorithm for (_1, _2)-covering still needs to assert that the arms in the recommended set are truly optimal, which will require some samples even when _2 is very large. Thus, for _2 >0.3 the algorithm needs to identify at least one optimal arm, and we can see on <ref> that for these values of _2, the recommended set contains only one optimal arm. In fact, we can observe empirically that the "limit" sample complexity observed in <ref> is close to the average sample complexity of 0-APE-1 on the same instance (4073 samples).
§.§.§ Comparing _1-APE-k to an adaptation of
In this section, we compare _1-APE-k to an adaptation of which stops earlier if at least k optimal arms have been identified. The pseudo-code of the algorithm is given in <ref>. As shown in <cit.>, arms in P_1(t) are already identified as optimal, but when the goal is to identify the Pareto set, some of them (namely P_1(t)\ P_2(t)) need to be sampled again until all the arms they potentially dominate are removed from A(t); only then will those arms belong to P_2(t) and be removed from the active set. However, for the k-relaxation, identifying k optimal arms is enough, so the algorithm can stop as soon as | P(t-1) ∪ P_1(t)|≥ k. If this never occurs, then the algorithm will follow the initial stopping condition, that is to stop when A(t) = ∅.
We set δ=0.1 and compare both algorithms on 2 types of randomly uniformly generated Bernoulli instances. For the first type we set K=10, D=2 and for the second one, we set K=50, D=2. For the instances with K=10, we set _1 = 0.05 and run both algorithms on 2000 random Bernoulli instances. For the second type of instances (K=50), we set _1 = 0.1 and benchmark the algorithms on 500 random instances. The average size of the Pareto set was 2.90 (for K=10, D=2) and 4.51 (K=50, D=2).
<ref> shows the average sample complexity of the algorithms for different values of k ∈{1, …, 5}. We can observe that the difference between _1-APE-k and for the 5 values reported is more pronounced for K=50 than for K=10. Put together, these experiments show that our algorithm is still preferable for the k-relaxation.
§.§.§ Comparison to some BAI algorithms
We evaluate the performance of for Best Arm Identification (D=1) on two randomly generated instances: one with K=5 and means (rounded)
_1 := (0.25, 0.16, 0.87, 0.22, 0.98), and the second one with K=10 and means (rounded) _2 := (0.43, 0.33, 0.56, 0.85, 0.20,
0.93, 0.70, 0.82, 0.56, 0.78). We use the instantiation of with confidence bonuses on pairs of arms but without the possible improvement invoked in <ref>.
UGap and LUCB are implemented with the tightest confidence bonus known to us, taken from <cit.> (in the spirit of the finite-time law of the iterated logarithm <cit.>). LUCB++ is used with the improved scheme given in <cit.>.
We set δ=0.01 but the empirical error was way smaller. The results are averaged over 1000 independent trials.
On the -axis is the sample complexity in units of (an approximation of) the BAI lower bound for Gaussian rewards with σ=1/2 (as Bernoulli distributions are 1/2-subgaussian) <cit.>:
Hlog(1/2.4δ) with H =∑_i=1^K 1/2Δ_i^2.
|
http://arxiv.org/abs/2307.02367v1
|
20230705153239
|
Distance Preserving Machine Learning for Uncertainty Aware Accelerator Capacitance Predictions
|
[
"Steven Goldenberg",
"Malachi Schram",
"Kishansingh Rajput",
"Thomas Britton",
"Chris Pappas",
"Dan Lu",
"Jared Walden",
"Majdi I. Radaideh",
"Sarah Cousineau",
"Sudarshan Harave"
] |
cs.LG
|
[
"cs.LG",
"physics.acc-ph"
] |
1]Steven Goldenbergcor1
[email protected]
1]Malachi Schram
[email protected]
1]Kishansingh Rajput
[email protected]
1]Thomas Britton
[email protected]
2]Chris Pappas
[email protected]
2]Dan Lu
[email protected]
2]Jared Walden
[email protected]
3]Majdi I. Radaideh
[email protected]
2]Sarah Cousineau
[email protected]
4]Sudarshan Harave
[email protected]
[1]organization=Thomas Jefferson National Accelerator Facility, city=Newport News, postcode=VA 23606, country=USA
[2]organization=Oak Ridge National Laboratory, city=Oak Ridge, postcode=TN 37830, country=USA
[3]organization=Department of Nuclear Engineering and Radiological Sciences, The University of Michigan, city=Ann Arbor,
postcode=MI 48109, country=USA
[4]organization=SLAC National Accelerator Laboratory, city=Menlo Park, postcode=CA 94025, country=USA
[cor1]Corresponding author
Providing accurate uncertainty estimations is essential for producing reliable machine learning models, especially in safety-critical applications such as accelerator systems.
Gaussian process models are generally regarded as the gold standard method for this task, but they can struggle with large, high-dimensional datasets. Combining deep neural networks with Gaussian process approximation techniques has shown promising results, but dimensionality reduction through standard deep neural network layers is not guaranteed to maintain the distance information necessary for Gaussian process models.
We build on previous work by comparing the use of the singular value decomposition against a spectral-normalized dense layer as a feature extractor for a deep neural Gaussian process approximation model and apply it to a capacitance prediction problem for the High Voltage Converter Modulators in the Oak Ridge Spallation Neutron Source.
Our model shows improved distance preservation and predicts in-distribution capacitance values with less than 1% error.
Accelerators, Spallation Neutron Source, Machine Learning, Uncertainty Quantification, Gaussian Processes
Distance Preserving Machine Learning for Uncertainty Aware Accelerator Capacitance Predictions
August 1, 2023
==============================================================================================
§ INTRODUCTION
Machine learning (ML) and deep neural networks (DNNs) provide extraordinary accuracy for prediction of complex systems when paired with large datasets like those produced by the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS).
Models trained with these vast amounts of available accelerator data can improve accelerator availability by forecasting anomalies and pending failures in complex systems.
However, as ML and DNNs become increasingly relevant to these complex and critical applications, it is important to provide models that are both reliable and trustworthy.
In order to do this, ML models need to provide well-calibrated uncertainty estimations to avoid catastrophic system failures if poor predictions arise from previously unseen input data.
These poor predictions are particularly likely when the data-driven ML model is applied on out-of-distribution (OOD) data.
This work focuses on using ML with uncertainty quantification (UQ) to predict the capacitor degradation within the High-Voltage Converter Modulators (HVCMs) in the SNS.
A simplified schematic of an HVCM <cit.> and klystron load is shown in Figure <ref>.
The resonant capacitors, Ca, Cb, and Cc at the secondaries of the pulse transformers have recently caused significant downtime due to failure. For brevity, we call these capacitors A, B and C respectively in the rest of this paper.
These capacitors have historically been a film/foil type of construction, but they are now being replaced by metallized film capacitors which show a loss of capacitance as they age before eventual failure <cit.>.
As capacitance measurements require nearly a full week to collect, we only have access to a very limited set of labeled real data. Instead, we leverage an LTSpice simulation of the HVCM to learn a relationship between capacitance and waveforms available in the real data from existing sensors <cit.>. There were several deficiencies with earlier simulations when compared to real HVCM data due to differences in switch timing between simulated and real waveforms, as well as differences in the voltage settings <cit.>. The LTSpice model now includes tuning of the IGBT switch timing to match that of real HVCM waveform data.
We propose using a DNN for producing accurate predictions of the capacitance because such networks are known to scale well to the large and high-dimensional datasets available.
Recent work on neural Gaussian processes (GP) has shown significant promise for a single forward pass model when paired with approximation techniques like random Fourier features (RFF) or inducing point methods <cit.>.
Additionally, Gaussian process approximations (GPA) have been used to provide UQ for other accelerator systems at the SNS and Fermilab National Accelerator Laboratory <cit.>.
Deep Neural Gaussian Process Approximation (DNGPA) methods also benefit from explicit input distance awareness when care is taken to preserve pairwise distances from the input layer to the latent space used to build the Gaussian process covariance matrix, which is used for predicting OOD uncertainty.
With high-dimensional data, feature reduction is often required, but doing so may result in drastic changes in the distances between samples in the reduced space, which potentially results in unreliable uncertainty estimations. Ideally, our DNN appropriately preserves meaningful distances from the input space; however, previous work by <cit.> mainly focuses on distance preservation within dimensionally identical layers.
This work focuses on maintaining sample distances while performing dimensionality reduction within the context of DNGPA methods.
To do this, we evaluate the use of the singular value decomposition (SVD) for performing the dimensionality reduction to maintain sample distances and improve uncertainty estimates when the model is applied to both in-distribution (ID) and OOD data. This work is a novel extension of <cit.> and further emphasizes the importance of distance awareness for uncertainty quantification and OOD prediction.
We compare our model to three other methods for producing uncertainty estimates in DNNs and show significant improvements in predictive power and highly accurate ID uncertainty estimates.
Our model allows for nearly real-time predictions of the three capacitance levels in a non-invasive way, reduces system downtime, and avoids the cost of additional sensors.
Trends in this predicted capacitance data could also inform the performance of preventative maintenance including replacing worn components before failure.
§ DATA PREPARATION
Given the challenges in gathering training data for HVCMs throughout its capacitors' lifetime, we relied on synthetic data sets gathered from LTSpice simulations <cit.>.
These simulations contain a variety of artifacts: discontinuities born from how the simulation converges to its final solution. These artifacts, especially those that have magnitudes greater than the underlying waveforms, profoundly impact the data normalization and the ability of the ML method to “learn” the correlations between the waveforms themselves and the values of capacitance present in the HVCM.
To combat this problem, the traces representing the currents in the HVCM (the only traces containing excursion artifacts) were cleaned.
Cleaning was performed by applying a LULU filter <cit.>, which provides a fast and idempotent algorithm to effectively remove impulsive errors in the simulation data.
This kind of filter may alter accurate data points by a minimal amount, but drastically improves all excursions and provides a stable range for training.
As an example, Figure <ref> shows a portion of an example trace from the real system as well as the range of simulation samples (min/max) before and after cleaning.
The total dataset contains 1792 samples with 7 waveforms containing 5261 timesteps each.
These seven waveforms include “V_out” and six current waveforms: “IAPS”, “IAP”, “IBPS”, “IBP”, “ICPS”, “ICP”.
The six current waveforms are measured from the positive buses of the three H-bridge phases (“IAP”, “IBP”, “ICP”) with a star (“S”) and non-star waveform determined by the direction of power flow in each phase (controlled by pairs of transistors).
These simulations come from two sets of capacitor values.
In 100 picofarad (pF) increments, the first set ranges from 2500 to 2800 pF for OOD testing (64 samples), while the other ranges from 2900 to 4000 pF and is randomly split for in-distribution training and testing (1382 and 346 samples respectively).
Three distinct regions were identified in the simulated waveforms: “boot”, “stable”, and “ring-down” (see Figure <ref>).
The boot region extends from index zero to approximately index 860.
The stable region comprises the bulk of the waveform from index 860 to index 3260.
The remaining timesteps comprise the ring-down region.
Only the final 1000 timesteps of the stable region are considered in this analysis, as this is the region with the best agreement with real data and this choice reduces the quantity of highly repetitive portions of the stable waveform.
§ METHODS AND TECHNIQUES
The review by Abdar et al.<cit.> gives a comprehensive background of currently available methods for UQ in deep learning models.
While there are many available options, our work compares methods such as Monte Carlo (MC) Dropout <cit.>, Deep Quantile Regression (DQR) <cit.> and Spectral Normalized Gaussian Processes (SNGP) <cit.>.
Our new approach modifies an SNGP model by altering the feature reduction step necessary for high-dimensional data.
We provide more detailed descriptions of these models in the following subsections.
Our work does not consider ensemble methods for producing UQ; however, we use ensembles to estimate the variability and robustness of each solution.
Specifically, this provides a mean and standard deviation for our accuracy and uncertainty calibration metrics in Section <ref>.
§.§ Deep Quantile Regression
Unlike standard linear regression which approximates the conditional mean over the training data, quantile regression attempts to estimate quantiles of a response variable.
Therefore, it can be more robust to outliers as it uses the conditional median as a prediction.
Additionally, because the outputs are conditioned on the desired quantile, we can estimate multiple quantiles at once to obtain uncertainty estimations.
Deep quantile regression (DQR) extends this idea by applying quantile regression techniques to deep learning models to learn more complex functions.
Given a conditional quantile τ, and input feature vector x, we can define the conditional quantile function:
Q_y(τ | x) = G_τ(x , w)
where G_τ(x,w) is a non-linear function described by a DNN. For each desired quantile, the loss function used for regression is given by
ℒ(y,ŷ) = max(τ(y - ŷ), (τ - 1)(y - ŷ))
where y and ŷ are the label and prediction of the model respectively.
In this paper, we define our desired quantiles as τ = [0.159, 0.5, 0.841].
The median provides a robust prediction while the first and last quantiles match the expected proportions for one standard deviation from the mean in a normal Gaussian distribution.
To obtain a single uncertainty estimation, we take the average of G_0.5(x, w) - G_0.159(x, w) and G_0.841(x, w) - G_0.5(x, w).
We note that the differences between these quantiles may be large as they are not guaranteed to be symmetric around the median. Nonetheless, these choices allow for direct comparisons with the other models in this paper.
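For illustration, the quantile loss of Equation <ref> and the uncertainty aggregation described above can be written in a few lines of NumPy; this is a generic sketch rather than the authors' implementation, and the function names are ours.

import numpy as np

def pinball_loss(y, y_hat, tau):
    # max(tau * (y - y_hat), (tau - 1) * (y - y_hat)), averaged over the batch
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def dqr_uncertainty(q_low, q_med, q_high):
    # average of the upper and lower quantile spreads around the median
    return 0.5 * ((q_med - q_low) + (q_high - q_med))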
§.§ Bayesian Neural Networks
MC-Dropout <cit.> has become a popular model for estimating prediction uncertainty in DNNs due to its ease of implementation and benefits as a model regularizer which prevents over-fitting of the network.
Additionally, it can be viewed as a slightly cheaper approximation to a Bayesian model, however its relationship to a true Bayesian model has recently been called into question <cit.>.
MC-Dropout estimates the uncertainty by computing the standard deviation from a set of inferences where each inference differs by the location of randomly removed nodes between layers (by setting values to 0).
Since uncertainty predictions for BNNs come from multiple inferences, as opposed to a single model output, this model is significantly slower for both training and final inferences when compared to the other models we implemented.
Our implementation of MC-Dropout includes a trainable parameter for the dropout probability, which allows the model to fit the uncertainty estimations using the log-loss:
ℒ_dropout = 1/N∑_i=1^N y - ŷ^2/2σ(x)^2 + log(σ(x)^2)/2
This loss function is based on the negative log-likelihood and appropriately balances the standard deviation with the squared residual <cit.>.
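A NumPy sketch of this loss for a scalar target is shown below; it is illustrative only, omits the network itself, and uses names of our choosing.

import numpy as np

def dropout_nll(y, y_hat, sigma2):
    # (y - y_hat)^2 / (2 sigma^2) + log(sigma^2) / 2, averaged over the batch,
    # where sigma2 is the predicted variance
    y, y_hat, sigma2 = map(np.asarray, (y, y_hat, sigma2))
    return np.mean((y - y_hat) ** 2 / (2.0 * sigma2) + np.log(sigma2) / 2.0)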
§.§ GP Methods
GP methods are often considered ideal for uncertainty estimation as they become more uncertain as test input samples move away from the training data.
However, the cost of a standard GP regression model on n training samples is O(n^3) which is infeasible for large datasets.
Approximation methods like RFFs or inducing point methods can significantly improve this by bounding computation through limiting the size of the covariance matrix used for UQ.
In order to produce uncertainty estimates, GPs calculate the covariance of the conditional joint Gaussian distribution given by
cov(f^*) = K(X_*, X_*) - K(X_*, X)[K(X, X) + σ^2_n I]^-1 K(X, X^*),
where X and X_* are the training and testing input data, σ_n^2 is a noise variance term, and K represents a chosen kernel function.
The kernel function is very flexible and may have many parameters in order to change the covariance calculations.
Our model calculates an RFF approximation to the Gaussian radial basis function (RBF) kernel with a trainable parameter (λ) that controls the width of distribution.
The formula for the standard Gaussian RBF is given below:
K(x_1,x_2) = e^-x_1 - x_2^2/2λ^2.
The noise variance σ_n is also a trainable parameter within our model.
Like our BNN model, we train these two parameters using the loss from Equation <ref>.
It is important to note that, unlike a typical GPA model, the kernel that produces the uncertainty estimations is not used to compute predictions.
Instead, the predictions are computed through a dense neural network layer without bias that takes the RFFs as an input.
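The random Fourier feature construction used here is standard; the sketch below is a minimal NumPy version for the Gaussian RBF kernel of Equation <ref>, with names and the random seed chosen by us.

import numpy as np

def rff_features(x, n_features, length_scale, seed=0):
    # features whose inner products approximate exp(-||x1 - x2||^2 / (2 length_scale^2))
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    w = rng.normal(0.0, 1.0 / length_scale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

# K(x1, x2) is then approximated by the dot product of the corresponding feature rows.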
§.§ Distance Preservation Techniques
GPs rely on the distances between inputs to accurately represent uncertainties due to their inclusion in the kernel function <ref>.
If a DNN is used to reduce the dimensionality of the input prior to a GP layer, the latent representation may distort these distances, causing worse performance.
Ideally, we would like the DNN to maintain bi-Lipschitz constraints such that:
L_1 x_1 - x_2≤h(x_1) - h(x_2)≤ L_2 x_1 - x_2.
Here h(x_1) is the latent representation of input sample x_1.
§.§.§ Spectral Normalization
<cit.> demonstrated the issues caused by lack of distance preservation and attempted to solve them by limiting the spectral norm of dense layers inside of residual networks.
This works because of the skip connections that enforce the latent representation to be h(x)=x+f(x).
Since spectral normalization enforces a bound on the spectral norm of f(x), these layers avoid changing distances too drastically. In fact, <cit.> presents a proof that given a maximum spectral norm, α≤ 1, the bi-Lipschitz constraints in Equation <ref> can be maintained for l layers with L_1=(1- α)^l-1, L_2=(1+ α)^l-1.
One aspect not explored in their work was distance preservation when reducing the dimension of the input space, as the residual network can no longer directly use the input for the skip connections. Instead, the output of a residual layer will be h(x)=g(x)+f(x) where we require a g(x) that provides dimensionality reduction while maintaining distances.
<cit.> references several other works that describe methods of preserving approximate isometry after significant dimensionality reduction, however, these methods were not used in their work.
The work presented in this paper explores the effects of two different methods for dimensionality reduction: a spectral-normalized dense layer and the singular value decomposition (SVD).
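As a rough sketch of the first option, spectral normalization of a dense layer's weight matrix can be approximated with a few power-iteration steps, as below; real implementations typically apply this during training, and the tolerance α and names are our assumptions.

import numpy as np

def spectral_normalize(w, alpha=0.95, n_iter=20, seed=0):
    # rescale w so that its largest singular value is at most alpha,
    # with the leading singular value estimated by power iteration
    u = np.random.default_rng(seed).normal(size=w.shape[1])
    for _ in range(n_iter):
        v = w @ u
        v /= np.linalg.norm(v)
        u = w.T @ v
        u /= np.linalg.norm(u)
    sigma = float(v @ w @ u)
    return w if sigma <= alpha else w * (alpha / sigma)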
§.§.§ Singular Value Decomposition
The singular value decomposition along with other orthogonal projection methods like principal component analysis and the eigenvalue decomposition, are well-studied algorithms for producing a rank-reduced representation of datasets.
Specifically, the SVD produces a decomposition of the data matrix, X ∈ℝ^d× n such that X=UΣ V^T where U and V are orthonormal bases and Σ is a diagonal matrix containing the singular values of X.
This decomposition can be truncated to produce a weight matrix with a given output dimension by using only the first k columns of V.
Unlike a spectral normalized dense layer, these methods produce orthonormal weight vectors from the columns of V which guarantees both the spectral norm and the smallest singular value of the weight matrix are 1.
While this is a stricter requirement than spectral normalization, we can maintain model expressiveness through the subsequent residual network.
Additionally, this constraint significantly improves our distance preservation in many cases by optimizing the norm preservation for the training data.
Specifically, given a sample x (i.e. row of X), we have
x - √(∑_j=k+1^n σ_j^2)≤xW ≤x.
The proof of this inequality is provided in <ref>.
The lower bound on xW can provide guarantees for sample norm preservation as the last n-k singular values are often very small for approximately low-rank data matrices.
Additionally, this bound is very conservative as it assumes the last n-k entries of the row of U corresponding to x are all 1, which cannot be true due to the orthonormal columns of U.
A more reasonable assumption would be that each entry is a 𝒩(0,1/√(d)) Gaussian random variable, which would scale the expectation of the summation in the bound by d^-0.25.
In order to compute a truncated SVD, we require multiplications with the entire training dataset on a set of k+b vectors (where b is some relatively small oversample) which requires O(nd(k+b)) time.
With very large and high-dimensional data, this may seem impractical.
However, there are algorithms like incremental SVD <cit.> that compute the SVD in a batched way that matches the way most DNNs are trained.
Assuming batches larger than k, we expect this method to produce similar results to the ones we present in this paper.
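A minimal NumPy sketch of this construction is given below: the first k right singular vectors of the training matrix form the fixed projection, and the tail singular values give the slack term in the norm bound above. Names are ours.

import numpy as np

def svd_projection(x_train, k):
    # x_train: samples-by-features training matrix; returns the orthonormal
    # projection W (first k right singular vectors) and the tail term of the bound
    _, s, vt = np.linalg.svd(x_train, full_matrices=False)
    w = vt[:k].T                            # shape (n_features, k)
    tail = float(np.sqrt(np.sum(s[k:] ** 2)))
    return w, tail

# Usage: z = x @ w gives the reduced representation, and row-wise
# ||x|| - tail <= ||x @ w|| <= ||x|| per the inequality above.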
§.§ Model Parameters
All models used in this work include a feature extraction layer on the flattened dataset, a 5-layer residual network (ResNet) with Dropout, an RFF layer, and a Dense output layer for predictions as seen in Figure <ref>.
Even though the RFF layer is not required by the DQR and BNN models to provide UQ, we chose to include it to maintain a consistent forward path for each model.
For information about model parameters, see Table <ref>.
However, our models differ in a few key areas.
The initial feature reduction layer for the SVD-DNGPA model utilizes the SVD while all other models tested used a Dense layer.
Moreover, we apply the spectral normalization constraint only for Dense layers used by the DNGPA models, since distance preservation is not required for the DQR and BNN models to produce accurate uncertainties.
Lastly, our BNN model uses a trainable value for the Dropout layer to optimize its uncertainty estimates, while all other models use a consistent 10% dropout rate.
§ RESULTS
In this section, we test our new model against a more standard DNGPA model, a BNN approximation (MC-Dropout), and DQR.
Each model is trained 15 times with different random initial seeds to create an ensemble of models.
This ensemble provides a mean and standard deviation for each metric we report.
Additionally, we quantify their performance for both ID and OOD predictions and uncertainties.
For OOD testing, we introduce a shift in the output labels.
The best models should be able to generalize to these OOD samples and should increase their uncertainty estimations as the sample labels move further away from the training basis.
§.§ GP Model Distance Preservation
For GP models, as uncertainties are derived from a kernel based on the input to the GP layer, it is critical to maintain a strong correlation between the GP input in the latent space and the original input.
As an initial test for the bound we derived for the SVD norm preservation, we computed the first 64 singular values and compared them with the Frobenius norm of the data matrix.
Specifically, on our training data and k = 64, we compute:
√(∑_j = k+1^n σ_j^2) = √(X_F^2 - ∑_j=1^k σ_j^2) = 11.97.
This can be compared with the norm of our original samples which is bounded by x > 39.10.
The true amount of norm degradation caused by the SVD alone is even smaller than the scaled bound with max(x - xW) ≈ 0.015.
This is further reinforced in Figure <ref>, which shows how using the SVD in our model significantly improves the correlations between input distances and distances in the latent space compared to a more standard spectral-normalized dense layer. As desired, both the spread of the correlation plot and the deviations from the optimal correlations (dotted line) are reduced.
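The correlation shown in the figure can be reproduced for any feature extractor with a simple check like the following NumPy sketch (names ours): compute pairwise Euclidean distances in the input and latent spaces and correlate them.

import numpy as np

def pairwise_distances(x):
    diff = x[:, None, :] - x[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return d[iu]

def distance_preservation_score(x_input, x_latent):
    # Pearson correlation between input-space and latent-space pairwise distances;
    # values close to 1 indicate a distance-preserving feature extractor
    return float(np.corrcoef(pairwise_distances(x_input),
                             pairwise_distances(x_latent))[0, 1])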
§.§ In-Distribution Results
Deep learning models are specifically trained to obtain accurate predictions and uncertainty estimations on ID evaluation samples as they are generally consistent with the training data.
To compare the accuracy of our models, we calculate the coefficient of determination (R^2) and the Root Mean Squared Error (RMSE). Given a perfect model, these values should be 1 and 0 respectively.
For our uncertainty estimations, we calculate the Root Mean Squared Calibration Error (RMSCE) and the Mean Absolute Calibration Error (MACE) using the Uncertainty Toolbox (Chung et al. 2021), where values closer to zero indicate a well-calibrated model.
In order to ensure our results are robust, we trained an ensemble of 15 models for each method with varying random seeds to produce a mean and standard deviation for each metric.
Tables <ref> and <ref> show the results of these tests.
While all models perform well on our ID test set, SVD-DNGPA has slightly better average performance for every metric.
All models have an RMSE less than 10pF which is significantly less than the 10% manufacturer tolerances for these capacitors (≈ 300pF).
We note that while DQR is the second-best method in terms of the average calibration performance, it has a smaller standard deviation indicating that this method may produce marginally more robust uncertainty estimations.
To further illustrate the performance of these methods for uncertainty estimations, we provide Figure <ref>, which shows a modified miscalibration area plot with the mean and standard deviation over 15 trials for each model.
SVD-DNGPA provides very accurate uncertainties with very little variation from one trial to the next.
This result is a significant improvement on the standard DNGPA model and rivals DQR for our in-distribution testing.
To be thorough, we also tested our models by changing the size of the latent space to see if BNN would improve or if other trends were present.
We found that smaller models (e.g. latent size of 32) often exhibited more variability in the test results although DQR seemed less affected by the change.
We believe this may be due to the loss function used by the BNN and DNGPA models, which can explode if the standard deviation approximation is too small.
On the other hand, we saw very little difference between a latent space size of 64 and 128 and therefore chose the smaller model for efficiency.
§.§ Out-of-Distribution Results
While it is always preferable to train on the full range of possible inputs/outputs, practical limitations often prevent full exploration.
For example, obtaining real data where outputs represent an unstable system may create a safety risk.
Furthermore, in a dynamic system, distribution drift and anomalous behaviors can arise which may not be easily known prior to training the ML model.
In theory, the best models should be able to generalize well to OOD data and provide higher uncertainty when moving away from the training basis.
To test our models for this behavior, we measured the same statistics as our ID test, but on a set of samples where all three capacitors had capacitance values of 2800pF and below.
We don’t expect any of our models to provide accurate uncertainties, as our models have not been tuned for this OOD data.
However, it is still very desirable to have reasonable predictions and uncertainties that grow consistently based on their distance from the original training data.
Tables <ref> and <ref> show the results of our models over 15 random initializations.
We see that all models perform significantly worse than the training data for predictions as the RMSE scores increased by nearly an order of magnitude and the standard deviation for our results increased by nearly two orders.
However, SVD-DNGPA still provides reasonable results with an R^2 value greater than 0.8 and an RMSE that was two to four times smaller than all of the other models we tested.
For completeness, we also provide the RMSCE and MACE scores, but all models perform within a factor of approximately 1.5 and have large standard deviations relative to the score, which is expected given the OOD nature of the test set.
To further analyze the models, we plot the RMSE in Figure <ref> for each triplet of capacitance values in our test set.
As expected, the RMSE degrades as we move towards smaller output labels (moving right to left, top to bottom).
Additionally, the RMSE is significantly smaller for the two DNGPA models, suggesting that these models are much better at generalizing to OOD data for this application.
We assess whether the models are well calibrated for OOD values in Figure <ref>.
All models perform equally poorly and show no significant statistical difference, which coincides with the large RMSCE and MACE values reported in Table <ref>.
We note that the BNN model consistently has higher variance in both ID and OOD testing, suggesting that it may be less reliable overall.
While these results are somewhat disappointing, it is important to recognize that none of these models have been tuned to appropriately estimate uncertainty for OOD data.
It may be possible to improve results for the GP methods by choosing a better prior, but the BNN and DQR models cannot be improved without further training data.
§ DISCUSSION AND FUTURE WORK
While our results are very promising especially for ID samples, it is important to discuss potential short-comings as well as interesting new ideas generated by our work.
For example, the results from this paper are derived from a simulation that, while highly representative, is ultimately synthetic. The real data does not always map directly to our simulated data, particularly when the SNS configuration is changed. We hope to obtain real labeled data from a single-phase low voltage system similar to the three-phase high voltage SNS HVCM, which we can use to train a model in the future.
We also hope to obtain more labeled data from the real three-phase HVCMs to further validate the accuracy of the models we trained on simulations.
We currently have two sets of data for pulses that were gathered relatively soon after capacitor changes (sufficiently close so as to expect no significant degradation has taken place).
As two labels are insufficient to draw any strong conclusions (especially with many potentially confounding factors), we leave this validation for future work.
Additionally, although our DNGPA models produce accurate predictions and uncertainties, there are opportunities to improve their performance through kernel parameter optimization.
For future work, we would like to further investigate alternative kernel options beyond RBF and whether our learned RBF parameters are similar to those learned through an exact GP regression technique.
Next, our HVCM dataset is amenable to a simple linear dimensionality reduction like the SVD, but many other problems lie in a non-linear space with low rank.
Additionally, Euclidean distance metrics may not capture the true distance of points that lie on a non-linear hyperplane.
We would like to investigate further how these problems could be solved with other approximate isometric maps like Isomap <cit.> or other algorithms as mentioned in <cit.>.
It may also be interesting to investigate how to preserve distances over physics-constrained manifolds.
Lastly, we would like to explore whether RFF layers can improve other large models by reducing the model size.
While this layer is required for our GP models, we found that removing this layer from our non-GP models reduced performance by an order of magnitude.
We believe the difference in performance stems from the RFF layer introducing a significantly larger amount of non-linearity compared to a standard dense layer with a non-linear activation.
§ CONCLUSION
In systems like the HVCMs at ORNL, providing accurate predictions and uncertainty estimations is a requirement in order to avoid system failure, detect anomalous conditions, and provide information for appropriate prevention measures.
By enforcing stronger distance preservation techniques within a DNGPA model, we have shown improvements in the robustness of both in-distribution uncertainty estimations and OOD predictions when compared with current state-of-the-art model architectures for uncertainty quantification.
In addition, we have illustrated the direct impact of the SVD with regards to distance preservation compared to a spectral-normalized dense layer and proved bounds on the SVD norm preservation.
Our new model architecture handles high-dimensional data well by avoiding feature collapse and only requires a single inference step. We believe these ideas and their future extensions are crucial for providing distance preservation in neural networks with dimensionality reduction.
§ ACKNOWLEDGMENTS
The authors acknowledge the help from David Brown in evaluating Operations requirements, and Frank Liu for his assistance on the Machine Learning techniques.
This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The Jefferson Science Associates (JSA) operates the Thomas Jefferson National Accelerator Facility for the U.S. Department of Energy under Contract No. DE-AC05-06OR23177. This research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
§ PROOF OF SVD NORM PRESERVATION
For any given sample (i.e. row of X) x and the SVD X = U Σ V^T, we have x = u Σ V^T where u is the corresponding row of U. Note that this notation for u is somewhat contrary to the standard where u denotes the orthonormal columns of U. Given a split of V = [ V_1 V_2 ] where V_1 = W are the weights used for dimensionality reduction, applying these weights produces:
x W = u Σ V^T W
= u Σ[ I; 0 ]
= [ u_1 σ_1 ⋯ u_k σ_k ]
Similarly, x V_2 = [ u_k+1σ_k+1 ⋯ u_n σ_n ]. Therefore, xV = [ x W 0 ] + [ 0 x V_2 ]. To simplify notation, we denote these zero-extended versions of x W and x V_2 as z_1 and z_2 respectively and note that their norms are equivalent (i.e. ‖z_1‖ = ‖x W‖). Additionally, no value in u is greater than 1, otherwise the singular vectors would not have unit norm. Assuming all values in u are 1 gives us ‖z_2‖ ≤ √(∑_j=k+1^n σ_j^2), which we can combine with the triangle inequality to obtain the following inequality:
‖x‖ = ‖xV‖ = ‖z_1 + z_2‖ ≤ ‖z_1‖ + ‖z_2‖
≤ ‖xW‖ + √(∑_j=k+1^n σ_j^2).
After rearranging terms, we obtain our desired relation:
‖x‖ - √(∑_j=k+1^n σ_j^2) ≤ ‖xW‖. Also, since W is orthonormal, it is clear that ‖xW‖ ≤ ‖x‖.
To strengthen this inequality further, we can assume that each entry in U is 𝒩(0, 1/√(d)), which would provide vectors with an expected unit length. Doing this gives the following equation, which reduces the expected sample norm degradation by a factor that depends on the number of data samples.
E[√(∑_j=k+1^n σ_j^2 u_j^2)] = √(1/√(d)∑_j=k+1^n σ_j^2)
= d^-0.25√(∑_j=k+1^n σ_j^2).
This equation suggests an interesting balance when adding new data to X. Essentially, increasing the number of samples improves our norm preservation, but new data may add variance to previously under-represented directions which would increase the truncated singular values.
|
http://arxiv.org/abs/2307.03387v1
|
20230707045755
|
A Joint Design for Full-duplex OFDM AF Relay System with Precoded Short Guard Interval
|
[
"Pu Yang",
"Xiang-Gen Xia",
"Qingyue Qu",
"Han Wang",
"Yi Liu"
] |
cs.IT
|
[
"cs.IT",
"eess.SP",
"math.IT",
"94-10",
"H.1.1"
] |
A Joint Design for Full-duplex OFDM AF Relay System with Precoded Short Guard Interval
Pu Yang, Xiang-Gen Xia, Qingyue Qu, Han Wang and Yi Liu
The work of P. Yang, Q. Qu, H. Wang and Y. Liu was supported
in part by 111 Project under Grant B08038. (Corresponding author: Yi Liu.)
P. Yang, Q. Qu, H. Wang and Y. Liu are with the State Key Laboratory
of Integrated Service Network, Xidian University, Xi'an 710071, China
(e-mail:[email protected]; [email protected]; [email protected]; [email protected]).
X.-G. Xia is with the Department of Electrical and Computer
Engineering, University of Delaware, Newark, DE 19716, USA (e-mail: [email protected]).
In-band full-duplex relay (FDR) has attracted much attention as an effective solution to improve the coverage and spectral efficiency in wireless communication networks.
The basic problem for FDR transmission is how to
eliminate the inherent self-interference and re-use the residual
self-interference (RSI) at the relay to improve the end-to-end
performance.
Considering the RSI at the FDR, the overall equivalent channel can be modeled as an infinite impulse response (IIR) channel. For this IIR channel, a joint design for precoding, power gain control and equalization of cooperative OFDM relay systems is presented. Compared with the traditional OFDM systems, the length of the guard interval for the proposed design can be distinctly reduced, thereby improving the spectral efficiency. By analyzing the noise sources, this paper evaluates the signal to noise ratio (SNR) of the proposed scheme and presents a power gain control algorithm at the FDR. Compared with the existing schemes, the proposed scheme shows a superior bit error rate (BER) performance.
Full-duplex relay, precoding, infinite impulse response (IIR) channel, power control, OFDM.
§ INTRODUCTION
As the ever-increasing demand for the limited wireless resources,
in-band full-duplex relay (FDR) has gained significant
attention due to its potential for improving spectral efficiency and network coverage. Recent progress achieved in self-interference cancellation (SIC) has made the implementation of full-duplex relay possible. After passive and active SIC,
there is still residual self-interference (RSI) existed at the relay.
An important issue for FDR networks is how to model and reuse the RSI to improve the overall system performance.
The formulation of the RSI is studied in <cit.>.
As for the usage of RSI, paper <cit.> indicates that the RSI at the relay is, in fact, a delayed version of the desired signal, and the FDR can be considered as an infinite impulse response (IIR) filter.
In <cit.>, the source-to-destination IIR channel is approximated by a finite impulse response (FIR) channel by choosing an effective length L of the channel impulse responses wherein most of the energy (e.g. 99%).
As for the cyclic prefix (CP) added transmission format, a block-based transmission with a guard interval (GI) length
larger than L symbols can basically avoid the ISI. For example, with a CP length L=16, the scheme proposed in <cit.>, which uses the traditional frequency domain equalization, has a good performance. However in some applications, the impulse response of the IIR channel reduces slowly, and thus cannot be well approximated by a short FIR channel. Recently in <cit.>, a new OFDM system for an IIR channel is presented. By a special design for the GI, an IIR channel can be converted to ISI free subchannels at the receiver.
In this paper, based on the end-to-end equivalent IIR model for FDR transmission and the newly proposed OFDM system for IIR channels in <cit.>, we present a joint design of the precoding method at the source, power gain control algorithm at the FDR, and a low complexity receiver at the destination for a cooperative OFDM system.
The remainder of this letter is organized as follows. In Section <ref>, the system model is presented. In Section <ref>, we consider the precoding method and equalization method for the IIR channel. In Section <ref>, the analysis of the SNR and a power gain control method are presented. Simulation results are given in Section <ref> and the paper is concluded in Section <ref>.
§ EQUIVALENT IIR CHANNEL MODEL
The system consists of a source S, a destination D and an amplify-and-forward (AF) FDR R. In time slot n,n≥0, S transmits x_n and D receives y_n, while the FDR transmits t_n and receives r_n simultaneously.
We assume the point-to-point link channels are quasi-static Rayleigh fading channels.
h_sr, h_rd and h_sd are the channel coefficients for S-to-R channel, R-to-D channel and S-to-D channel, respectively. After the SIC process, the RSI channel can be modeled as a quasi-static Rayleigh fading channel <cit.> of channel coefficient h_rr.
Assume (n_R)_n∼𝒞𝒩(0,σ^2_R) and (n_D)_n∼𝒞𝒩(0,σ^2_D) as the complex-valued white Gaussian noise with mean 0 and variances σ^2_R and σ^2_D at relay R and destination D at time n, respectively.
The information symbols at the source
𝐗=[X_0, X_1, ..., X_N-1]^T,
are assumed to be statistically independent, identically distributed (i.i.d.) random variables.
After the N-point IFFT, the source transmits signals by block 𝐱,
𝐱 = IFFT(𝐗) = [ x_0,x_1, ⋯,x_N - 1]^T.
For large N, the samples x_n of OFDM symbols are asymptotically Gaussian and i.i.d. <cit.>.
The received signal at the FDR at time n is
r_n =h_sr x_n + h_rr t_n +(n_R)_n.
Suppose the amplification factor of the relay is β. Then, the power gain is β^2. Following <cit.> and <cit.>, assume there is one symbol processing delay for the relay to forward its received symbols. The signal transmitted from R is
t_n=β r_n-1.
From (<ref>) and (<ref>), we obtain
t_n =βh_srx_n - 1 + β(n_R)_n - 1 +βh_rrt_n - 1
=βh_sr∑_j = 1^∞( βh_rr)^j - 1x_n - j + β∑_j = 1^∞( βh_rr)^j - 1(n_R)_n - j.
Without the direct link, the received signal at D at time n is
y_n= h_rdt_n + (n_D)_n
= βh_rdh_sr∑_j = 1^∞( βh_rr)^j - 1x_n - j
+βh_rd∑_j = 1^∞( βh_rr)^j - 1(n_R)_n - j + (n_D)_n.
Take the z-transform on both sides of equation (<ref>):
T(z) = β [h_srz^ - 1X(z) + z^ - 1N_R(z)] +βh_rrz^ - 1T(z).
Then, the system transfer function between S and D is
H(z) = h_rdT(z)/X(z)
= βh_srh_rdz^ - 1/1 - βh_rrz^ - 1
= 1/1/βh_srh_rd - h_rr/h_srh_rdz^ - 1z^ - 1, | βh_rr| < | z | < ∞.
Due to the fact that the overall time delay z^ - 1 does not change the system performance, we use the transfer function H_1(z) to describe the equivalent channel without direct link:
H_1(z) = 1/1/βh_srh_rd - h_rr/h_srh_rdz^ - 1
= 1/A(z) = 1/∑_k = 0^1 a_kz^ - k ,
a_0 = 1/βh_srh_rd, a_1= - h_rr/h_srh_rd.
One can see that H_1(z) is a single pole IIR channel. To ensure the stability of the system, the pole of H_1(z), βh_rr, should satisfy
| βh_rr|<1.
When |βh_rr| is close to 1, the impulse response
of the equivalent IIR channel reduces slowly and thus cannot be well approximated by a short FIR channel.
If the direct link is considered, the equivalent channel has the following mixed first-order IIR channel transfer function H_2(z):
H_2(z) = h_sd + βh_srh_rdz^ - 1/1 - βh_rrz^ - 1
= h_sd + (βh_srh_rd - βh_rrh_sd)z^ - 1/1 - βh_rrz^ - 1
= B(z)/A(z)
= ∑_k = 0^1 b_kz^ - k/∑_k = 0^1 a_kz^ - k, | βh_rr| < | z | < ∞ .
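To make the behaviour of this equivalent channel concrete, the short NumPy sketch below expands H_2(z) into its impulse response (setting h_sd = 0 recovers the pure IIR channel H_1(z) up to the overall delay) and counts how many taps are needed to capture 99% of the energy; the channel values used here are hypothetical examples, not measured coefficients.

import numpy as np

def fdr_impulse_response(h_sr, h_rd, h_rr, h_sd, beta, n_taps=64):
    # h_0 = h_sd and h_n = beta*h_sr*h_rd*(beta*h_rr)^(n-1) for n >= 1.
    h = np.zeros(n_taps, dtype=complex)
    h[0] = h_sd
    for n in range(1, n_taps):
        h[n] = beta * h_sr * h_rd * (beta * h_rr) ** (n - 1)
    return h

# Example: a pole |beta*h_rr| close to 1 gives a slowly decaying impulse response.
h = fdr_impulse_response(h_sr=1.0, h_rd=1.0, h_rr=0.9, h_sd=0.0, beta=1.0)
energy = np.cumsum(np.abs(h) ** 2) / np.sum(np.abs(h) ** 2)
print("taps needed for 99% of the energy:", int(np.argmax(energy >= 0.99)) + 1)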
§ PRECODING AND EQUALIZATION METHOD FOR EQUIVALENT IIR CHANNEL WITH SHORT GI
Following the recently proposed OFDM system for IIR channels in <cit.>, a precoding method and a corresponding frequency domain equalization method for the above equivalent IIR channel are presented in this section. Due to the special structure of the channels in (<ref>) and (<ref>), i.e., order 1 IIR channels, the designs have simple and closed forms in time domain. Furthermore, the noise terms can be analyzed well as we shall see in the next section.
The main goal of the precoding is to obtain a sequence of standard CP structure after the equivalent IIR channel. Then, the transmitted signal from the source node can be solved by frequency domain equalization without ISI.
Following <cit.>, the length of the GI should be the same or larger than the orders of
polynomials A(z) and B(z). For the cases of H_1(z) and H_2(z) in (<ref>) and (<ref>), respectively, we can set the length of the GI as L=1. We use 𝐱 as the whole transmitted sequence with GI insertion, and 𝐱^i as the ith block without GI. The transmitted sequence at the source node is
𝐱 = [...; x̅_N - 1^i - 1, x_0^i - 1, ... , x_N - 1^i - 1; x̅_N - 1^i, x_0^i, ... , x_N - 1^i; ...]^T,
where x̅_N-1^i is the inserted GI symbol
that will be specially designed later.
The corresponding received sequence 𝐲 at the destination is
𝐲 = [...; y̅_N - 1^i - 1, y_0^i - 1, ... , y_N - 1^i - 1; y̅_N - 1^i, y_0^i, ... , y_N - 1^i; ... ]^T,
where 𝐲^i is the ith received block without GI.
Let
𝐘^i = [Y_k^i]_0≤ k≤N-1= FFT(𝐲^i),
𝐗^i = [X_k^i]_0≤ k≤N-1 = FFT(𝐱^i),
𝐀 = [A_k]_0≤ k≤N-1=FFT(𝐚), 𝐚= [a_0,a_1,0,...,0],
where FFT is the N-point FFT. We first consider the precoding method for the IIR equivalent channel without direct link, H_1(z). The goal is to design the GI x̅_N - 1^i to ensure that the received signals at the destination satisfy the CP structure and in the case of this letter, it is y̅_N - 1^i = y_N - 1^i.
Let X(z) and Y(z) be the z-transforms of transmitted sequence 𝐱 and the corresponding received sequence 𝐲. Then we have
X(z) = 1/H_1(z)Y(z)
= A(z)Y(z)
= a_0Y(z) + a_1z^ - 1Y(z)
↔x_n = a_0y_n + a_1y_n - 1
⇒y_n = x_n - a_1y_n - 1/a_0.
Similar to the conventional OFDM, we have
X_k^i= A_kY_k^i and Y_k^i = X_k^i/A_k , 0 ⩽ k ⩽ N - 1.
Assume A(z) is known at the source. According to (<ref>) and (<ref>), 𝐲^i can be solved for given X_k^i at the source. Finally, the GI at the source can be designed as
x̅_N - 1^i = a_0y_N - 1^i for i = 1, and x̅_N - 1^i = a_0y_N - 1^i + a_1y_N - 1^i - 1 for i > 1,
where the term for i=1 is because the 0th block is all 0 in the initialization. By this precoding method, the received sequence 𝐲 at the destination has a standard CP structure without the consideration of the noise. Thus, similar to the traditional OFDM, the frequency domain equalized signal
X̂^i_k at the kth subcarrier is
X̂_k^i= A_kY_k^i, 0 ⩽ k ⩽ N - 1.
Let
IFFT([X̂_k^i]_0 ⩽ k ⩽ N - 1) = ( 𝐱̂^i)^T = [ x̂^i_0,x̂^i_1, ⋯ ,x̂^i_N - 1)]^T,
where IFFT is the N-point IFFT.
Then, the equalized signals in time domain can be expressed as
x̂_0^i = a_0y_0^i + a_1y_N - 1^i, and x̂_n^i = a_0y_n^i + a_1y_n - 1^i for 1 ⩽ n ⩽ N - 1.
Due to the design that y̅_N - 1^i=y_N - 1^i, we can see that (<ref>) recovers the original signal x_n^i in the absence of noise.
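A compact numerical sketch of this precoding and equalization procedure for the pure IIR channel H_1(z) is given below; the channel values (β = 0.8, h_sr = h_rd = 1, h_rr = 0.5) are hypothetical, and the noise is omitted so that the recovery error reflects only the GI design.

import numpy as np

rng = np.random.default_rng(0)
N = 128                                          # subcarriers per block
beta, h_sr, h_rd, h_rr = 0.8, 1.0, 1.0, 0.5      # hypothetical values, |beta*h_rr| < 1
a0 = 1.0 / (beta * h_sr * h_rd)
a1 = -h_rr / (h_sr * h_rd)
Ak = a0 + a1 * np.exp(-2j * np.pi * np.arange(N) / N)   # A_k = FFT of [a0, a1, 0, ..., 0]

def qpsk(num):
    return (rng.choice([-1.0, 1.0], num) + 1j * rng.choice([-1.0, 1.0], num)) / np.sqrt(2)

y_prev_last = 0.0   # y_{N-1}^{i-1}; the 0th block is all zero in the initialization
chan_mem = 0.0      # channel memory y_{n-1}
max_err = 0.0
for i in range(4):  # i = 0 here corresponds to the first block (i = 1 in the text)
    Xk = qpsk(N)
    x_block = np.fft.ifft(Xk)                    # data part of the ith block
    y_block = np.fft.ifft(Xk / Ak)               # target received block, Y_k = X_k / A_k
    gi = a0 * y_block[-1] + (a1 * y_prev_last if i > 0 else 0.0)   # precoded GI symbol
    y_prev_last = y_block[-1]

    # IIR channel H1(z): x_n = a0*y_n + a1*y_{n-1}  =>  y_n = (x_n - a1*y_{n-1}) / a0.
    rx = np.empty(N + 1, dtype=complex)
    for t, xn in enumerate(np.concatenate(([gi], x_block))):
        chan_mem = (xn - a1 * chan_mem) / a0
        rx[t] = chan_mem

    # Frequency-domain equalization on the block without the GI: X_hat_k = A_k * Y_k.
    Xk_hat = Ak * np.fft.fft(rx[1:])
    max_err = max(max_err, np.max(np.abs(Xk_hat - Xk)))

print("max recovery error over the blocks:", max_err)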
As for the equivalent channel with direct link, H_2(z), it is a mixed IIR channel. Following (<ref>), the precoding is the same as the pure IIR channel with polynomial A(z) as above and we then define an intermediate z-domain response C(z) as
Y(z) =H_2(z)X(z)
=B(z)/A(z)X(z)=B(z)C(z),
C(z) =X(z)/A(z).
The corresponding sequence 𝐜 in time domain is
𝐜 = [...; c̅_N - 1^i - 1, c_0^i - 1, ... , c_N - 1^i - 1; c̅_N - 1^i, c_0^i, ... , c_N - 1^i; ... ]^T, where 𝐜^i = [c_0^i, ... , c_N - 1^i]^T is the ith block without GI.
By the precoding for the pure IIR channel above, we can get a sequence 𝐜 with standard CP, i.e., c̅_N - 1^i= c_N - 1^i.
Let
𝐂^i = [ C_k^i]_0≤ k≤N-1 = FFT(𝐜^i),
𝐁= [B_k]_0≤ k≤N-1 = FFT(𝐛), 𝐛= [b_0,b_1,0,...,0].
As 𝐲 is the response of 𝐜 with the FIR channel B(z), C_k^i can be solved by frequency domain equalization without ISI:
C_k^i = Y_k^i/B_k.
Finally, after a two-step frequency domain equalization
X̂_k^i = A_kC_k^i = A_kY_k^i/B_k, 0≤ k≤ N-1,
we can obtain the equalized signal X̂_k^i without ISI.
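The two-step equalization for the mixed channel can be summarized by the following small helper (a sketch only; a0, a1 and b0, b1 are the first-order coefficients of A(z) and B(z) defined above):

import numpy as np

def two_step_equalize(y_block, a0, a1, b0, b1):
    # Step 1: remove the FIR part, C_k = Y_k / B_k; step 2: apply A(z), X_hat_k = A_k * C_k.
    N = len(y_block)
    n = np.arange(N)
    Ak = a0 + a1 * np.exp(-2j * np.pi * n / N)
    Bk = b0 + b1 * np.exp(-2j * np.pi * n / N)
    Yk = np.fft.fft(y_block)
    return Ak * (Yk / Bk)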
§ POWER GAIN CONTROL ALGORITHM AT THE RELAY
By the precoding method and the OFDM approach proposed in Section <ref>, ISI free signals can be obtained at the destination. However, considering that the additive noise at the relay is also amplified as shown in (<ref>) during the transmission, improper power gain at the FDR may cause performance degradation.
In this section, the noise during the FDR transmission and the equalization process are analyzed in detail. An optimal power gain control algorithm based on maximum SNR is presented.
The additive noise (n_R)_n in (<ref>) at the relay and the additive noise (n_D)_n in (<ref>) at the destination are the main noise sources during the transmission.
The additive noise at the destination after the frequency domain equalization is filtered by 1/H_1^(z) and its mean power can be expressed as
P_D = 1/2π∫_ - π^π| 1/H_1^(e^jω)| ^2σ _D^2dω
= 1 + | βh_rr|^2/β ^2| h_sr|^2| h_rd|^2σ _D^2.
Next, the noise part caused by the additive noise at the relay is studied.
Let (n_R)_n^i denote the additive noise received by the relay in the ith block at the nth time slot without the GI similar to x_n^i. (n̅_R)_N - 1^i denotes the additive noise at the relay at the GI position of the ith block. The z-domain IIR channel of the relay to the destination is denoted as H_rd(z) that is
H_rd(z) = βh_rd/1 - βh_rrz^ - 1= H_1(z)/h_sr, | βh_rr| <| z | < ∞ .
Let (n_R_y)_n^i represent the received noise at the destination generated by (n_R)_n^i from the relay node through the channel H_rd(z).
Considering the channel in (<ref>), the following holds
(n_R)_0^i =h_sr(a_0(n_R_y)_0^i + a_1(n̅_R_y)_N - 1^i),
(n_R)_n^i = h_sr(a_0(n_R_y)_n^i + a_1(n_R_y)_n - 1^i), 1 ⩽n ⩽N - 1,
where the term (n̅_R_y)_N - 1^i is the noise carried over from the relay and received by the destination at the ith GI position and will be specified later.
Note that
(n̅_R_y)_N - 1^i ≠ (n_R_y)_N - 1^i.
This is because the GI design in (<ref>) only uses the transmitted signal x_n but not the noise that is unknown.
After the equalization, the noise caused by (n_R)_n^i from the relay becomes (n̂_R)_n^i. The frequency domain equalization at the destination performs a circular convolution on the received noise, (n_R_y)_n^i. Similar to (<ref>), the noise after the equalization can be expressed as
(n̂_R)_0^i = a_0(n_R_y)_0^i + a_1 (n_R_y)_N-1^i,
(n̂_R)_n^i= a_0(n_R_y)_n^i + a_1 (n_R_y)_n-1^i , 1 ⩽n ⩽N - 1.
From (<ref>) and (<ref>), we can see that the noise terms (n̂_R)_1^i,...,(n̂_R)_N-1^i are in linear relation with the additive noises (n_R)_1^i,... ,(n_R)_N-1^i at the relay:
(n̂_R)_n^i = (n_R)_n^i/h_sr, for 1 ⩽ n ⩽ N - 1.
So the mean power of the noise terms from (n̂_R)_1^i to (n̂_R)_N-1^i is
P_R1 = σ _R^2/| h_sr|^2.
For (n̂_R)_0^i in (<ref>), the exact expressions of (n̅_R_y)_N - 1^i, (n_R_y)_N - 1^i and (n_R_y)_0^i are presented below.
From (<ref>), the received noise at the destination caused by the additive noise at the relay is:
(n_R_y)_n = h_rdβ∑_j = 1^∞( βh_rr)^j - 1(n_R)_n - j + 1.
For simplicity, we define that
h_j=h_rdβ(β h_rr)^j-1, j⩾ 1.
From (<ref>), the expressions of (n_R_y)_n^i are
(n̅_R_y)_N - 1^i = ... + h_2(n_R)_N - 1^i - 1 + h_1(n̅_R)_N - 1^i,
(n_R_y)_0^i = ... + h_3(n_R)_N - 1^i - 1 + h_2(n̅_R)_N - 1^i + h_1(n_R)_0^i,
⋯
(n_R_y)_N - 1^i = ... + h_N + 1(n̅_R)_N - 1^i
+ h_N(n_R)_0^i + ... + h_1(n_R)_N - 1^i, where term (n̅_R)_N - 1^i is the additive noise received by the relay at the ith GI position. Since | βh_rr| < 1, and considering
the length of each frame, N, is large, we have |h_j| ≈ 0 for all j>N.
From (<ref>), we can see that (n̅_R_y)_N - 1^i and (n_R_y)_N - 1^i are approximately independent, so (n̂_R)_0^i cannot be expressed in the same form as the other noise terms in (<ref>).
From (<ref>),
the mean power of (n_R_y)_0^i and (n_R_y)_N - 1^i
can be expressed as
P_n = | h_rd|^2β ^2∑_j = 1^∞| βh_rr|^2(j - 1)σ _R^2
= β ^2| h_rd|^2/1 - | βh_rr|^2σ _R^2.
From (<ref>) and considering that |h_j| ≈ 0 for all j>N, (n_R_y)_0^i and (n_R_y)_N - 1^i can be regarded as independent.
Then, from (<ref>), the mean power of (n̂_R)_0^i in(<ref>) is
P_R2 = (| a_0|^2P_n + | a_1|^2P_n)
= 1 + | βh_rr|^2/| h_sr|^2(1 - | βh_rr|^2)σ _R^2.
Thus, the mean power of the noise terms (n̂_R)_0^i ,...,(n̂_R)_N-1^i can be expressed as
P_R = P_R2+(N - 1)P_R1/N
= (N - 1)σ _R^2/N| h_sr|^2 + (1 + | βh_rr|^2)σ _R^2/N| h_sr|^2(1 - | βh_rr|^2).
Next, the power of the useful signals after equalization is calculated. Considering the mean power of transmitted signals should be normalized to 1, the power of the GI, x̅_N - 1^i, should be clarified. From (<ref>), the expression of y_N - 1^i is
y_N - 1^i = h_sr( ... + h_N + 1x̅_N - 1^i + h_Nx_0^i + ... + h_1x_N - 1^i).
Since |h_j| ≈ 0 for all j>N, the impact of x̅_N - 1^i in (<ref>) can be ignored.
The mean power of y_N - 1^i or y_N - 1^i - 1 is:
P_y = | h_sr|^2| h_rd|^2β ^2∑_j = 1^∞| βh_rr|^2(j - 1)σ_x^2
= β ^2| h_sr|^2| h_rd|^2/1 - | βh_rr|^2σ_x^2,
where σ _x^2 is the mean power of the transmitted signals without GI.
From (<ref>), since y_N - 1^i and y_N - 1^i - 1 are determined by X_k^i and X_k^i-1, respectively, they are statistically independent. From (<ref>), the mean power of x̅_N - 1^i is
P_GI = (| a_0|^2P_y + | a_1|^2P_y)
= 1 + | βh_rr|^2/1 - | βh_rr|^2σ _x^2.
Define α as
α≜ |h_rrβ|^2∈(0,1).
The mean power of the transmitted signals is
P_av = P_GI + Nσ _x^2/N + 1
= 1 + α/1 - α + N/N + 1σ _x^2.
If the mean power of the transmitted signals is
normalized to 1, then the mean power of the transmitted signals without GI is
σ _x^2 = N + 1/1 + α/1 - α + N.
Let
P_R1 = σ _R^2/| h_sr|^2, η = | h_rr|^2/| h_sr|^2| h_rd|^2σ _D^2.
Finally, the SNR after equalization is
γ = σ _x^2/P_R + P_D
= N + 1/1 + α/1 - α + N/(N - 1)σ _R^2/N| h_sr|^2 + (1 + α )σ _R^2/N| h_sr|^2(1 - α ) + | h_rr|^2(1 + α )/α| h_sr|^2| h_rd|^2σ _D^2
= N(N + 1)(α - 1)^2α/((α - 1)N - α - 1)(η (α ^2 - 1) N+ P_R1α ((α - 1)N - 2α )).
Since β^2 is in proportional with α, the optimization of the power gain at the relay can be written as
α_opt=arg max_α(γ).
Although in the above we only considered the case when there is no direct link between S and D, from our simulations in the next section, we find that the above power gain control algorithm is still valid when the direct link is not too strong. The general direct-link case is left for our future study.
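Since γ in (<ref>) depends on the single scalar α ∈ (0,1), the optimization in (<ref>) can be carried out by a one-dimensional search; the sketch below does this with a simple grid search using hypothetical channel and noise values (unit link gains, -15dB RSI, σ_R^2 = σ_D^2 = 0.1).

import numpy as np

def gamma_eq(alpha, N, P_R1, eta):
    # Closed-form SNR after equalization, gamma(alpha), from the analysis above.
    num = N * (N + 1) * (alpha - 1) ** 2 * alpha
    den = ((alpha - 1) * N - alpha - 1) * (
        eta * (alpha ** 2 - 1) * N + P_R1 * alpha * ((alpha - 1) * N - 2 * alpha))
    return num / den

N, sigma_R2, sigma_D2 = 128, 0.1, 0.1
h_sr, h_rd = 1.0, 1.0
h_rr = np.sqrt(10 ** (-15 / 10))              # -15 dB residual self-interference
P_R1 = sigma_R2 / abs(h_sr) ** 2
eta = abs(h_rr) ** 2 * sigma_D2 / (abs(h_sr) ** 2 * abs(h_rd) ** 2)

alphas = np.linspace(1e-3, 1 - 1e-3, 2000)    # alpha = |beta*h_rr|^2 in (0, 1)
alpha_opt = alphas[np.argmax(gamma_eq(alphas, N, P_R1, eta))]
beta2_opt = alpha_opt / abs(h_rr) ** 2        # corresponding power gain beta^2
print(alpha_opt, beta2_opt)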
Next, we compare the proposed scheme with a straightforward pre-filtering method.
For the pre-filtering method,
let 𝐬 represent the OFDM sequence with a standard CP structure generated at the source:
𝐬 = [...; s_N - 1^i - 1,s_0^i - 1, ... ,s_N - 1^i - 1; s_N - 1^i,s_0^i, ... ,s_N - 1^i; ...]^T.
OFDM with a one-symbol CP is used here because, when there is a direct link, the FIR part B(z) appears as shown before. In this case, the conventional OFDM is used. Let σ_s^2 denote the mean power of the signal s_n^i.
Let s̃_n denote the pre-filtered signal by the FIR filter A(z), and
s̃_n = a_0s_n + a_1s_n - 1.
After the IIR channel H_1(z), s̃_n is converted to the original signal s_n.
The mean power of the transmitted signal s̃_n is
P_f = ( | a_0|^2 + | a_1|^2)σ _s^2
= 1 + | βh_rr|^2/β ^2| h_rd|^2| h_sr|^2σ _s^2.
If the transmit power
is normalized to 1, we have
σ _s^2=β ^2| h_sr|^2| h_rd|^2/1 + β ^2| h_rr|^2.
The power of the received noise at the destination generated by the additive noise from the relay, P_R3, can be calculated by (<ref>). Then, the SNR of the pre-filtering method at the receiver is
γ _pre = σ_s^2/P_R3 + σ _D^2
= β ^2| h_sr|^2| h_rd|^2/1 + β ^2| h_rr|^2/β ^2| h_rd|^2σ _R^2/1 - | βh_rr|^2 + σ _D^2.
Define the difference between the SNR for our proposed method, γ in (<ref>), and the SNR for the straightforward method, γ _pre in (<ref>), as Δ :
Δ ≜γ - γ _pre
= 2(1 - α )α ^2 δ(α) / [( α + 1)( (α - 1)N - α - 1)( η (1 - α ) + P_R1α)( η (α ^2 - 1)N + P_R1α (α - 1)N - 2P_R1α ^2)],
where δ(α) ≜ P_R1α (1 - α )N^2 + (P_R1(α ^2 - α ) + η (α ^2 - 1))N - P_R1(α ^2 - α ) denotes the numerator polynomial.
When 0 <α < 1, the
denominator of Δ is positive. As shown in (<ref>), the quadratic function δ(α) defined in (<ref>) has the same sign as Δ.
One can see that since 0<α<1, the coefficient of N^2 in δ(α) is positive. So, when N is large, δ(α)>0, i.e., the SNR γ after the equalization of our proposed precoding method is better than the SNR γ_pre of the straightforward pre-filtering method. From the simulation results in the next section, γ>γ_pre when N=128.
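The sign of Δ for a given parameter set can also be checked numerically; the following sketch evaluates γ and γ_pre side by side for a few values of α, again with hypothetical unit link gains and equal noise powers.

import numpy as np

def gamma_eq(alpha, N, P_R1, eta):
    num = N * (N + 1) * (alpha - 1) ** 2 * alpha
    den = ((alpha - 1) * N - alpha - 1) * (
        eta * (alpha ** 2 - 1) * N + P_R1 * alpha * ((alpha - 1) * N - 2 * alpha))
    return num / den

def gamma_pre(alpha, h_sr, h_rd, h_rr, sigma_R2, sigma_D2):
    beta2 = alpha / abs(h_rr) ** 2
    signal = beta2 * abs(h_sr) ** 2 * abs(h_rd) ** 2 / (1 + alpha)
    noise = beta2 * abs(h_rd) ** 2 * sigma_R2 / (1 - alpha) + sigma_D2
    return signal / noise

N, sigma_R2, sigma_D2 = 128, 0.1, 0.1
h_sr, h_rd, h_rr = 1.0, 1.0, np.sqrt(10 ** (-15 / 10))
P_R1 = sigma_R2 / abs(h_sr) ** 2
eta = abs(h_rr) ** 2 * sigma_D2 / (abs(h_sr) ** 2 * abs(h_rd) ** 2)
for alpha in (0.2, 0.5, 0.8):
    print(alpha, gamma_eq(alpha, N, P_R1, eta),
          gamma_pre(alpha, h_sr, h_rd, h_rr, sigma_R2, sigma_D2))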
§ SIMULATION RESULTS
Suppose that the transmitted signal power at the source and the channel gains
of h_sr and h_rd are normalized to 1. The noise powers at R and D are assumed the same, i.e., σ_R^2=σ_D^2. The channel SNRs at R and D are defined as
SNR_c = 1/σ _R^2 = 1/σ _D^2.
The number of sub-carriers is N=128. The length of CP or GI for the following schemes is set as 1.
The SIC ability of the relay is set to 15dB, so the power of the RSI is -15dB. The constellation for the FD schemes at the source is QPSK. We compare the proposed scheme with four FD schemes: the Wichman scheme <cit.>, the SC-FDE scheme <cit.>, the traditional OFDM frequency-domain equalization scheme with a standard CP structure <cit.>, and the straightforward pre-filtering scheme mentioned in Section <ref>. The scheme in <cit.> treats the RSI as noise.
The traditional OFDM scheme and the SC-FDE
scheme can only deal with the responses within the CP while the responses beyond CP-length are regarded as
interference. Notice that the power gains for above-mentioned schemes are all optimized by their own algorithms.
A half-duplex (HD) relay scheme with frequency division duplex (FDD) is also compared. Considering the fairness of spectral efficiency, the modulation mode of the FDD scheme is 16QAM.
First we present a simulation with fixed channel coefficients to show the SNR (γ and γ_pre) performance with various power gains when SNR_c=10dB, which is shown in Fig. <ref>. The theoretical curves of the proposed scheme and the straightforward pre-filtering scheme are plotted based on (<ref>) and (<ref>), respectively. After simulating an OFDM transmission process, the simulation results are obtained by separately calculating the noise power and the power of useful signals. The theoretical results are concordant with the simulated results. We can see the proposed scheme outperforms the
pre-filtering scheme for all power gain factors. The red star at the top of the solid blue line marks the power gain factor calculated by the proposed power gain control algorithm.
This simulation result verifies the necessity and effectiveness of the power gain control algorithm.
From the BER results without the direct link shown in Fig. <ref>, consistent with the SNR analysis in Section <ref>, the proposed scheme has better BER performance compared with the straightforward pre-filtering scheme and the other schemes.
In Fig. <ref>, considering the direct link, the path loss of h_sd is set as 10dB. The results in Fig. <ref> indicate that the power gain control algorithm in Section <ref> still works and the proposed scheme outperforms the other schemes, when the direct link is not too strong.
Because the direct link is not taken into consideration in <cit.>,
there exists an error floor for the SC-FDE scheme when SNR_c>20dB.
As for the proposed scheme, the continued improvement of the BER performance as SNR_c increases indicates that the direct link is treated as a cooperative signal rather than interference.
Lastly, without the direct link, we investigate the BER performance with the RSI power when SNR_c=25dB in Fig. <ref>.
Compared with the other FD schemes, the proposed system shows better performance under severe RSI and thus the strict performance requirement of the SIC at relay may be relaxed.
§ CONCLUSION
In this paper, based on an equivalent IIR model for the FDR, following the newly proposed OFDM systems for IIR channels in <cit.>, we have presented a joint system design including precoding, relay power gain control and equalization for OFDM systems.
The simulation results show that the proposed scheme can achieve better BER performance compared with the existing schemes.
As a remark, in this letter we have only considered the case when all the point-to-point channels are flat fading, i.e., single path, for convenience. The idea proposed in this letter may be applied to broadband multipath channels, which is under our current investigation with more detailed analysis.
|
http://arxiv.org/abs/2307.01525v1
|
20230704072223
|
OTFS-based Robust MMSE Precoding Design in Over-the-air Computation
|
[
"Dongkai Zhou",
"Jing Guo",
"Siqiang Wang",
"Zhong Zheng",
"Zesong Fei",
"Weijie Yuan",
"Xinyi Wang"
] |
cs.IT
|
[
"cs.IT",
"eess.SP",
"math.IT"
] |
OTFS-based Robust MMSE Precoding Design in Over-the-air Computation
Dongkai Zhou, Jing Guo, Member, IEEE, Siqiang Wang, Zhong Zheng, Member, IEEE, Zesong Fei, Senior Member, IEEE, Weijie Yuan, Member, IEEE, and Xinyi Wang, Member, IEEE
Dongkai Zhou, Jing Guo, Siqiang Wang, Zhong Zheng and Zesong Fei, and Xinyi Wang are with the School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China (e-mail: {3120220778, jingguo, 3120205406, zhong.zheng, feizesong, wangxinyi}@bit.edu.cn). Weijie Yuan is with the Department of EEE, Southern University of Science and Technology, Shenzhen 518055, China (e-mail: [email protected]).
Over-the-air computation (AirComp), as a data aggregation method that can improve network efficiency by exploiting the superposition characteristics of wireless channels, has received much attention recently. Meanwhile, the orthogonal time frequency space (OTFS) modulation can provide a strong Doppler resilience and facilitates reliable transmission for high-mobility communications. Hence, in this work, we investigate an OTFS-based AirComp system in the presence of time-frequency dual-selective channels. In particular, we commence from the development of a novel transmission framework for the considered system, where the pilot signal is sent together with data and the channel estimation is implemented according to the echo from the access point to the sensor, thereby reducing the overhead of channel state information (CSI) feedback. Hereafter, based on the CSI estimated from the previous frame, a robust precoding matrix aiming at minimizing mean square error in the current frame is designed, which takes into account the estimation error from the receiver noise and the outdated CSI. The simulation results demonstrate the effectiveness of the proposed robust precoding scheme by comparing it with the non-robust precoding. The performance gain is more obvious in high signal-to-noise ratio in case of large channel estimation errors.
Over-the-air computation, orthogonal time frequency space, imperfect channel state information, robust precoding.
§ INTRODUCTION
The Internet of Everything is an important application scenario in the future 6G communication systems, which generally requires huge spectrum resources <cit.>. Over-the-air computation (AirComp) is regarded as a promising solution to the problem of limited spectrum resources <cit.>. The AirComp technology allows the concurrent transmission of multiple nodes. Rather than treating the signals from other nodes as noise, it leverages the signal superposition property of co-channels to compute a class of nomographic functions, e.g., weighted sum and arithmetic mean, of the distributed sensing data, whereby improving the efficiency of the wireless communication system.
The idea of AirComp first came from the study on the computation functions in multiple-access channels in <cit.>. Later on, there is much literature investigating the AirComp system from the perspective of signal alignment <cit.>, power control <cit.>, beamforming design <cit.>, etc., under the assumption of perfect channel state information (CSI). In practice, the channel estimation may not be perfect due to the factors such as noise. Hence, some other works designed the AirComp transmission scheme with the inclusion of channel estimation procedure. Specifically, the authors in <cit.> proposed a two-stage architecture, where in the first stage the fusion center obtained the sum channel gain according to the reference signal from sensors and the estimated CSI was used in the second stage for data transmission. Based on the same architecture, in <cit.>, the sensors utilized pilot signals broadcast by the fusion center to obtain local CSI and a low overhead CSI feedback algorithm was designed. In <cit.>, the impact of imperfect CSI on the computation accuracy of AirComp was studied, and a transceiver design on the basis of the statistical error of channel estimation was developed.
The channel estimation and data transmission in aforementioned works <cit.> happened in different phases, which can incur the signaling overhead. Besides, existing works designed the transmission mechanism based on orthogonal frequency division multiplexing waveform or investigated the AirComp system in a static scenario, where the impact of the multipath effect and Doppler shift were not considered. Note that for the high-mobility scenarios (e.g., the fusion center is vehicular or drone), the channel becomes a time-frequency doubly-selective channel, which makes the schemes in the literature fail to work.
For reliable communications over time-frequency doubly-selective channels, orthogonal time frequency space (OTFS), a recently proposed two-dimensional (2D) multi-carrier modulation technique, is a promising candidate <cit.>. OTFS modulates the information symbols in the Delay-Doppler (DD) domain and each symbol is mapped to the entire time-frequency (TF) domain by 2D transformation, which takes advantage of the full TF diversity <cit.>. Additionally, it converts the complex time-varying channel in the TF domain into a sparse and stable channel in the DD domain <cit.>, which helps to perform better channel estimation and equalization. To the best knowledge of the authors, the application of the OTFS signaling to AirComp system has not been investigated in the literature yet.
Inspired by the above discussions, in this paper, we propose an AirComp system based on OTFS waveform, which contains multiple sensors with dual functions of radar and communications and an unmanned aerial vehicle (UAV) as an access point (AP). More specifically, with the advantages of dual-function sensors, we first come up with a novel transmission scheme together with the frame structure. In this scheme, the estimation of CSI no longer occupies a separate phase. Instead, the sensor uses the echo from the AP to assist the CSI estimation. Such implementations can greatly reduce the signaling overhead and improve the system’s efficiency. The estimated CSI in the current frame is utilized to design a precoding matrix for the next frame to eliminate the effect of the time-frequency doubly-selective channel. Hence, by taking into account the errors in the estimated CSI and the error caused by the outdated CSI, we then propose a robust precoding design relying on the statistical characteristics of errors. Our numerical results demonstrate that our developed robust precoder outperforms the non-robust precoder, especially in a high signal-to-noise ratio (SNR) scenario, which indicates the importance of the inclusion of imperfect CSI.
§ SYSTEM MODEL
§.§ Network Model
Let us consider a data aggregation scenario for a wireless sensor network, which composes of Q sensors and a UAV acting as the AP. Both the sensors and the UAV are assumed to be equipped with a single antenna. The sensors residing in a certain region sense the environment information and transmit it to the UAV, while the UAV hovering in this region aggregates and processes the sensing data, e.g., arithmetic mean. Since the computation capability of the UAV is relatively weak, the UAV is assumed to implement data aggregation via the AirComp technology, thereby avoiding the complicated signal processing process at the UAV. Moreover, similar to <cit.>, symbol-level synchronization is assumed. In this work, to eliminate the influence of high mobility channels, the transmission between each sensor and the UAV is carried out in the delay-Doppler domain, i.e., the OTFS waveform is exploited.
§.§ Proposed OTFS-based Transmission Framework
Different from the previous work where the channel estimation at the sensor relies on the signal transmitted from the AP, to avoid the signaling overhead and the transmission of UAV, we develop a simplified OTFS-based transmission framework for our considered system setup which merges the channel estimation together with the AirComp. As illustrated in Fig. <ref>, the proposed framework contains two procedures during each frame, i.e.,
* At the first stage, each sensor performs OTFS modulation including precoding and transmits the signal to the AP.
* At the second stage, each sensor estimates the CSI based on the echo from the AP, and the recovered channel is then used to design a precoding matrix for the next frame.
The frame structure for the proposed framework is depicted in Fig. 2. Therein, the data symbols of the sensors are all arranged in the same position. The pilot of each sensor is placed in different positions on the resource grid, which can eliminate the interference coming from other sensors during channel estimation[In this work, we assume that the number of sensors is not very large such that the position of the pilot for each sensor is orthogonal. The consideration of non-orthogonal placement is left for our future work. ].
The detailed signal models are described below. Let 𝐗_q ∈ℂ^M × N (q=1,2,..., Q) denote the data sent by the q-th sensor. Via vectorization, the transmit data in vector form can be expressed as 𝐱_q=vec (𝐗_q) ∈ℂ^MN × 1. By applying a precoder 𝐅_q ∈ℂ^MN × MN in the DD domain,
the transmit signal 𝐝_q ∈ℂ^MN × 1 of the q-th sensor can be expressed as
𝐝_q = 𝐅_q 𝐱_q.
After the process of OTFS modulation <cit.>, 𝐝_q is converted into the transmit signal in the time domain, denoted as 𝐬_q ∈ℂ^MN × 1, which can be obtained by
𝐬_q = ( 𝐖_N^H⊗𝐈_M) 𝐝_q,
where 𝐖_N^H is the inverse discrete Fourier transform matrix of order N and 𝐈_M denotes the identity matrix of size M × M.
The time domain channel matrix between the q-th sensor and AP is defined as 𝐇^TD_q∈ℂ^MN × MN. According to <cit.>, 𝐇^TD_q can be expressed as
𝐇^TD_q =∑_p=1^P h_p,qΠ^l_p,qΔ^k_p,q,
where P is the number of resolvable paths between the sensor and AP. h_p,q∼𝒞𝒩(0,1/P) is the channel gain of the p-th path; l_p,q and k_p,q denote the delay taps and Doppler taps at the p-th path, respectively.
Π is the permutation matrix characterizing the delay influence, expressed as Π=circ{ [0,1,...,0]^T_MN × 1}, and
Δ is a diagonal matrix characterizing the Doppler influence, which is defined as Δ=diag{[ e^j2π/MN× 0,e^j2π/MN× 1,...,e^j2π/MN×( MN-1 )]^T}.
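For illustration, the time-domain channel matrix in (<ref>) can be assembled directly from Π and Δ as in the following NumPy sketch (a small toy grid and hypothetical taps are used; in the actual system M, N and the taps come from the OTFS configuration and the channel):

import numpy as np

def time_domain_channel(M, N, gains, delay_taps, doppler_taps):
    # H^TD = sum_p h_p * Pi^{l_p} * Delta^{k_p}, with Pi the cyclic-shift (permutation)
    # matrix and Delta the diagonal Doppler phase matrix defined above.
    MN = M * N
    Pi = np.roll(np.eye(MN), 1, axis=0)
    Delta = np.diag(np.exp(2j * np.pi * np.arange(MN) / MN))
    H = np.zeros((MN, MN), dtype=complex)
    for h, l, k in zip(gains, delay_taps, doppler_taps):
        H += h * np.linalg.matrix_power(Pi, l) @ np.linalg.matrix_power(Delta, k)
    return H

# Toy example with P = 3 paths and hypothetical taps.
M, N, P = 8, 4, 3
rng = np.random.default_rng(0)
gains = (rng.normal(size=P) + 1j * rng.normal(size=P)) / np.sqrt(2 * P)
H_td = time_domain_channel(M, N, gains, delay_taps=[0, 1, 2], doppler_taps=[0, 1, -1])
print(H_td.shape)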
Due to the wave-addition of the multi-access channel <cit.>, the received signal 𝐲∈ℂ^MN × 1 at the AP can represented as
𝐲 = ∑_q=1^Q(γ_q 𝐇^TD_q𝐬_q+𝐧_q )/γ_q
= ∑_q=1^Q(𝐇^TD_q( 𝐖_N^H⊗𝐈_M)𝐅_q 𝐱_q+𝐧_q/γ_q)
= ∑_q=1^Q( 𝐇_q𝐅_q 𝐱_q+𝐧_q/γ_q),
where 𝐇_q∈ℂ^MN × MN is the equivalent channel matrix of the q-th sensor and 𝐧_q is the additive white Gaussian noise (AWGN), satisfying 𝔼[𝐧_q 𝐧_q^H]=σ_n^2𝐈. γ_q is the power normalization factor that keeps the sensor transmit power constant, which is denoted as γ_q=√(P_t/trace(𝐅_q𝐅^H_q)). P_t is the total power of the transmit data symbol that is assumed to be the same for all sensors.
§ ROBUST PRECODING DESIGN FOR OTFS
In this work, taking the sum function of all sensor data as the computation target, we aim to design a robust precoder based on the estimated CSI for our proposed transmission framework. The channel estimation, the corresponding estimation error modeling, and the precoding design are presented in the following.
§.§ Channel Estimation and Error Modeling
As described in Section II, each sensor estimates the CSI according to the echo from the AP. During this stage, the pilot-based estimation method in <cit.> is adopted to obtain the estimated value of round-trip delay tap l̃_p, Doppler tap k̃_p and channel gain h̃_p (p=1,2,...,P), where the subscript q of
the sensor is omitted for ease of illustration. We assume that the values of the round-trip delay taps and Doppler taps are twice as large as those of the one-way communication channel from the sensor to the AP, and that the channel gains of the two links are the same, that is, l̃_p = 2l_p, k̃_p = 2k_p, and h̃_p = h_p <cit.>. Thus, the estimated values can be obtained as l̂_p=l̃_p/2, k̂_p=k̃_p/2, and ĥ_p=h̃_p, respectively.
Due to random factors such as receiver noise, the channel estimation may not be perfect. In this work, we mainly focus on the estimation error occurring in the channel gain. In terms of the delay taps and Doppler taps, under a proper pilot setup, the estimation can be regarded as accurate <cit.>.
Note that, in our result section, we will show the robustness of our precoding design by including the impacts of the estimation error of delay taps and Doppler taps.
The channel estimation error for the channel gain comes from two factors, i.e., the estimation error due to receiver noise and the outdated estimation error. The latter part comes from the inherent mechanism of our proposed transmission framework. Under our framework, the precoding matrix design in the current frame is based on the estimated CSI from the previous frame. The channel gains in two time frames are not exactly the same but highly correlated, which consequently causes the problem of outdated CSI. The error modeling for these two factors is demonstrated below.
For the estimation error due to receiver noise, according to <cit.>, the channel gain estimation result at the (t-1)-th frame via the pilot-based method is given by
ĥ_p,t-1 = h_p,t-1θ_p x_o +w_p/x_oθ_p = h_p,t-1+w_p/x_o θ_p,
where x_o is the pilot symbol, θ_p is a phase term associated with the pilot position and w_p∼𝒞𝒩(0,σ^2_w) is the complex Gaussian noise at the sensor. The subscript t is added to distinguish different frames. From (<ref>), ĥ_p,t-1 is also a complex Gaussian variable, satisfying ĥ_p,t-1∼𝒞𝒩(0,1/P+σ^2_w/x^2_o).
As for the outdated CSI, akin to <cit.>, the relationship between h_p,t and h_p,t-1 is characterized by
h_p,t = ρ h_p,t-1 + √(1-ρ^2)z_p,
where ρ∈(0,1) is the correlation coefficient, and z_p ∼𝒞𝒩(0,1/P) is a complex Gaussian noise. Assuming that the first term of the correlation coefficient in (<ref>) can be compensated, the exact CSI at the t-th frame h_p,t is related to the estimated CSI at the (t-1)-th frame ĥ_p,t-1 by
h_p,t = ρĥ_p,t-1 + e_p,
where e_p∼𝒞𝒩(0,σ^2_e) and σ^2_e = ρ^2 σ^2_w/x^2_o + (1-ρ^2)/P.
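A quick Monte Carlo check of this error model is sketched below: it draws the channel gain, its noisy pilot-based estimate (with the phase term θ_p set to 1), and the Gauss-Markov evolution to the next frame, and compares the empirical variance of e_p with the closed-form σ_e^2; all numerical values are placeholders.

import numpy as np

rng = np.random.default_rng(0)
P, rho, x_o, sigma_w = 3, 0.95, 1.0, 0.1
trials = 200_000

# Channel gain at frame t-1 and its pilot-based estimate.
h_prev = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) * np.sqrt(1 / (2 * P))
w = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) * np.sqrt(sigma_w ** 2 / 2)
h_hat_prev = h_prev + w / x_o

# Gauss-Markov evolution to frame t.
z = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) * np.sqrt(1 / (2 * P))
h_t = rho * h_prev + np.sqrt(1 - rho ** 2) * z

# Residual error e_p = h_t - rho * h_hat_prev versus the closed-form sigma_e^2.
e = h_t - rho * h_hat_prev
sigma_e2_theory = rho ** 2 * sigma_w ** 2 / x_o ** 2 + (1 - rho ** 2) / P
print(np.var(e), sigma_e2_theory)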
Bringing (<ref>) back to (<ref>), we have the exact channel matrix 𝐇_q at the t-th frame related to the estimated channel matrix 𝐇̂_q recovered by ρĥ_p,t-1 at the (t-1)-th frame written as
𝐇_q = 𝐇̂_q + 𝐄_q,
𝐄_q = 𝐄^TD_q( 𝐖_N^H⊗𝐈_M),
𝐄^TD_q =∑_p=1^P e_p,qΠ^l_p,qΔ^k_p,q,
where 𝐄_q is the error in the channel matrix, and 𝐄^TD_q is the error in the time domain channel matrix.
§.§ Robust MMSE Precoder for OTFS
For AirComp, to analyze the accuracy of the computation, the MSE between the true value of the target and the aggregation value is generally adopted as the performance metric, mathematically
MSE =𝔼[| 𝐲-∑_q=1^Q𝐱_q |^2]
=𝔼[∑_q=1^Q| (𝐇_q𝐅_q-𝐈)𝐱_q +𝐧_q/γ_q|^2 ].
In this work, we aim at minimizing the MSE by designing the precoding matrix 𝐅_q(q=1,...,Q). Since each sensor is independent of the others, we can separate the joint optimization problem into Q independent problems, and the precoding matrix of each sensor shares the same closed-form solution. Furthermore, under the consideration of imperfect CSI, the channel matrix 𝐇_q in (<ref>) needs to be replaced by 𝐇̂_q + 𝐄_q. Therefore, for the q-th sensor, the MSE is written as
MSE_q =𝔼 [ | ((𝐇̂_q+𝐄_q) 𝐅_q-𝐈)𝐱_q +𝐧_q/γ_q|^2 ]
=𝔼 [ tr((((𝐇̂_q+𝐄_q) 𝐅_q-𝐈)𝐱_q +𝐧_q/γ_q) (((𝐇̂_q+𝐄_q) 𝐅_q-𝐈)𝐱_q +𝐧_q/γ_q)^H)].
According to (<ref>), the optimization problem for the q-th sensor can be described as
𝐅_qmin MSE_q.
Since the above optimization problem is unconstrained, the closed-form solution of 𝐅_q can be obtained by setting the derivative to zero, as presented in the following proposition.
For our proposed transmission framework of the OTFS-based AirComp system, the robust MMSE precoder of the q-th sensor at the current frame is given by
𝐅_q^* = ( Ĥ^H_qĤ_q + ( σ _n^2 + Pσ _e^2)𝐈)^ - 1Ĥ^H_q.
Under the assumption that data symbols are independently and identically distributed (i.i.d.) with zero mean and normalized variance, and the data and the noise are statistically independent, i.e., 𝔼[𝐱_q𝐱_q^H]=𝐈, 𝔼[𝐧_q𝐧_q^H]=σ^2_n𝐈, and 𝔼[𝐱_q𝐧_q^H]=0, the MSE_q in (<ref>) can be further simplified to (<ref>) as follows
MSE_q =𝔼[ tr( (Ĥ_q𝐅_q+𝐄_q𝐅_q- 𝐈) (𝐅^H_qĤ^H_q+𝐅^H_q𝐄^H_q-𝐈)
+σ^2_n𝐅_q𝐅^H_q)]
=tr(Ĥ_q𝐅_q𝐅_q^HĤ_q^H-Ĥ_q𝐅_q-𝐅_q^HĤ_q^H+𝐈+σ _n^2𝐅_q𝐅_q^H
+𝔼[ Ĥ_q𝐅_q𝐅_q^H𝐄_q^H+𝐄_q𝐅_q𝐅_q^HĤ_q^H+𝐄_q𝐅_q𝐅_q^H𝐄_q^H-𝐄_q𝐅_q-𝐅_q^H𝐄_q^H] ).
Since the mean value of e_p is 0, combined with (<ref>)-(<ref>), we can obtain that the mean value of every element in 𝐄_q is 0. That is to say, 𝔼[𝐄_q] =𝔼[𝐄^H_q]=0. Thus, (<ref>) can be further simplified as
MSE_q = tr (Ĥ_q𝐅_q𝐅_q^HĤ_q^H-Ĥ_q𝐅_q-𝐅_q^HĤ_q^H+𝐈
+σ _n^2𝐅_q𝐅_q^H+𝔼[ 𝐄_q^H𝐄_q]𝐅_q𝐅_q^H).
We then target at obtaining the exact expression of 𝔼[ 𝐄_q^H𝐄_q]. From (<ref>), 𝔼[ 𝐄_q^H𝐄_q] can be written as
𝔼[ 𝐄_q^H𝐄_q] = ( 𝐖_N^H⊗𝐈_M)^H𝔼[𝐄^TD_q^H𝐄^TD_q] 𝐖_N^H⊗𝐈_M.
Since 𝐖_N^H⊗𝐈_M is a deterministic matrix, the next step is to calculate 𝔼[𝐄^TD_q^H𝐄^TD_q]. According to the expression for 𝐄^TD_q in (<ref>), 𝐄^TD_q is sparse and has non-zero values only on the diagonal and a few cyclic shifts of the diagonal. It can be expressed as follows
𝐄^TD_q = [ e_1α_1,1 0 … … … 0
0 e_1α_1,2 ⋱ ⋱ ⋱ ⋮
⋮ 0 ⋱ ⋱ ⋱ e_Pα_P,MN
e_Pα_P,1 ⋱ ⋱ ⋱ ⋱ ⋮
0 e_Pα_P,2 ⋱ ⋱ ⋱ ⋮
⋮ … … … … e_1α_1,MN
],
where α_m,n = e^j2π/MNn(m=1,...,P, n=1,...,MN) is the phase term caused by Doppler. Let r_mn denote the (m,n)-th element in the matrix 𝐄^TD_q^H𝐄^TD_q. Since e_p is i.i.d., the mean value of r_mn can be obtained as
𝔼[ r_mn] = Pσ _e^2 for m = n, and 𝔼[ r_mn] = 0 for m ≠ n.
According to (<ref>), we can obtain 𝔼[𝐄^TD_q^H𝐄^TD_q] as
𝔼[𝐄^TD_q^H𝐄^TD_q] = Pσ _e^2 𝐈_MN.
Bringing (<ref>) back to (<ref>), the term 𝔼[ 𝐄_q^H𝐄_q] can be further simplified into
𝔼[ 𝐄_q^H𝐄_q] = Pσ _e^2 ( 𝐖_N^H⊗𝐈_M)^H𝐖_N^H⊗𝐈_M
= Pσ _e^2 𝐈_MN.
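This identity can also be verified numerically; the sketch below averages 𝐄_q^H𝐄_q over random realizations of e_p on a small toy grid, assuming the unitary (normalized) DFT matrix for W_N and hypothetical delay/Doppler taps.

import numpy as np

rng = np.random.default_rng(0)
M, N, P = 4, 4, 3
MN = M * N
sigma_e2 = 0.05
l_taps, k_taps = [0, 1, 2], [0, 1, -1]

Pi = np.roll(np.eye(MN), 1, axis=0)
Delta = np.diag(np.exp(2j * np.pi * np.arange(MN) / MN))
W_N = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix of order N (assumption)
A = np.kron(W_N.conj().T, np.eye(M))            # W_N^H kron I_M

acc = np.zeros((MN, MN), dtype=complex)
trials = 2000
for _ in range(trials):
    e = (rng.normal(size=P) + 1j * rng.normal(size=P)) * np.sqrt(sigma_e2 / 2)
    E_td = sum(ep * np.linalg.matrix_power(Pi, l) @ np.linalg.matrix_power(Delta, k)
               for ep, l, k in zip(e, l_taps, k_taps))
    E = E_td @ A
    acc += E.conj().T @ E
acc /= trials
print("theory P*sigma_e^2 =", P * sigma_e2)
print("mean diagonal of the Monte Carlo estimate:", np.mean(np.diag(acc)).real)
print("max off-diagonal magnitude:", np.max(np.abs(acc - np.diag(np.diag(acc)))))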
Finally, by substituting (<ref>) into (<ref>), we can obtain the expression for MSE_q in (<ref>) that exploits the statistical properties of the channel estimation error, expressed as
MSE_q = tr (Ĥ_q𝐅_q𝐅_q^HĤ_q^H-Ĥ_q𝐅_q-𝐅_q^HĤ_q^H+𝐈
+(σ _n^2+Pσ _e^2) 𝐅_q𝐅_q^H).
After achieving the simplified exact expression of MSE for the q-th sensor, by taking the first order derivative of MSE_q in (<ref>) with respect to 𝐅_q and setting to zero (i.e., ∂MSE_q/∂𝐅_q = 0), we arrive at the closed-form solution of the robust precoding matrix as shown in Proposition 1.
For the non-robust precoding case (e.g., the precoding design is performed without considering the existence of errors in the estimated channel matrix), under the same derivation procedure, we can obtain the precoding matrix 𝐅_q^nr displayed as
𝐅_q^nr = ( Ĥ^H_qĤ_q + σ _n^2𝐈)^ - 1Ĥ^H_q.
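Both precoders have simple closed forms and can be implemented in a few lines; the sketch below is a direct transcription of the two expressions above, with a random placeholder for the estimated channel matrix.

import numpy as np

def robust_mmse_precoder(H_hat, sigma_n2, sigma_e2, P):
    # F* = (H_hat^H H_hat + (sigma_n^2 + P*sigma_e^2) I)^{-1} H_hat^H  (Proposition 1).
    MN = H_hat.shape[0]
    reg = sigma_n2 + P * sigma_e2
    return np.linalg.solve(H_hat.conj().T @ H_hat + reg * np.eye(MN), H_hat.conj().T)

def nonrobust_mmse_precoder(H_hat, sigma_n2):
    # F_nr = (H_hat^H H_hat + sigma_n^2 I)^{-1} H_hat^H.
    return robust_mmse_precoder(H_hat, sigma_n2, 0.0, 0)

# Toy usage with a random MN x MN estimated channel matrix (placeholder values).
rng = np.random.default_rng(0)
MN, P = 16, 3
H_hat = (rng.normal(size=(MN, MN)) + 1j * rng.normal(size=(MN, MN))) / np.sqrt(2 * MN)
F_rob = robust_mmse_precoder(H_hat, sigma_n2=0.01, sigma_e2=0.05, P=P)
F_nr = nonrobust_mmse_precoder(H_hat, sigma_n2=0.01)
print(F_rob.shape, F_nr.shape)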
§ SIMULATION RESULTS
In this section, we evaluate the performance of the proposed robust precoding scheme for the considered system by simulations. We show the results for normalized MSE (NMSE) of the computation as the performance metric, which is defined as the ratio of the MSE in (<ref>) to the mean square of the true value. Unless otherwise specified, the simulation parameters are set as follows: the number of Doppler bins N=16, the number of delay bins M=32, the number of sensors Q=6, the number of independent paths between each sensor and the AP P=3. The delay taps and Doppler taps of each path are set to a random integer in [ 0,l _max] and [ -k _max,k _max], respectively, where l_max=4 and k _max=2.
Fig. <ref> plots the computation NMSE versus the signal-to-noise ratio (SNR), defined as the ratio of the power of each symbol to the receiver noise power, under different estimation errors. For comparison, the results for the non-robust precoding case and the perfect CSI case are also plotted. Here, the estimation of the delay taps and Doppler taps is assumed to be accurate. From Fig. <ref>, it can be observed that the computation NMSE of the robust precoding is lower than that of the non-robust precoding, and the gap is more significant when the estimation error is large. In addition, Fig. <ref> shows that, as the SNR increases, the computation NMSE under the non-robust precoding design decreases at first and then slightly increases. In other words, a high SNR can deteriorate the computation NMSE under the non-robust precoding design. This can be explained as follows. When the SNR is very small, the receiver noise is very large, especially compared to the estimation error, and it plays the dominant role in determining the computation NMSE. Hence, increasing the SNR reduces the impact of the noise, thereby improving the system performance. However, when the SNR becomes very large, the noise intensity becomes very small compared to the estimation error in the channel matrix calculated from the estimated CSI, which worsens the computation NMSE because this error is ignored. This problem can be solved by the proposed robust precoding scheme, since the compensation identity matrix is added to guarantee that the error in the channel matrix calculated from the estimated CSI does not dominate, thus ensuring that the computation NMSE keeps decreasing as the SNR increases.
Fig. <ref> plots the computation NMSE versus SNR when the estimation of the delay taps and Doppler taps is also imperfect. Therein, a one-grid offset error with 10% probability for the delay taps and Doppler taps is assumed. From Fig. <ref> we can find that in this case, the performance of the non-robust precoding scheme worsens in the high SNR scenarios (e.g., the increasing trend of the non-robust precoder is much more obvious when compared with Fig. 3, since the extra error of CSI is involved). As for the proposed robust precoding scheme, the computation NMSE still decreases as the SNR increases, i.e., the precoder design can still maintain convergence. This indicates the robustness of our developed precoding design to some extent, especially compared to the non-robust precoding scheme.
Fig. <ref> plots the computation NMSE of the robust precoding scheme versus the ratio of data power to pilot power, under different noise levels. From Fig. <ref>, the computation NMSE drops at first and then rises as the power ratio increases. Under the considered system parameters, the optimal power ratio lies around 1-1.2.
This is because when the power ratio is small, the data power is very low, possibly even much smaller than the receiver noise power, which results in a large computation error. As the power ratio rises, the data symbol power increases, thereby improving the computation accuracy. However, a further increase in the power ratio worsens the system performance, since the channel estimation error grows as the pilot power decreases, and even a sufficiently high data transmission power cannot compensate for the error caused by channel estimation. On the whole, the interplay of these two factors leads to this trend.
§ CONCLUSIONS
In this work, we investigated an OTFS-based AirComp system, where a UAV is deployed to collect data from a number of dual-function sensors via AirComp. Based on the considered system, a transmission framework without CSI feedback overhead was developed by exploiting the echo from the UAV for channel estimation at the sensor side. Moreover, the OTFS waveform was adopted to eliminate the effect of the time-frequency dual-selective channel on AirComp.
Then, taking into account the errors from both the noise and the outdated CSI, a robust precoding scheme based on the statistical properties of these errors was designed. Simulation results show that the proposed robust precoding scheme can effectively reduce the computation MSE, especially in the presence of large channel estimation errors. In addition, a suitable power allocation can also improve the computation accuracy. Future work may include power allocation optimization for the estimation and data transmission, and transmission design with all channel estimation errors considered.
b1 F. Guo, F. R. Yu, H. Zhang, X. Li, H. Ji and V. C. M. Leung, “Enabling massive IoT toward 6G: A comprehensive survey," IEEE Internet Things J., vol. 8, no. 15, pp. 11891-11915, Aug. 2021.
b2 G. Zhu and K. Huang, “MIMO over-the-air computation for high-mobility multimodal sensing," IEEE Internet Things J., vol. 6, no. 4, pp. 6089-6103, Aug. 2019.
b3 B. Nazer and M. Gastpar, “Computation over multiple-access channels," IEEE Trans. Inf. Theory, vol. 53, no. 10, pp. 3498-3516, Oct. 2007.
b4 W. Liu, X. Zang, Y. Li, and B. Vucetic, “Over-the-air computation systems: Optimization, analysis and scaling laws,” IEEE Trans. Wireless Commun., vol. 19, no. 8, pp. 5488–5502, 2020.
b5 Z. Wang, Y. Shi, Y. Zhou, H. Zhou, and N. Zhang, “Wireless-powered over-the-air computation in intelligent reflecting surface-aided IoT networks,” IEEE Internet Things J., vol. 8, no. 3, pp. 1585–1598, 2021.
b00 X. Li, F. Liu, Z. Zhou, G. Zhu, S. Wang, K. Huang, and Y. Gong, “Integrated sensing and over-the-air computation: Dual-functional MIMO beamforming design," in Proc. IEEE Int. Conf. 6G Netw., Paris, France, 2022.
b6 A. Farajzadeh, O. Ercetin, and H. Yanikomeroglu, “Mobility-assisted over-the-air computation for backscatter sensor networks,” IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 675–678, 2020.
b7 L. Chen, N. Zhao, Y. Chen, F. R. Yu, and G. Wei, “Over-the-air computation for cooperative wideband spectrum sensing and performance analysis,” IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 10603-10614, Nov. 2018.
b8 Y. Chen, G. Zhu, and J. Xu, “Over-the-air computation with imperfect channel state information,” in Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun., Jul. 2022.
b04 R. Hadani, S. Rakib, M. Tsatsanis, A. Monk, A. J. Goldsmith, A. F. Molisch, and R. Calderbank, “Orthogonal time frequency space modulation," in Proc. 2017 IEEE Wireless Commun. Net. Conf., 2017, pp. 1–6.
b05 Z. Wei, W. Yuan, S. Li, J. Yuan, G. Bharatula, R. Hadani, and L. Hanzo, “Orthogonal time-frequency space modulation: A promising next-generation waveform,” IEEE Wireless Commun., vol. 28, no. 4, pp. 136–144, Aug. 2021.
b06 G. D. Surabhi, R. M. Augustine, and A. Chockalingam, “On the diversity of uncoded OTFS modulation in doubly-dispersive channels,” IEEE Trans. Wireless Commun., vol. 18, no. 6, pp. 3049–3063, June 2019.
b07 S. Wang, J. Guo, X. Wang, W. Yuan and Z. Fei, "Pilot design and optimization for OTFS modulation," IEEE Wireless Commun. Lett., vol. 10, no. 8, pp. 1742-1746, Aug. 2021.
b08 P. Raviteja, Y. Hong, E. Viterbo, and E. Biglieri, “Practical pulse-shaping waveforms for reduced-cyclic-prefix OTFS,” IEEE Trans. Veh. Technol., vol. 68, no. 1, pp. 957–961, Jan. 2019.
bx P. Raviteja, K. T. Phan and Y. Hong, “Embedded pilot-aided channel estimation for OTFS in delay–doppler channels," IEEE Trans. Veh. Technol., vol. 68, no. 5, pp. 4906-4917, May 2019.
bxx S. Li, W. Yuan, C. Liu, Z. Wei, J. Yuan, B. Bai, et al., “A novel ISAC transmission framework based on spatially-spread orthogonal time frequency space modulation," IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1854-1872, Jun. 2022.
bxxx Y. Su, L. Jiang and C. He, “Joint relay selection and power allocation for full-duplex DF co-operative networks with outdated CSI," IEEE Commun. Lett., vol. 20, no. 3, pp. 510-513, Mar. 2016.
|
http://arxiv.org/abs/2307.00210v1
|
20230701032546
|
Projected Tensor Power Method for Hypergraph Community Recovery
|
[
"Jinxin Wang",
"Yuen-Man Pun",
"Xiaolu Wang",
"Peng Wang",
"Anthony Man-Cho So"
] |
math.OC
|
[
"math.OC"
] |
Projected Tensor Power Method for Hypergraph Community Recovery
|
http://arxiv.org/abs/2307.01296v1
|
20230703190302
|
Unruh entropy of Schwarzschild black hole
|
[
"M. Teslyk",
"L. Bravina",
"E. Zabrodin",
"O. Teslyk"
] |
gr-qc
|
[
"gr-qc",
"hep-th"
] |
|
http://arxiv.org/abs/2307.02554v1
|
20230705180012
|
Viscous hydrodynamic evolution of neutron star merger accretion disks: a code comparison
|
[
"Rodrigo Fernández",
"Oliver Just",
"Zewei Xiong",
"Gabriel Martínez-Pinedo"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR",
"gr-qc",
"nucl-th"
] |
[email protected]
Department of Physics, University of Alberta, Edmonton, AB T6G 2E1, Canada
GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt, Germany
Astrophysical Big Bang Laboratory, RIKEN Cluster for Pioneering Research, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt, Germany
GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt, Germany
Institut für Kernphysik (Theoriezentrum), Technische Universität Darmstadt, Schlossgartenstraße 2, D-64289 Darmstadt, Germany
The accretion disk formed after a neutron star merger is an important
contributor to the total ejecta from the merger, and hence to the kilonova and the
r-process yields of each event. Axisymmetric viscous hydrodynamic
simulations of these disks can capture thermal mass ejection due to neutrino absorption and
in the advective phase—after neutrino
cooling has subsided—and are thus likely to provide
a lower-limit to the total disk ejecta
relative to MHD evolution. Here we present a comparison between
two viscous hydrodynamic codes that have been used extensively on
this problem over the past decade: ALCAR and FLASH.
We choose a representative setup with a black hole at the center, and
vary the treatment of viscosity and neutrino transport. We find good
overall agreement (∼ 10% level) in most quantities. The average
outflow velocity is sensitive to the treatment of the nuclear binding energy
of heavy nuclei, showing a larger variation than other quantities. We post-process trajectories
from both codes with the same nuclear network, and explore
the effects of code differences on nucleosynthesis yields, heating rates, and kilonova
light curves. For the latter, we also assess the effect of varying the number of
tracer particles in reconstructing the spatial abundance distribution for kilonova light curve production.
Viscous hydrodynamic evolution of neutron star merger accretion disks: a code comparison
Gabriel Martínez-Pinedo (ORCID: 0000-0002-3825-0131)
August 1, 2023
=========================================================================================
§ INTRODUCTION
Production of chemical elements heavier than iron in the universe via the rapid neutron
capture process (r-process) has thus far been established observationally
for neutron star (NS) mergers through the kilonova associated with GW170817
(e.g., <cit.>).
The accretion disk formed during the merger is a significant or even dominant
contributor to the ejecta—depending on binary parameters—launching outflows
on timescales ranging from a few ms to several seconds after the merger
(e.g., <cit.>).
Multiple processes can lead to mass ejection from the disk: dissipation of
magnetorotational turbulence, nuclear recombination, neutrino absorption,
and magnetic stresses if a large-scale magnetic field is present at disk
formation or generated via dynamo action (e.g., <cit.>).
Neutrino cooling is important in
all disks with initial masses
≳ 10^-3M_⊙ (e.g., <cit.>),
but it subsides on a timescale
of several ∼ 100 ms in disks around black holes (BHs) due to the drop in temperature and density associated with
accretion. The absence of cooling leads to
ejection driven by viscous heating and nuclear recombination <cit.>.
When a NS is present, energy deposition by neutrino absorption can also make a significant contribution
to driving the outflow (e.g., <cit.>).
The magnetic field strength and geometry at disk formation determines
the importance of prompt mass ejection due to magnetic stresses and
the possible emergence of a jet (e.g., <cit.>). These magnetic
properties are currently an active
area of research. For BH central objects, the only ab-initio study thus far
<cit.> indicates that large scale field formation is not ubiquitous, with
the corresponding absence of prompt (∼ ms) mass ejection via magnetic stresses.
Thus, in the case of BH central objects, thermal mass ejection due to the drop
in neutrino cooling is the only
outflow channel established as robust thus far.
Long-term viscous hydrodynamic models of the disk outflow have been carried
out for a decade now, and have led to most of our current understanding
of the disk ejecta
<cit.>.
For BH remnants, these simulations are able to capture thermal
mass ejection to good approximation, as demonstrated by detailed comparison with
GRMHD simulations <cit.>. Viscous hydrodynamic simulations thus provide
a good estimate for the lower limit to the mass ejection from post-merger
accretion disks (assuming that magnetic effects can only enhance it).
Despite awareness of broad agreement between groups carrying out
viscous hydrodynamic simulations of NS merger disks, a quantitative
code comparison has never been done. Experience from the
core-collapse supernova modeling community shows that
code comparisons help estimating the uncertainties of theoretical predictions
and they can provide valuable insight into the physics of the system, by
unfolding sensitivities with respect to specific assumptions and approximations
adopted by individual codes or models
<cit.>.
Here we carry out a quantitative code comparison study between
the viscous hydrodynamic setups of Just and collaborators (based on the
ALCAR code) and Fernández and collaborators (based on the FLASH code).
Both setups have been used extensively over the past decade, and model
viscous angular momentum transport, the BH pseudo-Newtonian
potential, and the equation of state in a similar manner. The implementations differ
primarily in the neutrino transport method employed
(multi-group 2-moment [M1] for ALCAR, gray leakage + absorption for FLASH).
For the comparison, we choose the same initial condition, and vary the treatment of viscosity
as well as the number of neutrino species and neutrino production processes considered.
We study the role of the additional binding energy gained by the formation of heavy nuclei
beyond alpha particles, which has been neglected in some accretion disk models,
and can have a non-negligible impact on the outflow velocity.
We also generate tracer particles and perform post-processing nucleosynthesis calculations
to assess the effects of code differences on r-process abundances, heating rates, and kilonova lightcurves.
For the latter, we also explore how changing the number of particles included
influences the light curves through the spatial distribution of lanthanides
and actinides.
The structure of this paper is the following. Section <ref> describes
the codes used, approximations to the physics made, and the models evolved.
Section <ref> presents our results and analysis, followed by a
Summary and Discussion in Section <ref>. The Appendix presents
the equations used to determine the composition and internal energy assuming a mixture of neutrons,
protons, alpha particles, and a representative heavy nucleus in nuclear statistical equilibrium (NSE).
§ METHODS
§.§ Codes and physics included
§.§.§ ALCAR
The ALCAR code <cit.>, which is based on the magnetohydrodynamics code AENUS <cit.>, evolves the viscous hydrodynamics equations along with conservation equations for the 0th and 1st angular moments of the neutrino intensity (energy- and flux-density, respectively) on an axisymmetric spherical-coordinate mesh using finite-volume, high-order shock-capturing methods. ALCAR offers both a Newtonian and special relativistic framework, as well as various schemes for the time-integration, spatial reconstruction, and Riemann solver. Here we adopt the Newtonian version, a 2nd-order Runge-Kutta integrator, the PPM_5 scheme of <cit.>, and the Harten-Lax-van Leer Riemann solver, respectively. Gravity is treated using the pseudo-Newtonian Artemova-Novikov potential <cit.>. The equation of state assumes a Boltzmann gas of four baryonic species (neutrons, protons, helium, and ^54Mn) in nuclear statistical equilibrium (NSE), a Fermi-gas of electrons and positrons, and a thermal bath of photons. The radial domain of r∈ [10^6 cm, 4× 10^11 cm] is discretized by 576 logarithmically spaced zones, and the polar-angle domain, θ∈ [0,π], is sampled by 160 uniform zones.
The neutrino transport adopts the M1 approximation, meaning that all higher angular moments (e.g. the Eddington tensor) appearing in the moment equations are expressed as local functions of the evolved moments using a closure relation. We adopt the closure by <cit.> (in the same form as in Ref. <cit.>). We discretize energy space of neutrinos using 10 energy bins logarithmically spaced between 0 and 80 MeV and evolve the two-moment system for each energy bin. We take into account velocity-dependent terms up to first order in v/c following previous disk studies (see, e.g., <cit.>). The transport follows the evolution of three neutrino species, ν_e, ν̅_e, and ν_x (with ν_x representing the four heavy-lepton neutrinos), which interact with free nucleons via emission and absorption (only ν_e and ν̅_e) as well as iso-energetic scattering, with rates taken from <cit.> and augmented by weak-magnetism corrections <cit.>. The production of heavy-lepton neutrinos proceeds through e^± annihilation <cit.> and Bremsstrahlung <cit.>, while for the corresponding inverse processes we make use of the approximate detailed-balance treatment of <cit.>. Below densities of 10^8g cm^-3, we turn off all pair-process related source terms in the neutrino moment equations, but, in order to still be able to follow energy- and momentum-deposition in the low-density polar funnels, we apply the corresponding source terms for pair annihilation in the hydro equations (see <cit.> for more details on their computation).
For each simulation, 10^4 equal-mass, passive tracer particles are initially placed in the disk, following the density distribution. The particles that exceed r=10^9 cm are considered part of the outflow and set aside for post-processing (cf. <ref>), with typically ∼ 2000 outflow trajectories per model. All outflow particles remain within the computational domain for the duration of the simulations (t=10 s).
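A minimal sketch of how equal-mass tracers can be placed following the density distribution is given below: cell masses on the axisymmetric (r, θ) grid serve as sampling weights, and particles are jittered within their host cells. This only illustrates the idea; the grid layout and all names are our own assumptions rather than the actual ALCAR initialization.

    import numpy as np

    def place_tracers(rho, r, theta, n_particles=10_000, seed=1):
        """Sample equal-mass tracer positions on an axisymmetric (r, theta) grid.

        rho : (Nr, Nth) mass density per cell (g/cm^3)
        r   : (Nr,) cell-center radii; theta : (Nth,) cell-center polar angles
        """
        rng = np.random.default_rng(seed)
        dr, dth = np.gradient(r), np.gradient(theta)
        # Axisymmetric cell volumes: 2 pi r^2 sin(theta) dr dtheta
        vol = 2.0 * np.pi * (r**2 * dr)[:, None] * (np.sin(theta) * dth)[None, :]
        prob = (rho * vol).ravel()
        prob /= prob.sum()
        cells = rng.choice(prob.size, size=n_particles, p=prob)
        ir, ith = np.unravel_index(cells, rho.shape)
        # Jitter particles uniformly within their host cells
        r_p = r[ir] + (rng.random(n_particles) - 0.5) * dr[ir]
        th_p = theta[ith] + (rng.random(n_particles) - 0.5) * dth[ith]
        return r_p, th_p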
§.§.§ FLASH
FLASH is a multi-physics simulation framework for astrophysical fluid dynamics <cit.>.
To simulate long-term disk outflows in viscous hydrodynamics, we use the
dimensionally-split Piecewise Parabolic Method (PPM, <cit.>) solver, which is based on the PROMETHEUS code
as implemented in FLASH version 3.2. The public version has been modified to allow
for a non-uniform grid <cit.>, inclusion of a viscous stress in axisymmetry <cit.>,
and the pseudo-Newtonian potential of Artemova <cit.> for gravity as reported in <cit.>.
The neutrino implementation consists of a leakage scheme for cooling, with a local prescription
to compute the optical depth using the pressure scale height <cit.>.
Absorption is included using a lightbulb-type scheme that accounts for the annular
geometry of the accretion disk <cit.>. Three neutrino species are included (ν_e,ν̅_e,ν_x),
with the latter representing all 4 heavy lepton species. Charged-current weak interaction for emission
and absorption reactions of {ν_e,ν̅_e} with nucleons are included using the rates of <cit.>.
Additionally, neutrino emission from e^+e^- pair annihilation and plasmon decay is included,
as well as opacity contributions from charged-current and neutral-scattering contributions
following <cit.>, as reported in <cit.>.
By default, the equation of state is that of <cit.>, with the abundances of neutrons, protons,
and alpha particles in nuclear statistical equilibrium (NSE), accounting for the nuclear binding energy
of alpha particles. An additional set of models is evolved with the same equation of state, but now
additionally including a heavy nucleus (^54Mn) in nuclear statistical equilibrium, to capture
the additional nuclear energy release and match the EOS used by ALCAR (see Appendix <ref>).
The computational domain spans the radial range [10^6,10^11] cm and the full range of polar
angles, using a logarithmic grid in radius with 640 cells, and a polar grid equispaced in cosθ
with 112 cells (Δ r/r≃Δθ≃ 0.02 at the equator). The boundary conditions
are outflow in radius and reflecting in polar angle.
FLASH models evolve tracer particles for post-processing in the same way as the ALCAR models; see <ref>.
§.§ Nucleosynthesis and kilonova post-processing
We employ a nuclear reaction network that includes 7362 nuclei from nucleons to ^313Ds. We include α-decay, β-decay, charged particle reactions, neutron captures and their inverse process, photo-disintegration, as well as spontaneous, neutron-induced, and β-delayed fission. It corresponds to the set of nuclear reactions labelled `FRDM' in ref. <cit.>.
We also consider weak interactions including the electron/positron captures and (anti-)neutrino absorption on nucleons.
For all trajectories, the nucleosynthesis calculation is started from the last time when the temperature reaches 10 GK. For each tracer the early evolution history of thermal quantities and weak interaction rates in the trajectory is obtained based on the simulation data. When the disk simulation ends at t_f=10 s, the tracer reaches a radius r_f.
After the end of the simulation we assume homologous expansion, such that the density is extrapolated as ρ(t)=ρ(t_f) [1+v_f(t-t_f)/r_f]^-3 with the asymptotic velocity v_f at t_f. The temperature is evolved consistently, taking into account viscous and nuclear heating, and including the energy exchange associated with emission
and absorption of neutrinos.
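The density extrapolation used beyond the end of the simulation can be written compactly; the sketch below implements the expression above for a single trajectory (all names and the example numbers are ours).

    import numpy as np

    def rho_homologous(t, rho_f, v_f, r_f, t_f=10.0):
        """Homologous expansion: rho(t) = rho(t_f) [1 + v_f (t - t_f)/r_f]^-3."""
        return rho_f * (1.0 + v_f * (t - t_f) / r_f) ** -3

    # Example: a tracer ending the simulation at r_f = 5e9 cm with v_f = 0.05 c
    c = 2.99792458e10                    # cm/s
    t = np.linspace(10.0, 100.0, 200)    # s
    rho = rho_homologous(t, rho_f=1.0e2, v_f=0.05 * c, r_f=5.0e9)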
Using the masses and final (i.e. at t=t_f) velocities of all trajectories, as well as the nuclear heating rates, mass fractions of lanthanides and actinides, and average mass numbers along each trajectory, we estimate the kilonova signal using the approach detailed in <cit.>. The effective heating rates powering the kilonova are computed from the total heating rates using the approximate thermalization efficiencies of Ref. <cit.>. In addition to heating from β^-- and α-decays as well as fission, which is treated following the standard treatment of <cit.> (as done in Ref. <cit.>), we find also a small contribution of e^--capture and β^+-decays (see, e.g., Ref. <cit.>) that are dominated by the decay of ^56Ni, for which 80 % (20 %) of the energy goes into γ-rays (neutrinos). In contrast to the multi-dimensional kilonova analysis of <cit.>, we assume spherical symmetry, as we are only interested in the most basic properties of the kilonova signal. To this end, we do not apply kernel-based interpolation techniques to map the trajectory properties to the velocity grid, but instead use simple 0th-order binning as follows: We discretize the velocity range between v/c=0 and 0.5 using 50 bins and, for each velocity bin ranging from v to v+Δ v, obtain its mass Δ m by summation of all trajectories falling in this velocity range. The heating rates, lanthanide mass fractions, and average mass numbers (needed for the calculation of the gas energy density) for this bin are computed as mass-weighted averages over the same trajectories. The approximate radiative transfer equations are then solved on a finer grid (ranging from v/c=0 to 0.6 with 300 uniform zones) using linear interpolation to map from the coarser grid.
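The 0th-order binning described above amounts to a mass-weighted histogram over the final trajectory velocities; a minimal version is sketched below (array shapes and names are assumptions).

    import numpy as np

    def bin_trajectories(v_final, m_traj, q_traj, n_bins=50, v_max=0.5):
        """Bin tracer trajectories in velocity (v/c) and mass-average a quantity q.

        v_final : (Ntraj,) asymptotic velocities in units of c
        m_traj  : (Ntraj,) tracer masses
        q_traj  : (Ntraj,) per-trajectory quantity (heating rate, X_lan, <A>, ...)
        """
        edges = np.linspace(0.0, v_max, n_bins + 1)
        idx = np.clip(np.digitize(v_final, edges) - 1, 0, n_bins - 1)
        mass, q_sum = np.zeros(n_bins), np.zeros(n_bins)
        np.add.at(mass, idx, m_traj)
        np.add.at(q_sum, idx, m_traj * q_traj)
        q_avg = np.divide(q_sum, mass, out=np.zeros(n_bins), where=mass > 0)
        return edges, mass, q_avg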
§.§ Model parameters
The baseline configuration mirrors the parameters of model
m1 of <cit.>,
which has a black hole of 3 M_⊙ and spin parameter of 0.8. The
initial condition for the disk is an equilibrium torus with mass 0.1M_⊙,
initial Y_e = 0.1, entropy of 8 k_ B per baryon, constant specific angular momentum, and
radius of initial density maximum at r=40 km (see, e.g., <cit.> for a
study of initial disk properties arising from dynamical merger simulations). The kinematic viscosity coefficient follows
the functional form of <cit.>, namely:
ν = αc_i^2/Ω_ K,
with α being a constant, c_i the isothermal sound speed, and Ω_ K the equatorial
Keplerian angular velocity of the pseudo-Newtonian potential. The default model has α=0.06,
and we also consider alternative models with α=0.03. Only the rϕ and θϕ components
of the viscous stress tensor are considered, in order to mimic conversion of shear kinetic energy
into thermal energy by turbulent angular momentum transport driven by the magnetorotational instability
<cit.>.
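For reference, the prescription can be evaluated as in the short sketch below. The Keplerian angular velocity is taken here to be Newtonian, sqrt(GM/r^3), purely as a stand-in for the Artemova pseudo-Newtonian potential actually used in both codes; all numbers are illustrative.

    import numpy as np

    G = 6.674e-8                  # cm^3 g^-1 s^-2
    M_BH = 3.0 * 1.989e33         # 3 Msun in g

    def nu_visc(alpha, c_i, r):
        """Kinematic viscosity nu = alpha c_i^2 / Omega_K (cgs units).

        Newtonian Omega_K = sqrt(G M / r^3) is used here as a simplification.
        """
        omega_K = np.sqrt(G * M_BH / r**3)
        return alpha * c_i**2 / omega_K

    # e.g. alpha = 0.06, isothermal sound speed 1e9 cm/s at r = 40 km
    nu = nu_visc(0.06, 1.0e9, 4.0e6)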
Our naming convention prepends the letter “A" to models run with ALCAR, and “F" to models
run with FLASH. Models named full use the entire production settings of each code as
described in <ref>,
with suffixes {a3,a6} when using α={0.03,0.06}, respectively.
In addition, we evolve a set of models with reduced neutrino physics: no heavy lepton neutrinos, and
only charged-current neutrino/antineutrino emission and absorption, and neutral-current scattering on
nucleons. These models are denoted with “red" (for reduced).
Also, a version of all models is repeated in FLASH, but now including a representative
heavy nucleus (^54Mn, see Appendix <ref>)
in the EOS, to match that used in ALCAR. These models start with “Fh" (for heavy nucleus).
Finally, model F-full-a3 is repeated using 10 times more tracer particles than the default value,
to test convergence of particle-based analyses (we name it F-full-a3-N10).
§ CODE COMPARISON RESULTS
§.§ Dynamics
§.§.§ Accretion
The evolution of the inner disk during the first ∼ 100 orbits is mostly laminar,
and set by the interplay between viscous angular momentum transport and
neutrino cooling. Figure <ref> shows the mass accretion
rate at r=10 km, slightly inside the radius of the innermost stable circular orbit
(ISCO, ∼ 13 km) for all
models evolved. Aside from a small offset in time, the evolution of the
accretion rate is nearly identical for ALCAR and FLASH models during
this initial phase.
Around ∼ 100 orbits, with the exact value determined by the strength
of viscosity, neutrino cooling decreases sharply and the disk becomes
radiatively-inefficient. The timing of this transition at
∼ 200 ms and ∼ 450 ms
for α = 0.06 and 0.03, respectively (Figure <ref>), shows excellent
agreement between ALCAR and FLASH models. Combined with the
similarity of the inner accretion rate evolution, this agreement shows that
the viscous angular momentum transport is fully compatible
between the two implementations.
After the transition to the radiatively-inefficient (advective) phase,
the disk becomes highly turbulent and the mass accretion rate at the ISCO
becomes more stochastic. Figure <ref> shows
that the amplitude of fluctuations and overall evolution in accretion rate remains consistent
between ALCAR and FLASH models. At this stage, a small offset
becomes apparent between FLASH models that differ in the inclusion
of the nuclear binding energy of a representative heavy nucleus (models
F and Fh in Figure <ref>), with the models having a larger nuclear
binding energy release showing a larger drop in accretion rate (an effect first reported in <cit.>).
§.§.§ Outflow kinematics and nuclear energy release
While the bulk of mass ejection in viscous hydrodynamic evolution
takes place once the disk becomes radiatively inefficient,
earlier outflows do occur.
Figure <ref> shows the total mass outflow rate (bound and unbound)
at 10^9 cm for all models. The most notable difference between ALCAR and
FLASH models is the early bump at 200-400 ms, which corresponds to mass ejection
driven by neutrino energy deposition (the “neutrino-driven wind")[Note that there is
a finite travel time for outflow material to reach the extraction radius from the region where
it is launched: 10^9 cm/0.1 c ∼ 0.3 s.].
This early outflow component is significantly larger in the ALCAR models,
which implement multi-group M1 neutrino transport. Unsurprisingly, the accuracy of
neutrino-driven mass ejection is dependent on the quality of the neutrino transport approximation.
Mass ejection following the transition to radiative inefficiency peaks at a time
∼ 1 s at the extraction radius located at 10^9 cm (Figure <ref>).
The rise, peak, and subsequent evolution of this component, which makes up the
majority of the disk wind, is similar yet quantitatively different for ALCAR and
FLASH models.
While identical evolution is not expected given the large stochastic
fluctuations, a systematic difference is observed: both ALCAR
and FLASH models with a heavy nucleus (Fh) rise earlier to peak, reach a higher
peak, and decrease faster thereafter relative to the FLASH models (F) that only include
the nuclear binding energy of alpha particles. Table <ref> shows that this
difference translates into a ∼ 10% boost in ejected mass and
a 10-20% boost in average outflow velocity when comparing models F and Fh, which
only differ in the inclusion of the nuclear binding energy of ^54Mn. Compared
to the ALCAR models, the F models all have a lower average velocity but eject more mass.
To illustrate the magnitude of the nuclear energy release in the different EOS mixtures,
we plot in Figure <ref> the NSE abundances for a representative thermodynamic
path of the outflow (Y_e=0.3, ρ∝ T^3). At low temperature, the difference
in nuclear binding energy released per nucleon between the Fh (^54Mn) and F (^4He only)
models is
(0.648× 8.74-0.6× 7.07) MeV≃ 1.4 MeV.
Converting to pure kinetic energy, this would correspond to a speed difference of
∼ 0.06 c, which is of the same order of magnitude (but larger) than the difference in average
expansion velocity between Fh and F models.
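Making the order-of-magnitude conversion explicit, the short snippet below repeats the estimate: the difference in binding energy released per nucleon is converted non-relativistically to a velocity, assuming it goes entirely into bulk kinetic energy.

    import numpy as np

    m_u_c2 = 931.494                        # MeV, rest energy per atomic mass unit

    # Binding energy per nucleon released at low temperature (see text):
    # Fh models: X_Mn ~ 0.648 at 8.74 MeV/nucleon; F models: X_alpha ~ 0.6 at 7.07 MeV/nucleon
    dE = 0.648 * 8.74 - 0.6 * 7.07          # ~1.4 MeV per nucleon

    # v/c = sqrt(2 dE / m_u c^2) if the extra release is fully converted to kinetic energy
    v_over_c = np.sqrt(2.0 * dE / m_u_c2)   # ~0.055, i.e. of order 0.06 c
    print(f"dE = {dE:.2f} MeV/nucleon -> v/c = {v_over_c:.3f}")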
The outflow velocity distribution is shown in the rightmost column of Figure <ref>.
In all cases, the velocity distribution has the same qualitative form: a double-peaked structure,
a sharp cutoff at ∼ 0.1 c, and an extended tail to lower velocities. The distribution
shows excellent agreement between all models that use α=0.03 (full-a3 and red-a3), with quantitative
differences related to the amount of mass ejected. A more noticeable
difference appears in the full-a6 set, for which model Fh shows a low-velocity tail that is
shifted to higher velocities. This is consistent with the larger average velocity shown
in Table <ref> which is due to less material moving at low speeds.
Overall, ALCAR and FLASH models that include the nuclear binding energy
contribution from ^54Mn (Fh) have average velocities that differ by less than
10%, thus accounting for most of the difference between A and F models.
Figure <ref> also shows abundances as a function of temperature
for two NSE mixtures with more nuclei. Using the 47 isotope mixture
of <cit.> and employing constant nuclear partition
functions ω_i results in a marginally higher abundance of heavy nuclei
(everything other than ^4He, n, p) by 4% relative to using only ^54Mn,
with an increase in the nuclear
energy release of the same magnitude. Significant differences
require a much larger number of isotopes: the right panel of Figure <ref>
shows abundances obtained with a mixture of 4452 isotopes and using
temperature-dependent partition functions ω_i(T).
While the effect of the temperature-dependent partition functions is to shift the
transition between nuclear species to slightly higher temperature
relative to the case of constant partition functions, the larger number
of nuclei allows a higher mass fraction to be reached and consequently a larger
nuclear energy release. The 20% increase in heavy nuclei mass fraction results in
an extra ∼ 1.7 MeV per nucleon released, which can boost the velocity
by up to another 0.02 c, if fully converted to kinetic energy, relative to using only
^54Mn as a representative nucleus. This motivates future work toward improving
how nuclear physics and r-process heating are included in post-merger simulations.
§.§.§ Neutrino quantities and equilibrium Y_e
The neutrino luminosities and mean energies for A- and F-models are shown in Figure <ref>.
In ALCAR, the M1 luminosities are measured at 500 km, whereas in FLASH they
are computed instantaneously in the entire domain, correcting for the neutrinos absorbed, as in <cit.>.
Despite the different transport methods, the global electron-type neutrino and antineutrino luminosities after t∼ 3 ms
are consistent in both codes to within 10–20%, regardless of the neutrino physics included (i.e., model full-a3 versus red-a3).
A larger discrepancy of up to a factor ∼ 2 is obtained
in the heavy-lepton luminosities (models full-a6 and full-a3). Nevertheless the time evolution is remarkably close in all species,
owing to the agreement in angular momentum transport and global dynamics as discussed in <ref>.
Figure <ref> also shows the mean energies for all neutrino species evolved,
obtained as the global ratio of energy- to number luminosities for each species (as in <cit.>).
For electron-type neutrinos and antineutrinos, the mean energies show close similarity as with the luminosities,
with no significant differences in the level of agreement between models that include all neutrino interactions
and those that reduce the neutrino emitting channels. Again, a larger discrepancy is observed in the mean
energies of heavy lepton neutrinos.
The electron fraction distribution of the outflow at T=5 GK (Figure <ref>) shows a systematic shift of its peak
toward lower electron fractions by ∼ 0.02-0.03 in ALCAR models relative to FLASH models, consistent with the
offset in average Y_e shown in Table <ref>. The entropy distribution peaks at lower values in FLASH models,
but shows otherwise a similar shape relative to ALCAR models, consistent with the agreement in mean
values (Table <ref>). The expansion time also shows consistent distributions between FLASH and ALCAR models,
with larger deviations in Fh models relative to both F and A models.
We can analyze the offset in Y_e by computing the equilibrium values toward which weak interactions are driving
the composition in the disk. These equilibrium values are obtained by balancing the rates of neutrino
and antineutrino emission/absorption (as in, e.g., Ref. <cit.>). We denote by ⟨ Y_ e,em^ eq⟩
the mass-averaged equilibrium electron fraction obtained by balancing electron neutrino and antineutrino
emission rates, ⟨ Y_ e,abs^ eq⟩ the corresponding equilibrium value obtained with absorption
rates only, and ⟨ Y_ e,tot^ eq⟩ the equilibrium value obtained by balancing both emission and absorption rates.
Figure <ref> shows the evolution of the mass-averaged electron fraction ⟨ Y_ e⟩ and the
total equilibrium value towards which weak interactions are driving it. At early times, these equilibrium
values are moderately neutron rich (∼ 0.2), consistent with the mild electron degeneracy of the disk (also shown
in Figure <ref>).
As the disk density decreases and degeneracy drops, weak equilibrium increases Y_e toward
∼ 0.5 because free nucleons recombine into α particles and heavy nuclei (characterized by an average mass number A_ h and charge number Z_ h).
Assuming full recombination and a fixed representative heavy nucleus, mass and charge conservation lead to the following relation for the asymptotic equilibrium electron fraction at low temperature:
Y^eq_e = 1/2 - [1/2-(Z_ h/A_ h)] X_ h ,
where Eq. (<ref>) is also valid in the case where X_ h, A_ h, and Z_ h denote the (average) properties of a distribution of heavy nuclei.
Consistent with these considerations, both ALCAR models and FLASH models that assume ^54Mn as representative heavy nucleus have on average X_ Mn∼ 0.6 at late times, i.e. Y^eq_e ≃ 0.48, that corresponds to
the asymptotic average ⟨ Y_e^ eq⟩ at low temperatures. FLASH models without ^54Mn, i.e. only α particles, have Y^eq_e = 0.5 (Figure <ref>).
The mass-averaged electron fraction follows the shape of the equilibrium value without reaching it in both ALCAR and FLASH models, decoupling around the time at which neutrino luminosities drop significantly (cf. Figure <ref>). An offset between ALCAR and FLASH models is apparent in both the average electron fraction and the equilibrium value, with FLASH models showing consistently higher values. The offset in the electron fraction distribution seen in Figure <ref> thus most likely originates from the offset in the equilibrium electron fractions obtained with each code, given that all models start with the same initial electron fraction.
To analyze this offset further, we separate the equilibrium electron fraction into emission and absorption
components in Figure <ref> for the full-a3 models. At early times, the ALCAR model has a
lower emission equilibrium and a higher absorption equilibrium than FLASH models, such that the net equilibrium value is lower. The difference in emission equilibrium can be attributed to the higher electron degeneracy in the ALCAR models (Fig. <ref>), while the difference in absorption equilibrium is likely due to the different neutrino transport implementations.
Note that the average Y_e in FLASH models takes longer to approach the equilibrium electron fraction
than in ALCAR models, because the effective weak interaction timescales are significantly longer
due to the leakage and absorption
implementation. Figure <ref> shows the average weak interaction timescales in the disk due to
neutrino emission ⟨τ_ em⟩ and absorption ⟨τ_ abs⟩, as well
as the accretion (viscous) timescale ⟨τ_ vis⟩, for the full-a3 model,
computed as in Ref. <cit.>.
In the ALCAR model, the emission and absorption timescales are shorter than the
accretion and current times t, hence the torus approaches Y_e equilibrium quickly and remains
close to that state until freeze-out, as shown in Figure <ref>. The FLASH model, on the
other hand, is such that the shortest timescale, ⟨τ_ em⟩, is initially
shorter than the accretion time but longer than the current time, hence the Y_e of the
torus remains out of equilibrium until t∼ 10 ms.
As time elapses and the absorption contribution decreases, the net equilibrium Y_e merges with
the emission equilibrium, and the offset in equilibrium value between the two codes decreases. Despite these
differences, the actual mass-averaged electron fraction between the two codes has a moderate offset of ∼ 0.02
throughout the evolution.
§.§ Nucleosynthesis
Figure <ref> compares the abundance yields from ALCAR and FLASH models as functions of mass number A at 1 Gyr and of atomic number Z at 1 day. The abundance patterns in models F and Fh are very similar to each other, except that lanthanides are enhanced by a factor of ∼2–4 in model Fh-full-a3 compared to model F-full-a3.
This suggests that the nucleosynthesis outcome is not very sensitive to the inclusion of the nuclear binding energy of ^54 Mn (Fh).
Overall, ALCAR and FLASH models agree very well. The abundance patterns near the first r-process peak are well reproduced compared to that in the metal-poor star HD-222925 <cit.>. Consistent with the offset for the peak electron fraction from ∼ 0.28–0.29 in the FLASH models to ∼ 0.23–0.24 in the ALCAR models (Figure <ref>), the abundances of 2nd-peak r-process elements (A≈ 130) in the ALCAR models are higher than in the F and Fh models, namely by a factor of ∼1.5–2, because those elements are most efficiently produced from the ejecta with Y_e=0.2 to 0.23. The enhancement of ^132 Te at 1 day leads to higher specific heating rates (shown in the second column of Fig. <ref>) originating from the β^- decay chain of ^132 Te–^132 I–^132 Xe. A noteworthy difference is observed in the abundance of actinides, which is about a factor of 10–40 smaller in the ALCAR models, associated with the lack of very neutron-rich ejecta with Y_e<0.2 in these models. The small amount of ejecta with Y_e∼ 0.1 in FLASH models (Figure <ref>) is apparently sufficient to make a significant difference in the yields of actinides, therefore potentially allowing these elements to be used as diagnostics of the ejecta electron fraction.
The model sets full-a3 and red-a3 underproduce nuclei with A>140 compared to the solar r-process pattern, while the model set full-a6 shows a more consistent abundance pattern even in 3rd-peak elements and actinides. As found in previous studies (e.g. <cit.>), a higher viscosity leads to faster matter ejection and, therefore, earlier freeze-out of Y_e, increasing the fraction of matter ejected with Y_e<0.2 (i.e. neutron-rich enough to enable actinide production). The early freeze out results in more matter being ejected with Y_e < 0.2, leading to a more solar-like distribution of elements heavier than A≈ 140 <cit.>.
§.§ Kilonova signal
The kilonova signal produced by the ejecta is compared for all models in Figure <ref>. As anticipated from the similarity of outflow properties and nucleosynthesis yields, the basic kilonova properties (bolometric luminosity, photospheric temperature and velocity) show an overall good level of agreement, especially considering that matter ejection from turbulent disks involves a non-negligible level of stochasticity.
The specific heating rates, shown in the second column from the left, differ only marginally when varying the physics input, while they are systematically shifted to higher values (about 20–40 % during the considered times) in the ALCAR models compared to the FLASH models. This difference is connected to the more pronounced 2nd r-process peak in the ALCAR models, as mentioned in the previous section. However, the impact of this difference on the brightness of the kilonova is partially compensated by the slightly smaller ejecta masses of the ALCAR models.
Since BH-disk ejecta are relatively slow compared to other ejecta components that can be produced in a NS merger, they become optically thin at rather late times, t∼ 5–10d. These transition times when the ejecta start to become optically thin can be read off in the third column of Figure <ref> as the times when the total luminosities for the first time exceed the effective heating rates. Previous to these transition times, most of the ejecta are still optically thick and the emission is produced from just the outermost ejecta layers (see first and fifth column of Fig. <ref>). The higher luminosities seen at early times in the ALCAR models are likely to be connected to the more pronounced neutrino-driven mass ejection in the ALCAR models, which leads to an extended high-velocity tail.
However, another, purely numerical reason may also be poor sampling of the high-velocity edge of the ejecta with tracer particles. This is suggested by the comparison with model F-full-a3-N10, which evolves 10 times more tracer particles than model F-full-a3 and exhibits significantly brighter emission at early times. This comparison suggests that the adopted number of 1500-2000 equal-mass tracer particles is high enough to describe the main part of the light curve, during which the photosphere travels through the ejecta, but is insufficient for accurately resolving the emission at earlier times. We note, however, that the reduced accuracy at early times may not be overly relevant for kilonova modeling of NS mergers, because the early light curve is likely to be dominated by other ejecta components.
§ SUMMARY AND DISCUSSION
We have carried out a code comparison study of ALCAR and FLASH,
both of which have been used extensively over the past decade to study
the viscous hydrodynamic evolution of BH accretion disks formed
in neutron star mergers. For the comparison, we employ
a representative system around a BH with identical initial conditions, and vary the viscosity
parameter as well as neutrino physics. Our main results are the following:
1. – We find excellent agreement in the quantities that
depend on angular momentum transport, i.e., the accretion rate history,
and timing of weak interaction freeze-out (Figures <ref> and <ref>).
A larger discrepancy is obtained in quantities that depend on the neutrino
transport approximation, such as the magnitude of the neutrino-driven wind (Figure <ref>) and
the electron fraction distribution of the ejected material (Figure <ref>).
2. – Both codes show the same progression of the equilibrium electron fraction
from low to high values as the disk becomes less degenerate over time. Slightly
higher electron degeneracies, and therefore more neutron-rich equilibrium conditions,
are found in the ALCAR models, which accounts for the ∼ 10% shift to lower average
Y_e values in the ejecta of the ALCAR compared to the FLASH models
(Figs. <ref>-<ref>).
3. – The outflow velocity is sensitive to the accuracy with which the nuclear binding energy
release is treated. Including a representative heavy nucleus (^54Mn) results in an
additional energy release of ∼ 1 MeV per baryon relative to including only alpha particle
recombination (Figure <ref>). Additional energy release can be obtained when including a much larger
isotope mixture. This motivates further work toward including more realistic nuclear physics
in post-merger simulations.
4. – The nucleosynthesis yields follow the offset in Y_e, with ALCAR models producing more
elements with A>130 by a factor ∼ 2 relative to FLASH models, within an overall good agreement otherwise. This also results in a higher heating rate in ALCAR models due to enhancement of the ^132 Te–^132 I–^132 Xe β^- decay chain. Very minor differences result from the additional energy release from ^54Mn models in FLASH (Fh versus F), with the possible exception of factor ∼ 2 changes in the lanthanides fraction in some models.
Small differences in the amount of ejecta with Y_e∼ 0.1 can have a factor ∼ 10 imprint in the abundance
of actinides, with the potential to use these species as a diagnostic of the electron fraction of the disk outflow.
5. – The kilonova signature is quite similar in all models after the ejecta become optically thin,
despite the aforementioned differences in the heating rate. More pronounced differences are found at early times when the ejecta are optically thick, with ALCAR models being brighter due in part to a more extended high-velocity tail given the more prominent neutrino-driven wind. We also find evidence of undersampling of the early ejecta with ∼ 2000 total particles, showing a significant brightening in FLASH models when 10 times more particles are used.
The expected boost in expansion velocity when including a large number of
isotopes (∼ 0.1 c relative to pure alpha particles, Fig. <ref>)
is consistent with what has been found in studies looking at the impact of r-process
heating on the late-time evolution of disk outflows (e.g., <cit.>).
Development of EOSs with more complete nuclear heating rates that also
cover the relevant thermodynamic range for post-merger evolution would be
highly useful for post-merger modeling.
Further code comparison studies are needed to bracket uncertainties in theoretical predictions
for kilonova light curves and r-process nucleosynthesis yields, as more events with
electromagnetic counterparts are anticipated in the future. While a first code comparison of GRMHD
models has been carried out recently covering short evolutionary timescales of tens of milliseconds <cit.>, more extensive comparisons of GRMHD models that include a microphysical EOS and neutrino transport are needed. The high computational cost of these calculations makes such extensive comparisons impractical at present, but they remain highly desirable for the future.
RF acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through Discovery Grants RGPIN-2017-04286 and RGPIN-2022-03463. A sabbatical visit, during which this work was conceived and partially completed, was supported by the Cluster Project ELEMENTS from the State of Hesse and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC Advanced Grant KILONOVA No. 885281). RF is also grateful for
the hospitality of the GSI Helmholtz Centre for Heavy Ion Research and the Institute of Physics, Academia Sinica, where part of this work was also conducted. OJ acknowledges support by the ERC under the European Union's Horizon 2020 research and innovation programme under grant agreement nr 759253. GMP and ZX acknowledge support by the ERC under the European Union's Horizon 2020 research and innovation programme (ERC Advanced Grant KILONOVA No. 885281).
OJ, GMP, and ZX also acknowledge support by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 279384907 - SFB 1245 and MA 4248/3-1, and the State of Hesse within the Cluster Project ELEMENTS.
Some of the software used in this work was in part developed by DOE NNSA-ASC OASCR Flash Center at the University of Chicago. We also acknowledge support from the Shared Hierarchical Academic Research Computing Network (SHARCNET, www.sharcnet.ca) and the Digital Research Alliance of Canada (alliancecan.ca). FLASH models were run on the Niagara supercomputer at the SciNet HPC Consortium <cit.> and analyzed on the graham cluster at the University of Waterloo. SciNet is funded by the Canada Foundation for Innovation, the Government of Ontario
(Ontario Research Fund - Research
Excellence), and by the University of Toronto.
We are also grateful for computational support by the VIRGO cluster at GSI and the HOKUSAI computer center at RIKEN.
§ NUCLEAR STATISTICAL EQUILIBRIUM FOR IONS
Here we provide the explicit relations defining nuclear statistical
equilibrium for the ion component of the equation of state. Chemical equilibrium
leads to the stoichiometric equations for the chemical potentials
associated with reactions relating particle species, combined with
mass and charge conservation. For a gas of neutrons, protons,
alpha particles, and ^54Mn nuclei, we have
2μ_ n + 2μ_ p = μ_α
12μ_α + 5μ_ n + μ_ p = μ_ Mn
X_ n + X_ p + X_α + X_ Mn = 1
X_ p + 1/2X_α + 25/54X_ Mn = Y_e
where the subscripts {n,p,α,Mn} correspond to neutrons,
protons, alpha particles, and ^54Mn nuclei, μ_i are the
chemical potentials, X_i are the mass fractions, and Y_e is the
electron fraction. If particles follow a Maxwell-Boltzmann distribution,
we have
μ_i = k_ BT[ln(n_i/n_ Q,i) - lnω_i] - χ_i,
where χ_i is the nuclear binding energy, ω_i is the nuclear
partition function, n_i is the number density, and
n_ Q,i = (m_i k_ BT/2πħ^2)^3/2
is the quantum concentration of particle species i. The stoichiometric equations for the chemical
potential then become Saha equations for each dissociation/recombination channel.
Solution of the system of Equations (<ref>)-(<ref>)
yields the equilibrium mass fractions for each species, as a function of density, temperature,
and electron fraction X^ NSE_i(ρ,T,Y_e). The nuclear binding energy
contribution to the specific internal energy is included as
e_ int = e_ int^0 -χ_α/m_α X_α - χ_ Mn/m_ Mn X_ Mn
where e_ int^0 is the internal energy excluding nuclear binding energy,
and m_i is the mass of particle species i. For a non-relativistic ion gas, we have
e_ int^0 = 3/2k_ BT/m_n∑_i X_i/A_i.
For an equation of state in which the temperature is found from the internal energy using a Newton-Raphson scheme
(as in FLASH), the derivatives of the mass fractions in NSE with respect to temperature must also be calculated
to obtain the total temperature derivative of the internal energy as defined in Equation (<ref>).
The original FLASH implementation does not account for ^54Mn nuclei, thus Equation (<ref>) is not
included, and X_ Mn=0 everywhere else (including in Equation <ref>). NSE is then obtained
by solving the Saha equation for α particles directly. To include ^54Mn nuclei in the extended
set of simulations, we generate a table of {n,p,α,Mn} mass fractions and associated temperature
derivatives (∂ X^ NSE_i / ∂ T)_ρ,Y_e using the code[Available at cococubed.com]
of <cit.> and constant
partition functions ω_ n=ω_ p=2 and ω_α=ω_ Mn=1. The table covers the
range T∈ [5,100]× 10^9 K,
log_ 10ρ∈ [1,12], and Y_e ∈ [0.01,0.99]. The ion internal energy is then computed
using Equations (<ref>) and (<ref>).
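As an illustration of how the NSE relations above can be solved in practice, the sketch below obtains mass fractions for the {n, p, α, ^54Mn} mixture at given (ρ, T, Y_e): the chemical-potential conditions are rewritten as Saha relations for the bound species, and the two conservation equations are solved for the free-nucleon densities. This is a bare-bones stand-in for the tabulation described above, not the code actually used; the overflow guard and the nucleon-dominated initial guess (appropriate near 10 GK) are our own simplifications.

    import numpy as np
    from scipy.optimize import fsolve

    kB, hbar = 1.380649e-16, 1.054572e-27      # erg/K, erg s
    m_u, MeV = 1.660539e-24, 1.602177e-6       # g, erg
    # name: (A, Z, binding energy chi [MeV], partition function omega)
    species = {"n": (1, 0, 0.0, 2.0), "p": (1, 1, 0.0, 2.0),
               "He4": (4, 2, 28.30, 1.0), "Mn54": (54, 25, 54 * 8.74, 1.0)}

    def n_Q(A, T):
        """Quantum concentration (A m_u k T / 2 pi hbar^2)^(3/2), approximating m_i by A m_u."""
        return (A * m_u * kB * T / (2.0 * np.pi * hbar**2)) ** 1.5

    def nse_mass_fractions(rho, T, Ye):
        def densities(log_nn, log_np):
            dens = {"n": np.exp(log_nn), "p": np.exp(log_np)}
            for name in ("He4", "Mn54"):
                A, Z, chi, omega = species[name]
                # Saha relation from mu_i = Z mu_p + (A - Z) mu_n
                ln_ni = (np.log(omega * n_Q(A, T))
                         + Z * (log_np - np.log(2.0 * n_Q(1, T)))
                         + (A - Z) * (log_nn - np.log(2.0 * n_Q(1, T)))
                         + chi * MeV / (kB * T))
                dens[name] = np.exp(min(ln_ni, 690.0))   # crude overflow guard
            return dens

        def residuals(x):
            d = densities(*x)
            n_B = sum(species[s][0] * d[s] for s in d)    # baryon number density
            n_e = sum(species[s][1] * d[s] for s in d)    # charge (electron) density
            return [n_B * m_u / rho - 1.0, n_e / n_B - Ye]

        guess = np.full(2, np.log(0.5 * rho / m_u))       # nucleon-dominated guess
        d = densities(*fsolve(residuals, guess))
        n_B = rho / m_u
        return {s: species[s][0] * d[s] / n_B for s in d}

    # Example: conditions near the start of the nucleosynthesis post-processing (10 GK)
    X = nse_mass_fractions(rho=1.0e6, T=1.0e10, Ye=0.48)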
|
http://arxiv.org/abs/2307.02519v1
|
20230705161659
|
Transient spectroscopy from time-dependent electronic-structure theory without multipole expansions
|
[
"Einar Aurbakken",
"Benedicte Sverdrup Ofstad",
"Håkon Emil Kristiansen",
"Øyvind Sigmundson Schøyen",
"Simen Kvaal",
"Lasse Kragh Sørensen",
"Roland Lindh",
"Thomas Bondo Pedersen"
] |
physics.chem-ph
|
[
"physics.chem-ph"
] |
[email protected]
Hylleraas Centre for Quantum Molecular Sciences,
Department of Chemistry, University of Oslo, Norway
Hylleraas Centre for Quantum Molecular Sciences,
Department of Chemistry, University of Oslo, Norway
Hylleraas Centre for Quantum Molecular Sciences,
Department of Chemistry, University of Oslo, Norway
Department of Physics,
University of Oslo, Norway
Hylleraas Centre for Quantum Molecular Sciences,
Department of Chemistry, University of Oslo, Norway
University Library, University of Southern Denmark, DK-5230 Odense M, Denmark
Department of Chemistry—BMC,
Uppsala University, Sweden
[email protected]
Hylleraas Centre for Quantum Molecular Sciences,
Department of Chemistry, University of Oslo, Norway
Based on the work done by an electromagnetic field on an atomic or molecular electronic system,
a general gauge invariant formulation of transient absorption spectroscopy is presented within the semi-classical approximation.
Avoiding multipole expansions, a computationally viable expression for the spectral response function
is derived from the minimal-coupling Hamiltonian of an electronic system interacting with one or more
laser pulses described by a source-free, enveloped electromagnetic vector potential.
With a fixed-basis expansion of the electronic wave function, the computational cost of
simulations of laser-driven electron dynamics beyond the dipole approximation is the same
as simulations adopting the dipole approximation.
We illustrate the theory by time-dependent configuration interaction and coupled-cluster simulations of
core-level absorption and circular dichroism spectra.
Transient spectroscopy from time-dependent electronic-structure theory without multipole expansions
Thomas Bondo Pedersen
August 1, 2023
===================================================================================================
§ INTRODUCTION
Using technology developed in the past two decades, ultrashort laser pulses with attosecond duration have enabled the observation and manipulation of
multi-electron dynamics in atoms, molecules, and materials, thus opening new research avenues in physics and chemistry <cit.>.
Quantum-mechanical simulations are mandatory to properly understand, interpret, and predict advanced attosecond experiments.
While nuclear motion becomes important on longer time-scales (femtoseconds),
one- and multi-electron ionization dynamics constitute major challenges for time-dependent electronic-structure simulations, along with electron-correlation effects <cit.>.
Single active electron (SAE) models <cit.>
that, at best, only account for electron correlation through an effective potential are widely used to study processes
induced by lasers with frequency well below any multi-electron excitation energy. As the frequency increases and approaches
resonance with a multi-electron excited state, the SAE approximation breaks down and a correlated many-body method should
be applied instead <cit.>.
Regardless whether the SAE model or a many-body description is used, most simulations of laser-induced processes employ the
electric-dipole approximation where the magnetic component of the laser field is neglected and the electric component is
assumed to be spatially uniform. This is an excellent approximation when the spatial extent of the electronic system is small compared
with the wavelength of the laser field.
Attosecond laser pulses, however, are commonly generated by high harmonic generation in the extreme ultraviolet and X-ray spectral regions
where beyond-dipole effects may become non-negligible.
It is, therefore, of interest to include higher-order electric and magnetic multipole interactions in simulations of laser-driven
electron dynamics, preferably without incurring a significant computational penalty.
Within response theory <cit.>, which is essentially time-dependent perturbation theory Fourier-transformed to the frequency domain,
beyond-dipole effects have been studied using the full plane-wave vector potential for the semiclassical
description of the matter-field interaction <cit.>.
Due to the use of perturbation theory and the neglect of terms quadratic in the vector potential,
these studies are limited to weak laser fields but do not suffer from issues such as origin-dependence and slow basis-set
convergence that may arise from the use of multipole expansions <cit.>.
Conceptually, at least, it is rather straightforward to generalize the response-theory approaches
to the time domain, avoiding perturbation theory altogether and hence enabling the study of both weak- and strong-field processes
without multipole expansions.
The theory of transient absorption spectroscopy (TAS), see, e.g., recent work by <cit.>, has been formulated in the framework of
the electric-dipole approximation. In the present work, we present a generalization that accounts for the presence of spatially non-uniform fields, which
reduces to the original formulation in the long-wavelength (electric-dipole) limit. In line with the previous work based on response
theory <cit.>,
we present initial test simulations on small molecules in the weak-field limit using time-dependent configuration-interaction
(TDCI) <cit.>
and time-dependent coupled-cluster (TDCC) <cit.> theories.
Ignoring ionization processes, we use static, atom-centered Gaussian basis sets such that the prerequisite integrals involving the
full plane-wave vector potential can be computed using the recent implementation reported by <cit.>.
This allows us to validate our implementation of the generalized theory of TAS by comparing with previously reported theoretical pump-probe
and X-ray absorption spectra. In addition, we compute the anisotropic X-ray circular dichroism (CD) spectrum of hydrogen peroxide generated from
simulations of the electrons interacting with circularly polarized laser
pulses <cit.>,
comparing with the CD spectrum predicted by the rotatory strength tensor <cit.>.
§ THEORY
Atomic and molecular transient (as well as steady-state) absorption spectra can be obtained by computing the spectral response function
S(ω) which, in turn, is obtained from a frequency-resolved analysis of the total energy transfer Δℰ between an electromagnetic field
and the electronic system. The spectral response function S(ω) is defined such that it satisfies the relation
Δℰ = ∫_0^∞ dω ω S(ω).
The absorption cross section σ(ω) can be computed as
σ(ω) = ω S(ω)/I(ω),
where I(ω) is the total field energy per unit area at frequency ω.
In this work, however, we shall focus on the spectral response function.
We first formulate a general, gauge invariant theory for the energy transfer, proceeding to the derivation of the spectral response function
for the specific case of an enveloped, source-free electromagnetic field without multipole expansion.
§.§ Energy transfer
We consider an atomic or molecular electronic system exposed to
the classical electromagnetic fields
E(r,t) = -∂_tA(r,t) - ∇ϕ(r,t),
B(r,t) = ∇×A(r,t),
where A(r,t) and ϕ(r,t) are the vector and scalar potentials, respectively.
Specifically, we will consider the interactions of the electrons with laser pulses, i.e.,
the physical electric and magnetic fields, E and B, are nonzero only in a finite time interval
and vanish as t→±∞.
Within the nonrelativistic, clamped-nuclei Born-Oppenheimer approximation the time evolution of the electronic
system is governed by the electronic Schrödinger equation
i|Ψ̇(t)⟩ = H(t) |Ψ(t)⟩, |Ψ(t→ -∞)⟩ = |Ψ_0⟩,
where |Ψ_0⟩ is the initial wave function of the electrons, typically the ground-state wave function in the absence
of external fields.
The semiclassical, minimal-coupling Hamiltonian is given by
H(t) = 1/2π^2(r,t) + W - ϕ(r,t),
where π(r,t) = p + A(r,t) is the kinetic momentum operator
and W represents all Coulomb interactions among the electrons and (clamped) nuclei.
Throughout this paper, summation over electrons will be implicitly assumed for brevity of notation, and Hartree atomic units are used.
We have also skipped the spin-Zeeman term as we will use only closed-shell, spin-restricted wave functions in the present
work.
We wish to derive a general expression for the spectral response function S(ω) in Eq. (<ref>).
Physically, the total energy transfer Δℰ
expresses the work performed on the electronic system by the external electromagnetic fields,
and the rate of change of the energy is referred to as the power.
In classical electrodynamics <cit.>, the power function of an electron in an electromagnetic field is given by
P = -E·v, where v is the velocity of the electron.
This is also the energy lost by the electromagnetic field as calculated by Poynting's theorem <cit.>,
ensuring energy conservation (of the particle and field systems together).
The quantum-mechanical power operator can be obtained by Weyl quantization <cit.>
as
P(r, t) = -1/2(E(r,t)·π(r,t) + π(r,t)·E(r,t)).
Hence, we may express the total energy transferred from the field to the electronic system as
Δℰ = ∫_-∞^∞ dt ⟨P(r,t)⟩.
In previous work on transient absorption spectroscopy—see,
e.g., Refs. <cit.>—the energy
transfer is expressed as the integral
Δℰ = ∫_-∞^∞ dt dℰ(t)/dt,
where ℰ(t) is the instantaneous energy of the electrons.
At this point, the instantaneous energy is typically equated with the quantum-mechanical expectation value of the Hamiltonian,
⟨H(t)⟩ = ⟨Ψ(t)|H(t)|Ψ(t)⟩. In general, however,
neither the expectation value ⟨H(t)⟩ nor the Hamilton function in classical mechanics <cit.>
equals the energy of the electrons when a time-dependent external electromagnetic field is present.
This is clear from the fact that both ⟨H(t)⟩ and d⟨H(t)⟩/dt are gauge-dependent quantities.
Instead, the operator <cit.>
K(t) = H(t) + ϕ(r,t) = 1/2π^2(r,t) + W,
can be regarded as a (generally time-dependent) energy operator which yields gauge invariant expansion coefficients
and transition probabilities when the wave function is expanded in its (generally time-dependent) eigenstates.
Using the energy operator, Eq. (<ref>), and the Ehrenfest theorem, we find
dℰ(t)/dt = d⟨K(t)⟩/dt
= ⟨P(r,t)⟩,
which leads to Eq. (<ref>) upon substitution in Eq. (<ref>).
We refer to references <cit.> for further discussions of the intricacies of gauge invariance in external time-varying fields.
Within the electric-dipole approximation, A(r,t) ≈ A(0,t) = A(t), ϕ(r,t) = 0, which was assumed in
previous work <cit.>, the power operator becomes P(t) = -π(t)·E(t). Inserting this expression into Eq. (<ref>) yields
Δℰ = -∫_-∞^∞ dt ⟨π(t)⟩·E(t).
Using the Ehrenfest theorem,
d⟨r⟩/dt = ⟨π(t)⟩,
and integration by parts, we arrive at
Δℰ = ∫_-∞^∞ dt ⟨r⟩·Ė(t),
which agrees with the expressions obtained in Refs. <cit.>.
Identifying the instantaneous energy as the expectation value ⟨H(t)⟩ is valid when the scalar potential vanishes
which, in turn, is a valid choice with the Coulomb gauge condition ∇·A(r,t) = 0 whenever the electric field is divergence-free (no charge contributions to the electric field), i.e., within the radiation gauge <cit.>.
It is a peculiarity of the electric-dipole approximation that the correct energy transfer is obtained from
⟨H(t)⟩ with the choices A(r,t) = 0 and ϕ(r,t) = -r·E(t).
§.§ Representation of laser pulses without multipole expansion
From here on we will assume a divergence-free electric field and work in the radiation gauge such that K(t) = H(t).
Following common practice, we separate the Hamiltonian into a time-independent and a time-dependent part,
H(t) = H_0 + V(t),
H_0 = 1/2p^2 + W,
V(t) = A(r,t)·p + 1/2A^2(r,t).
In the context of time-dependent perturbation theory or frequency-dependent response theory,
the weak-field approximation—i.e., neglecting the term quadratic in the vector potential—is
usually invoked, although it is not formally necessary to do
so <cit.>.
For the real-time simulations pursued in the present work, invoking the weak-field approximation does not lead
to any simplifications and, hence, we retain the quadratic term in all simulations.
The vector potential that solves the Maxwell equations within the Coulomb gauge is a linear combination of plane waves. However, this is impractical for modelling ultra-fast laser pulses.
We will instead model the vector potential as a linear combination of enveloped plane waves
A(r,t) = ∑_m A_m(r,t) G_m(t)
= ∑_m A_m Re{u_m e^{i(k_m·r - ω_m t - γ_m)}} G_m(t),
where each term in the sum models a single pulse with amplitude A_m,
carrier frequency ω_m,
and carrier-envelope phase γ_m.
The Coulomb gauge condition implies that
the (complex) polarization vector u_m is orthogonal to the real wave vector k_m,
which has length ω_m/c, where c is the speed of light.
The electric- and magnetic-field amplitudes of each pulse are E_m = ω_m A_m and B_m = E_m/c, respectively, and we define the peak intensity of each pulse to be
I_m = 1/2 ϵ_0 c E_m^2.
Chirped laser pulses can be modelled by letting γ_m be time-dependent.
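For orientation, the peak intensities quoted later in this work follow from this definition. A minimal Python sketch of the conversion between a field amplitude in atomic units and a peak intensity in W/cm^2 (the helper names are ours, not part of any package) reads:

import numpy as np

AU_INTENSITY_W_CM2 = 3.50945e16  # intensity of a 1 a.u. field, I = (1/2) eps0 c E^2

def peak_intensity(E_au):
    """Peak intensity in W/cm^2 for a field amplitude E_au given in atomic units."""
    return AU_INTENSITY_W_CM2 * E_au**2

def field_amplitude(I_w_cm2):
    """Field amplitude in atomic units for a given peak intensity in W/cm^2."""
    return np.sqrt(I_w_cm2 / AU_INTENSITY_W_CM2)

print(peak_intensity(0.01))  # ~3.51e12 W/cm^2, cf. the pulse parameters quoted below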
In experimental work, Gaussian functions are often favored for the envelopes G_m(t). In numerical studies, however,
Gaussians are inconvenient due to their long tails and infinite support.
For this reason, we use trigonometric envelopes of the form <cit.>
G_m(t) = cos^n(π(t-t_m)/T_mn) for |t-t_m| ≤ T_mn/2, and G_m(t) = 0 for |t-t_m| > T_mn/2,
where n > 0 is a chosen parameter, t_m is the central time of pulse m, and T_mn is the total duration
of A_m.
The total duration depends on n and may be computed from
T_mn = πτ_m / [2 arccos(2^{-1/(2n)})],
where τ_m is the full width at half maximum of G^2_m(t), i.e., τ_m is approximately the desired
experimental pulse duration defined from the intensity distribution <cit.>.
The trigonometric envelopes, Eq. (<ref>), define a sequence of functions that rapidly
and uniformly converges to the Gaussian function exp(-2 ln(2) (t-t_m)^2/τ_m^2) for increasing values of n <cit.>.
Moreover, in contrast to finite numerical representations of Gaussian envelopes,
the trigonometric envelopes guarantee that the dc (zero-frequency) component of the electric field vanishes identically
for any choice of n > 0, in agreement with the far-field approximation of the Maxwell equations <cit.>.
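To make the envelope construction concrete, the following NumPy sketch implements the cos^n envelope and its duration and compares with the Gaussian limit; the function names are ours and the snippet is illustrative only (it assumes integer n):

import numpy as np

def pulse_duration(tau, n):
    """Total duration T_mn of the cos^n envelope whose squared envelope has FWHM tau."""
    return np.pi * tau / (2.0 * np.arccos(2.0 ** (-1.0 / (2.0 * n))))

def envelope(t, t_center, tau, n):
    """Trigonometric cos^n envelope, zero outside |t - t_center| <= T_mn/2."""
    T = pulse_duration(tau, n)
    g = np.cos(np.pi * (t - t_center) / T) ** n
    return np.where(np.abs(t - t_center) <= T / 2.0, g, 0.0)

# Example: maximum deviation from the Gaussian limit exp(-2 ln2 t^2 / tau^2) for tau = 10 a.u.
t = np.linspace(-20.0, 20.0, 2001)
gauss = np.exp(-2.0 * np.log(2.0) * t**2 / 10.0**2)
for n in (2, 19):
    print(n, np.max(np.abs(envelope(t, 0.0, 10.0, n) - gauss)))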
A similar setup has been used before in grid treatments of single-electron systems <cit.>, where pulses of the form
A(r,t) = A_0 sin^2(π (ω t - k·r)/ω T)sin(ω t - k·r)u,
were used. Here, u is a real polarization vector and the envelope depends both on time and on spatial coordinates.
This has the benefit of modelling the overall shape of the pulse in space, albeit with potential edge effects if the approximation A(r,t) ≈ 0 at t=0 and t=T is made along with a neglect of the spatial non-periodicity.
The pulse with the purely time-dependent envelope, Eq. (<ref>) with n=2, may be regained from the spatio-temporal envelope
by an expansion through lowest order in k·r/n_cyc, where n_cyc is the number of optical cycles of the pulse.
§.§ The spectral response function
Since we have assumed a divergence-free electric field, the power operator becomes P(r,t) = -E(r,t)·π(r,t), and Eq. (<ref>) simplifies to
Δℰ = -∫_-∞^∞ dt ⟨E(r,t)·π(r,t)⟩.
Using the Fourier transform convention
f(t) = ℱ_ω[f̃(ω)] = 1/√(2π)∫_-∞^∞ dω f̃(ω) e^{iω t},
f̃(ω) = ℱ_t[f(t)] = 1/√(2π)∫_-∞^∞ dt f(t) e^{-iω t},
the integration over time in Eq. (<ref>) can be turned into an integration over frequency,
Δℰ = ∫_-∞^∞ dω Y(ω),
with
Y(ω) = -ℱ_t[ iω ⟨A(r,ω)^*·π(r,t)⟩ ],
where we have used E(r,ω)^* = iω A(r,ω)^*.
Introducing
f_1,m(r) = cos(k_m·r),
f_2,m(r) = sin(k_m·r),
g_1,m(t) = cos(ω_mt+γ_m)G_m(t),
g_2,m(t) = sin(ω_mt+γ_m)G_m(t),
u_m^ij = δ_ij Re(u_m) + ϵ_ij Im(u_m),
where δ_ij is the Kronecker delta and ϵ_ij is the Levi-Civita symbol, the vector potential, Eq. (<ref>),
can be recast as
A_m(r,t) G_m(t) = A_m ∑_i,j=1^2 u_m^ij f_i,m(r) g_j,m(t).
Equation (<ref>) can now be written as
Y(ω) = iω∑_m∑_i,j=1^2 F_ij,m(ω) g_j,m(-ω),
where F_ij,m(ω) is the Fourier transform of the function
F_ij,m(t) = -A_m u_m^ij·[ ⟨ f_i,m(r) p ⟩
+ ∑_n∑_k,l=1^2 A_n u_n^kl ⟨ f_k,n(r) f_i,m(r) ⟩ g_l,n(t) ].
Hence,
Δℰ = ∫_0^∞ dω iω∑_m∑_i,j=1^2 [(1 - 𝒫) F_ij,m(ω) g_j,m(-ω) ],
where 𝒫 is the parity operator defined by 𝒫f(ω) = f(-ω).
The spectral response function thus becomes
S(ω) = i∑_m∑_i,j=1^2 (1 - 𝒫) F_ij,m(ω) g_j,m(-ω),
which can be computed by sampling F_ij,m(t) during a simulation, followed by Fourier transformation in a post-processing step.
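As an illustration of this post-processing step, the following schematic NumPy routine (ours, not taken from HyQD) assembles S(ω) from stored time series. It assumes that the real-valued series F_ij,m(t) and g_j,m(t) are sampled on one common uniform grid, so that the common FFT phase factors cancel in the product; for real series, i(1-𝒫)[F(ω)g(-ω)] reduces to -2 Im[F(ω)g(ω)^*], which is what the code evaluates.

import numpy as np

def spectral_response(F, g, dt):
    """Assemble S(omega) from sampled time series, cf. Eq. (<ref>).

    F : array of shape (M, 2, 2, N) holding F_{ij,m}(t_k) on a uniform time grid
    g : array of shape (M, 2, N) holding g_{j,m}(t_k) on the same grid
    dt: grid spacing in atomic units
    """
    N = F.shape[-1]
    omega = 2.0 * np.pi * np.fft.rfftfreq(N, d=dt)
    # Discrete stand-in for the symmetric Fourier-transform convention used above.
    Ft = np.fft.rfft(F, axis=-1) * dt / np.sqrt(2.0 * np.pi)
    gt = np.fft.rfft(g, axis=-1) * dt / np.sqrt(2.0 * np.pi)
    # For real F and g: i(1 - P)[F(w) g(-w)] = -2 Im[F(w) g(w)^*].
    S = -2.0 * np.sum(np.imag(Ft * np.conj(gt)[:, None, :, :]), axis=(0, 1, 2))
    return omega, S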
In the electric-dipole approximation, f_1,m(r) = 1 and f_2,m(r) = 0, and in this case the spectral response function reduces to
S(ω) = 2 Im[ ⟨π⟩(ω)·A(ω)^* ],
or, equivalently,
S(ω) = -2 Im[ ⟨d⟩(ω)·E(ω)^* ],
in terms of the dipole operator d = -r. The latter expression, Eq. (<ref>), was used in
Refs. <cit.>.
We remark that Eqs. (<ref>) and (<ref>) are equivalent only if the Ehrenfest theorem, Eq. (<ref>),
is satisfied, i.e., for fully variational many-body wave function approximations, and in the limit of complete one-electron basis set.
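For completeness, a minimal sketch of the electric-dipole expression, evaluated from recorded time series of the induced dipole moment and the external field, could read as follows (illustrative code with our own names; the common FFT phase from the grid origin cancels in the product):

import numpy as np

def dipole_spectral_response(d_t, E_t, dt):
    """S(omega) = -2 Im[<d>(omega) . E(omega)^*] from sampled <d>(t) and E(t).

    d_t, E_t: arrays of shape (N, 3) on a common uniform time grid with spacing dt.
    """
    omega = 2.0 * np.pi * np.fft.rfftfreq(d_t.shape[0], d=dt)
    d_w = np.fft.rfft(d_t, axis=0) * dt / np.sqrt(2.0 * np.pi)
    E_w = np.fft.rfft(E_t, axis=0) * dt / np.sqrt(2.0 * np.pi)
    return omega, -2.0 * np.sum(np.imag(d_w * np.conj(E_w)), axis=1)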
For the visual presentation of spectra we use normalized spectral response functions
S̄(ω) = S(ω)/max(S_ref(ω)),
where S_ref is the spectral response function of some reference system.
§ NUMERICAL EXPERIMENTS
In order to test the multipole-expansion-free theory outlined above, we will investigate the following aspects:
* Reproducibility of results obtained within the electric-dipole approximation in the long wavelength limit:
Core-level pump-probe spectrum of LiH (section <ref>).
* Reproducibility of results obtained with low-order multipole expansions for short wavelengths:
Pre-K-edge quadrupole transitions in Ti (section <ref>).
* Intrinsically beyond-dipole phenomena:
Anisotropic circular dichroism (section <ref>).
§.§ Computational details
All simulations are performed with the open-source software Hylleraas Quantum Dynamics (HyQD) <cit.>.
We employ a series of nonrelativistic, closed-shell, spin-restricted time-dependent electronic-structure
methods based on a single reference Slater determinant
built from spin orbitals expanded in a fixed atom-centered Gaussian basis set. The orbital expansion coefficients are either kept constant (static orbitals)
at the ground-state Hartree-Fock (HF) level or allowed to vary in response to the external field (dynamic orbitals). Static orbitals are
used in the time-dependent configuration interaction singles (TDCIS) <cit.>,
time-dependent second-order approximate coupled-cluster singles-and-doubles (TDCC2) <cit.>,
and time-dependent coupled-cluster singles-and-doubles (TDCCSD) <cit.> methods. Dynamic orbitals are
used in the time-dependent Hartree-Fock (TDHF) <cit.>,
time-dependent orbital-optimized second-order Møller-Plesset (TDOMP2) <cit.>,
and orbital-adaptive time-dependent coupled-cluster doubles (OATDCCD) <cit.> methods.
Only the methods using dynamic orbitals are gauge invariant (in the limit of complete basis set) <cit.>.
No splitting of the orbital space is used in the OATDCCD method which, therefore, is identical to the nonorthogonal orbital-optimized
coupled-cluster doubles model <cit.>.
In the TDHF and TDOMP2 models the dynamic-orbital evolution is constrained to maintain orthonormality throughout, whereas in OATDCCD theory the
dynamic orbitals are biorthonormal <cit.>.
The methods can be roughly divided into three approximation levels.
The TDCIS and TDHF methods are the least computationally demanding ones (with formal scaling 𝒪(K^4), with K the number of basis functions)
and do not account for electron correlation.
The TDCCSD and OATDCCD methods are the most accurate and most expensive (𝒪(K^6)) methods with full treatment of double excitations.
Finally, the TDCC2 and TDOMP2 methods are intermediate in both accuracy and computational cost (𝒪(K^5)).
The TDCC2 method is a second-order approximation to the TDCCSD model, while the TDOMP2 model is the analogous second-order approximation to the
orbital-optimized coupled-cluster doubles model <cit.>.
The doubles treatment of TDOMP2 theory is essentially identical to that of TDCC2 theory but provides full orbital relaxation through
unitary orbital rotations instead of the singles excitations of static-orbital coupled-cluster theory.
Since fixed, atom-centered Gaussian basis sets are used, ionization cannot be described and, therefore, the simulations are
restricted to weak electromagnetic field strengths.
On the other hand, the fixed basis set allows us to compute matrix elements of the plane-wave interaction operators using
the OpenMolcas software package <cit.> via a Python interface implemented in
the Dalton Project <cit.>.
The remainder of the Hamiltonian matrix elements
and the ground-state HF orbitals are computed using the PySCF program <cit.> with the exception of the LiH system for which
the Dalton quantum chemistry package <cit.> was used.
The convergence tolerance for the HF ground states is set to 10^-10 a.u. for both the HF energy and the norm of the orbital gradients in the PySCF calculations, while the default value of 10^-6 a.u. on the HF energy was used in the Dalton calculations.
The basis sets were obtained from the Python library Basis Set Exchange <cit.>.
The systems are initially in the ground state which is calculated with ground-state solvers implemented in HyQD
for all the methods except the TDHF and TDCIS models, for which the ground-state wave function is computed using PySCF.
A convergence tolerance of10^-10is also used for the amplitude residuals in the ground-state coupled-cluster calculations.
The integration of the equations of motion is done with the symplectic Gauss-Legendre integrator <cit.>
of order six and with a convergence threshold on the residual norm of 10^-10 for the implicit equations.
The simulations are performed with the pulse defined in Eq. (<ref>). The laser pulse parameters
will be given for each system below.
In actual simulations, time-dependent functions such as F_ij,m(t) and g_j,m(t) are computed as
discrete time series, forcing us to use the fast Fourier transform algorithm.
To reduce the appearance of broad oscillations around the peaks due to spectral leakage,
we roughly follow the procedure used by <cit.>.
The simulation is started at time t < 0 when the first pulse is switched on and continued until
time t_max > 0 after the last pulse is switched off.
We then extend the recorded time series such that t_min = -t_max to obtain a symmetric time range
about t=0. To do so, we use that A(r,t) = 0 and hence V(t) = 0 in the time interval before the pulse is switched on.
We then multiply the resulting time series defined on the uniformly discretized time interval from t_min to t_max with the Hann function,
w_H(t) = cos^2(π t/2t_max),
before the fast Fourier transform is performed.
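A schematic version of this procedure for a single recorded series might look as follows (our own sketch; in practice all series entering the spectral response are processed identically, so that common phase factors cancel in the final products):

import numpy as np

def extend_window_transform(f_t, t_start, t_max, dt):
    """Symmetric extension and Hann damping of one recorded series before the FFT.

    f_t    : samples on the uniform grid t_start, t_start + dt, ..., t_max
    t_start: (negative) time at which the first pulse is switched on
    t_max  : end time of the simulation
    """
    n_pad = int(round((t_start + t_max) / dt))
    # Before the first pulse V(t) = 0, so the wave function is stationary and the
    # series can be padded backwards to t_min = -t_max with its initial value.
    f_ext = np.concatenate((np.full(n_pad, f_t[0]), np.asarray(f_t)))
    t = -t_max + dt * np.arange(f_ext.size)
    f_win = f_ext * np.cos(np.pi * t / (2.0 * t_max)) ** 2   # Hann function, Eq. (<ref>)
    omega = 2.0 * np.pi * np.fft.rfftfreq(f_ext.size, d=dt)
    return omega, np.fft.rfft(f_win) * dt / np.sqrt(2.0 * np.pi)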
§.§ Core-level pump-probe spectrum of LiH
The most common experimental methods for spectral analysis of attosecond interactions employ pump-probe setups.
Therefore, we start by simulating a pump-probe spectrum for LiH.
The K pre-edge features of Li are expected at less than 60 eV, corresponding to a wavelength of ∼200 Å. In the weak-field limit, the beyond-dipole effects are expected to be quite small, allowing us to compare
with the TDCCSD simulations carried out within the electric-dipole approximation
by <cit.>.
For the most part we follow the setup of <cit.>.
We start the TDCCSD simulations at t = -200 a.u. and end them at t_max = 5000 a.u. The pump pulse centered at t_1 = -40 a.u. has
a carrier frequency of 3.55247 eV and a maximum electric field strength of 0.01 a.u. (corresponding to a peak intensity of 3.51×10^12 W/cm^2), while the probe pulse centered at t_2 = 0 a.u. has a carrier frequency of 57.6527 eV and a maximum electric field strength of 0.1 a.u. (peak intensity 3.51×10^14 W/cm^2). Both pulses are linearly polarized in the z-direction (parallel to the molecular axis)
with zero carrier-envelope phases, and the propagation direction is along the x-axis. The beyond-dipole spectrum was generated using Eq. (<ref>) while Eq. (<ref>) was used to generate the dipole spectrum. The dipole simulation was done in velocity gauge to eliminate any gauge differences between the two simulations.
We note in passing that the intensity of the probe pulse is too strong to warrant the complete neglect of ionization processes, but to facilitate
comparison with the spectra reported in Ref. we choose to keep it.
The reference work used a Gaussian envelope on the electric field with root-mean-square width σ_1 = 20 a.u. for the pump pulse and σ_2 = 10 a.u. for the probe pulse.
Here, we instead use the trigonometric approximation in Eq. (<ref>) placed on the vector potential with
T_mn = π√(ln(2)) σ_m / arccos(2^{-1/(2n)}), m = 1,2,
and n = 19, which is the largest integer for which the pump pulse is strictly zero at t = -200 a.u. There are mainly three aspects of our simulations that will make our dipole spectrum different from that in Ref. <cit.>:
(1) placement of a trigonometric envelope on the vector potential rather than a Gaussian envelope on the electric field,
(2) simulating in velocity gauge instead of length gauge, and
(3) using Eq. (<ref>) rather than Eq. (<ref>) to generate the spectra.
The first point corresponds to effectively a different electric field component of the physical pulse.
This difference will diminish with increasing number of cycles in the pulses.
Both points (2) and (3) are due to lacking gauge invariance.
Illustrating the difference between the two pulse setups, Fig. <ref> shows the z-component of the electric field
at the origin, E_z(0,t), with a Gaussian envelope on the electric field and a trigonometric envelope on the vector potential.
The bottom panel shows the difference between the two pulse setups, and the contribution to the difference due to the trigonometric
approximation and due the placement of the envelope on the vector potential rather than on the electric field.
We see that the placement of the envelope is the dominating contribution, especially in the pump region.
The difference in the pump region is also more significant due to the smaller amplitude and consequently larger relative difference.
Fig. <ref> shows the TDCCSD dipole spectra generated with the two alternative setups.
The length-gauge spectrum is identical to that reported by <cit.>, and although differences are
visible on the scale of the plot, we conclude that the velocity-gauge spectrum conveys the same physics.
Acknowledging the differences between the two setups, we will now focus on the difference between the simulations with and without the dipole approximation.
Figure <ref> compares the pump-probe spectrum simulated in the dipole approximation and with a plane-wave operator generated with equations (<ref>) and (<ref>), respectively.
Evidently, beyond-dipole effects are utterly negligible in this case: The simulation with the plane-wave operator produces a spectrum with the same
transition frequencies as the velocity-gauge electric-dipole simulations, deviating by at most 4.5×10^-5, corresponding to 0.0087%, in relative intensity.
§.§ K pre-edge quadrupole transitions in Ti
For heavier elements, the bound core-valence excitations move up in energy and the shorter wavelengths become comparable to the “size” of the
atoms in terms of, e.g., covalent atomic radii. This implies that higher-order multipole effects become visible in high-resolution spectra.
The K-edge of Ti is expected at just below 5000 eV. This corresponds to a wavelength of roughly 2.5 Å, which is comparable to
the covalent radius of Ti (1.60 Å <cit.>). Consequently, one can expect visible beyond-dipole effects even in the low-intensity limit.
We consider the Ti^4+ ion and the TiCl_4 molecule. In the Ti^4+ ion the 1s → 3d transition is
dipole forbidden but quadrupole allowed. In TiCl_4 the tetrahedral symmetry splits the 3d orbitals into groups of two e orbitals and three t_2 orbitals. The 1s → e transition is dipole-forbidden but quadrupole-allowed,
while the 1s → t_2 transition attains a dominant electric-dipole contribution due to 4p–3d mixing.
Experimentally <cit.>, a broad peak around 4969 eV in the X-ray absorption spectrum of TiCl_4 has been assigned to the 1s → t_2 and 1s → e transitions, with most of the intensity stemming from the former.
In the implementation presented in this paper,
electric-quadrupole and other higher-order contributions from the electromagnetic field should automatically be accounted for.
For both the Ti^4+ and TiCl_4 systems, we perform simulations with a 10-cycle pulse with n=2 for the envelope, Eq. (<ref>), carrier frequency 181 a.u. (4925.26 eV),
and carrier-envelope phase γ = 0.
The duration of the simulation is 100 a.u. for Ti^4+, while for TiCl_4 we use a total simulation time of 600 a.u. to ensure
a reasonable resolution of the splitting of the d-orbitals.
The electric-field strength is E_1 = 0.01 a.u. (peak intensity 3.51×10^12 W/cm^2) and the time step is Δt = 2.5×10^-4 a.u. Linearly polarized along the x-axis, the pulse is propagated along the z-axis (parallel to one of the four Ti–Cl bonds in the case of TiCl_4).
All Ti^4+ spectra are normalized relative to the maximum peak in the TDCCSD spectrum.
We first consider the 1s → 4p and 1s → 3d transitions of Ti^4+, which have been studied recently at the
equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) level of theory by
<cit.> using multipole expansion up to electric octupole/magnetic quadrupole terms, for the full second-order contribution in the "mixed" length and velocity gauge <cit.>, in the framework of the Fermi golden rule.
In order to compare with their results, we use the ANO-RCC-VDZ basis set <cit.>.
Figure <ref> displays the K pre-edge spectrum obtained for Ti^4+ with the TDCC2, TDOMP2, TDCCSD, and OATDCCD methods,
showing also the transition frequencies obtained by <cit.>.
To within the spectral resolution of the simulation, the TDCCSD method predicts the same transition frequencies as
the static EOM-CCSD method, as expected. The intensity of the dipole-allowed 1s → 4p transition
is very nearly the same both with and without the dipole approximation.
The orbital-adaptive methods yield roughly the same intensity profiles as their static-orbital counterparts, but the transition
frequencies are blue-shifted: ∼0.5 eV for TDOMP2 versus TDCC2 and ∼2 eV for OATDCCD versus TDCCSD.
As has been observed previously <cit.>, these blue-shifts are insignificant compared with other
sources of error such as basis-set incompleteness and higher-order correlation effects.
Electron-correlation effects are significantly more important than the orbital relaxation provided by dynamic orbitals,
as seen in Fig. <ref> where the TDCCSD spectrum is compared to the spectra obtained with the TDHF and TDCIS methods.
While the TDHF and TDCIS simulations produce virtually identical spectra,
electron correlation causes a red-shift of the transition frequencies by roughly 8 eV.
The TDHF and TDCIS intensities are comparable to but slightly higher than the TDCCSD ones.
The main source of error, besides relativistic effects, is the choice of basis set: Changing from the ANO-RCC-VDZ basis set
to the cc-pVTZ basis set increases the EOM-CCSD transition frequencies by more than 28 eV <cit.>.
Since we are not aiming at prediction or interpretation of experimental results in this work, we study the TiCl_4 K pre-edge spectrum using the
most affordable TDCIS method with the ANO-RCC-VDZ basis set. The TDCIS spectrum is shown in
Fig. <ref>.
The dipole-forbidden 1s → e transition
is visible at 4941.50 eV, roughly 1.5 eV below the dipole-allowed 1s → t_2 transition at 4942.99 eV.
The TDCIS frequencies are blue-shifted by approximately 12 eV relative to the EOM-CCSD results reported by
<cit.>.
The 1s → t_2 transition has a
slightly higher intensity with the plane-wave interaction operator than with the dipole interaction operator.
It should also be noted, however, that the intensities of the dipole-allowed transitions typically are
slightly higher with the dipole approximation and, therefore, one should be careful using the dipole result as a reference for evaluating the quadrupole contribution. The deviation may be caused by a difference in the quality of the operator representation or the wave function, which may occur when propagating with different operators in a finite basis set.
§.§ Anisotropic circular dichroism
Circular dichroism (CD)—the difference in absorption of left and right circularly polarized radiation exhibited by chiral molecules—is
a particularly interesting case to test the implementation of the beyond-dipole interaction, since the observed effect cannot be explained within
the electric-dipole approximation. At least electric quadrupole and magnetic dipole terms must be included <cit.>
and, consequently, the differential absorption is weak compared with linear, electric-dipole absorption.
Chiroptical spectroscopies, including CD, are important for determining the absolute configuration of chiral molecules
and core-resonant CD is particularly well suited to gauge local molecular chirality <cit.>.
As Eq. (<ref>) was derived assuming complex polarization vectors, the implementation presented here can easily be used to
generate spectra involving pulses with circular (or, more general, elliptical) polarization, including at short wavelengths.
As alluded to above, the leading contributions to a CD spectrum arise from the magnetic-dipole and electric-quadrupole terms in the multipole expansion
of the vector potential. In an isotropic sample, the quadrupole contribution vanishes since the electric dipole–electric quadrupole component of the
rotatory strength tensor is traceless <cit.>. As a prototypical example which previously has been used to test new implementations of
CD spectra <cit.>, we will consider the H_2O_2 molecule in a chiral conformation
with fixed orientation relative to the external laser pulse.
The CD spectrum is calculated as the difference between the spectral response functions of two distinct simulations:
one with left circular polarization and one with right circular polarization of the pulse. We define the normalized differential absorption as
S_l-r(ω) = S_l(ω) - S_r(ω)
where S_l(ω) and S_r(ω) are the normalized spectral response functions for the left and right circularly polarized pulses.
The molecular geometry of H_2O_2, depicted in Fig. <ref> along with the Cartesian axis definitions,
is taken from Ref. . The Cartesian coordinates can be found in the supplementary material.
We choose the polarization vectors such that u^l + u^r = ĵ, where ĵ is a unit vector aligned with the C_2 axis and
superscripts r and l refer to right and left circular polarization, respectively, as seen from the source.
We run two pairs of simulations with the propagation direction along the x-axis and along the z-axis.
For the propagation direction along the x-axis we use u^r = (0,1,i) and u^l = (0,1,-i),
and for the propagation direction along the z-axis we use u^r = (-i,1,0) and u^l = (i,1,0).
We use a carrier frequency in the K-edge region of oxygen, ω = 20 a.u. (544.23 eV),
and carrier-envelope phase γ = 0. The duration of the
laser pulse is 10 optical cycles and the trigonometric envelope is defined with n=2, which corresponds to τ = 1.14 a.u.
The electric-field strength is E_1 = 0.01 a.u. (peak intensity 3.51×10^12 W/cm^2).
The time step is Δt = 0.005 a.u. and the total simulation time is 1000 a.u. We use the TDHF, TDCIS, TDCC2, TDOMP2, and TDCCSD methods with the cc-pVDZ basis set <cit.>, and
the spectra for propagation direction along the x- and z-axes are normalized with respect to the corresponding TDCIS simulation.
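For reference, the polarization setup and the normalized differential absorption can be sketched as follows (variable names are ours; the vectors are those quoted above for propagation along the x-axis):

import numpy as np

# Polarization vectors for propagation along x; transversality u . k = 0 in Coulomb gauge.
u_r = np.array([0.0, 1.0,  1.0j])
u_l = np.array([0.0, 1.0, -1.0j])
k_hat = np.array([1.0, 0.0, 0.0])
assert abs(np.dot(u_r, k_hat)) < 1e-12 and abs(np.dot(u_l, k_hat)) < 1e-12

def differential_response(S_l, S_r, S_ref):
    """Normalized differential absorption: S_l/max(S_ref) - S_r/max(S_ref)."""
    return (np.asarray(S_l) - np.asarray(S_r)) / np.max(S_ref)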
The resulting CD spectra are plotted in Figs. <ref> and <ref>.
As in the Ti^4+simulations above, we see that the TDCIS and TDHF methods produce nearly identical CD spectra
with minor visual differences. The TDCC2 and TDOMP2 methods also yield similar CD spectra, producing the same sign pattern of the differential absorption peaks,
although the TDOMP2 peak positions are slightly more red-shifted than the TDCC2 ones relative to the TDHF peaks. The intensities of the TDCC2 and TDOMP2
spectra are significantly reduced compared with the TDHF and TDCIS spectra.
The TDCCSD method shifts the transition frequencies somewhat but produces an intensity of the dominant peak around 561-562 eV, which is closer to that of TDHF theory than the TDCC2 and TDOMP2 methods. Although this may indicate that high-level electron-correlation treatment is important, the deviation may also be caused by limited frequency resolution (see below).
Of course, the choice of carrier frequency will affect the relative peak magnitudes but further tests have shown that this effect is rather marginal
as long as ω is reasonably close to the transition energies.
Figure <ref> shows the CD spectrum obtained from the TDCIS simulations along with a stick spectrum calculated
from the rotatory strength tensors <cit.> computed by full diagonalization of the CIS Hamiltonian matrix.
For both propagation directions, the stick spectrum is normalized such that the maximum peak is equal to the maximum peak from the corresponding TDCIS simulation.
Since the carrier frequency is 544.23 eV, it is expected that the peaks of the stick spectrum are smaller than the simulated peaks to the immediate left of the
dominant peak, and larger further to the right of the dominant peak. This is indeed what we observe
in the bottom panel of Fig. <ref>.
In the top panel, however, this is not the case. This can be ascribed to insufficient convergence.
The excited states of H_2O_2 in the C_2 geometry come in pairs, typically separated by 0.01 eV or less, formed by the lowering of symmetry relative to a
planar, achiral (cis or trans) structure.
For propagation in the x-direction, the CD contributions from the two states in a pair are of about the same magnitude but with opposite signs, causing a lowering of the peak intensities.
Figure <ref> shows the effect of increasing the simulation time from 1000 a.u. to 7500 a.u. The change in the
bottom panel is relatively minor, while the dominant peak in the top panel has increased by an order of magnitude.
This is closer to the expected difference calculated from rotatory strength tensors. However, the peak at 567 eV is still much suppressed,
which is caused by the states only being separated by about 0.0088 eV.
An overview of the occupied orbitals and the 11 lowest-lying virtual orbitals is given in Table <ref>.
The core orbitals, 1σ_s and 1σ_s^*, are separated by 7.3483×10^-3 eV and, hence, excitations from either of the core orbitals to
low-lying virtual orbitals will fall in the K pre-edge region.
The TDCIS spectrum contains 5 main peaks below 580 eV along with three smaller ones at 553.44 eV, 560.24 eV, and 576.82 eV.
The first peak at 546.48 eV can be viewed as a transition to virtual orbitals 5B and 6B.
The main peak in Fig. <ref> at 569.63 eV contains significant excitations to orbitals 8A, 9A and 9B,
which are orbitals with significant π character, and with electron density mostly located on the oxygen atoms.
The main peak in Fig. <ref> at 573.97 eV is mainly due to excitations to the 7B and 8B (and somewhat to 10A) orbitals.
Finally, noting that the cc-pVDZ basis set is insufficient for accurate predictions of CD spectra in general—see, e.g., Ref. —
we compare the TDCIS spectra with those obtained with larger basis sets in Fig. <ref>.
As expected, the basis-set effect is significant.
Going from double-zeta to triple-zeta basis retains some of the main features but the energies are red-shifted, whereas
the inclusion of diffuse orbitals in the aug-cc-pVDZ basis set leads to a much more radical change of the underlying dynamics due to a higher density of
excited states in the energy region around the carrier frequency.
More accurate predictions of transient CD spectra, especially with the higher-level TDCC methods, clearly require larger basis sets including diffuse functions.
§ CONCLUDING REMARKS
We have derived a gauge invariant expression for the spectral response function which is applicable to transient absorption and emission spectra.
This expression is applicable both within and beyond the electric-dipole approximation. Using an enveloped plane-wave vector potential to formulate
the semiclassical matter-field interaction operator, simulations of laser-driven many-electron dynamics with a fixed atom-centered Gaussian basis set
can be straightforwardly carried out with no additional cost compared with the analogous electric-dipole simulations.
Numerical experiments show that beyond-dipole effects are fully captured without explicit multipole expansions, and that electric-dipole results are
correctly reproduced in the long wavelength limit. Circular (or, more general, elliptical) polarization is easily handled, as illustrated by
preliminary simulations of anisotropic transient X-ray circular dichroism spectra.
Aimed at electronic ground and bound excited states, fixed atom-centered Gaussian basis sets do not support electronic continuum states and, consequently,
we have only considered low-intensity laser fields in this work. We are currently extending the approach presented here to more flexible bases that allow us
to study highly non-linear processes such as core ionization where the magnetic component of the electromagnetic field may play a decisive role.
§ SUPPLEMENTARY MATERIAL
The supplementary material contains the molecular geometries of LiH, TiCl_4, and H_2O_2, and a brief analysis of the differences
between pulses with envelopes defined on the electric field and on the vector potential.
§ ACKNOWLEDGMENT
This work was supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262695.
The calculations were performed on resources provided by Sigma2—the National Infrastructure for High Performance Computing and
Data Storage in Norway, Grant No. NN4654K.
SK and TBP acknowledge the support of the Centre for Advanced Study in Oslo, Norway, which funded and hosted the
CAS research project Attosecond Quantum Dynamics Beyond the Born-Oppenheimer Approximation during the academic year
2021-2022. RL acknowledges the Swedish Research Council (VR, Grant No. 2020-03182) for funding.
|
http://arxiv.org/abs/2307.00986v1
|
20230703130546
|
Effect of the cross-section architecture on the impact resistance of bio-inspired low-porosity structures using neural networks
|
[
"Shashank Kushwaha",
"Junyan He",
"Diab Abueidda",
"Iwona Jasiuk"
] |
cs.CE
|
[
"cs.CE"
] |
]Shashank Kushwaha^1
]Junyan He^1
]Diab Abueidda^2
]Iwona Jasiuk^1mycorrespondingauthor
^1 Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
^2 National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL, USA
[mycorrespondingauthor]Corresponding author
[email protected]
Biological structural designs in nature, like hoof walls, horns, and antlers, can be used as inspiration for generating structures with excellent mechanical properties. A common theme in these designs is the small percent porosity in the structure ranging from 1 - 5%. In this work, the sheep horn was used as an inspiration due to its higher toughness when loaded in the radial direction compared to the longitudinal direction. Under dynamic transverse compression, we investigated the structure-property relations in low porosity structures characterized by their two-dimensional (2D) cross-sections. A diverse design space was created by combining polygonal tubules with different numbers of sides placed on a grid with different numbers of rows and columns. The volume fraction and the orientation angle of the tubules were also varied. The finite element (FE) method was used with a rate-dependent elastoplastic material model to generate the stress-strain curves in plane strain conditions. A gated recurrent unit (GRU) model was trained to predict the structures' stress-strain response and energy absorption under different strain rates and applied strains. The parameter-based model uses eight discrete parameters to characterize the design space and as inputs to the model. The trained GRU model can efficiently predict the response of a new design in as little as 0.16 ms and allows rapid performance evaluation of 128000 designs in the design space. The GRU predictions identified high-performance structures and four design trends that affect the specific energy absorption were extracted and discussed.
§ GRAPHICAL ABSTRACT
< g r a p h i c s >
Bio-Inspired Structure-property relations Neural networks Specific energy absorption
§ INTRODUCTION
Lightweight structures with high energy absorption capacity are of high interest for multiple engineering applications. Various structural elements found in animals and plants could be used as inspiration to design novel structures that can sustain impacts generated during collision <cit.>. The process of evolution has created complex architectures in nature capable of handling low-to-medium velocity impacts (up to 50 m/s). An example is the trabecular-honeycomb biomimetic structure inspired by beetle elytra <cit.>. Rams see impact velocities of around 5.5 m/s when fighting. Also, sheep horn can withstand a maximum impact force of 3400N <cit.> during collisions.
Biological structures exhibit excellent energy absorption capabilities and inspire the design of new energy absorbers. Bio-inspired structures have been used in countless applications, including automobiles <cit.>, protective armors <cit.>, and wings of aircraft <cit.>. Further, a variety of materials have been used to manufacture bio-inspired structures, including polymers <cit.>, aluminum alloys <cit.>, fiber-reinforced composites <cit.>, and concrete <cit.>. Hence, studying the structure-property relations of bio-inspired designs is of great research and industrial interest. The exploration of structure-property relations involves surveying many different structural features at a given loading condition. Various studies utilize optimization-based methods to generate new designs for energy absorption and study the structure-property relations <cit.>. However, a systematic compilation of bio-inspired designs' mechanical response and energy absorption characteristics is lacking. This paper aims to develop a systematic framework to generate structures that combine different design elements found in low-porosity structures in nature, i.e., structures with aligned tubules whose porosity is in the range of 1%-5%, and to study them under transverse dynamic compression. The framework generates low-porosity structures with constant cross-sections along the thickness direction by randomly combining various design features such as tubule shape, orientation, and in-plane arrangement.
Neural network (NN) models have been extensively used in the field of mechanics to predict stress-strain response in composites <cit.>, metals <cit.>, and lattices <cit.>. However, the use of NN models for studying bio-inspired structures remains scarce. Once trained, the NN can efficiently predict the mechanical performance of new designs at a rate much faster than classical numerical simulations, thus allowing rapid preliminary design selection and trend identification. Therefore, the second objective of this work is to develop a neural network (NN) model to approximate the structure-property relations, linking the input design parameters with loading conditions and the mechanical performance of the structure. Using this trained model, structure-property maps of the design space at different loading rates are identified, and design trends are discussed.
This paper is organized as follows: sec:method presents an overview of the numerical simulations, the input data preprocessing, and the NN model's architecture. sec:results includes the results obtained from the study, and explores the quality of NN predictions and the validity of the results. sec:conc summarizes the outcomes and lists some possible future directions for the bio-inspired structures.
§ METHODS
§.§ Geometry generation and Finite element analysis
The designs considered in this work are 3D structures containing tubules with a constant cross-section; hence the designs can be uniquely characterized by their 2D, in-plane cross-sections, and the plane strain condition is assumed. A Python script was developed to generate cross-sectional sketches in the finite element (FE) analysis package Abaqus <cit.> for a given volume fraction, tubule shape, tubule orientation, and the arrangement of the tubules within the structure. The cross-section of the bio-inspired structures studied in this work is an 11-by-11 mm^2 square, whereas all the tubules are confined within a concentric square area of 10-by-10 mm^2. The tubule volume fraction was uniformly sampled from the range [1% , 5%]. In this work, we approximated the tubule cross-sections by polygons of a different number of sides that were uniformly sampled from the range [3, 6]. Additional rotation was applied to the cross-sections, and the rotation angle was uniformly sampled from the range [0, 360] degrees. Multiple tubules can be present in the structure, and we placed them on a n_y × n_x grid, where n_y and n_x denote the number of rows and columns, respectively. n_y and n_x were sampled in the range [1, 8]. Some selected structures in the design space are shown in sample_designs. All the structures were discretized with 4-node bilinear plane strain quadrilateral elements with reduced integration. A nominal element edge length of 0.24 mm was chosen for meshing.
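A schematic version of the sampling step in the geometry-generation script could look as follows (a sketch under the ranges stated above; the dictionary keys are illustrative and do not correspond to variable names in the actual script):

import numpy as np

rng = np.random.default_rng(0)

def sample_design():
    """Draw one design from the parameter ranges stated above."""
    return {
        "n_sides": int(rng.integers(3, 7)),           # polygon sides of the tubule cross-section
        "n_x": int(rng.integers(1, 9)),               # number of tubule columns on the grid
        "n_y": int(rng.integers(1, 9)),               # number of tubule rows on the grid
        "angle_deg": float(rng.uniform(0.0, 360.0)),  # rotation applied to the tubules
        "vf_pct": float(rng.uniform(1.0, 5.0)),       # tubule volume fraction in percent
    }

designs = [sample_design() for _ in range(4500)]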
The relationship between different structural designs and energy absorption mechanisms seen in bones, teeth, and horns is discussed by McKittrick et al.<cit.>. Further, they discuss that when rams butt heads, the horns are loaded in the transverse direction, which provides more energy absorption than in the longitudinal direction. The Abaqus/Explicit dynamic simulation defined a rate-dependent elastic-plastic material model to capture the structures' response at varying strain rates. The material properties of the base material chosen for the study are similar to polycarbonate-acrylonitrile butadiene styrene (PC-ABS). The Young's modulus and Poisson's ratio are 2.5 GPa and 0.35, respectively. The strain rate dependent yield stress versus plastic strain curves used to define the plastic region are included in material_model. However, the strains to failure are tremendous in horns, as much as 80%. Hence, no damage model has been used in this study since the maximum nominal strain considered is 25%.
In this study, the boundary conditions for impact loading were approximated by sandwiching the structure between two rigid plates, and the structures were subjected to dynamic transverse compression. The bottom plate was held fixed, and the top plate traveled downward with a constant velocity determined by the user-defined strain rate. The nominal strain rate was uniformly sampled from the range [0.9, 90.9] s^-1 corresponding to indenter velocity from the range [10, 1000] m/s. The reaction force and displacement were measured at the top rigid plate. All sidewalls were traction-free and were free to deform. All simulations had a constant final displacement of 2.25 mm, corresponding to 25% nominal compressive strain along the y-axis. The reaction force and displacement at the top plate, plastic dissipation, and elastic strain energy of the porous structures were outputs of the FE simulations. cae depicts the FE model assembly and a typical deformed structure at the end of dynamic compression.
A total of 4500 simulations were conducted on an AMD Ryzen 7 5800H processor with 8 cores. Depending on the applied impact velocity, each simulation took about 5-30 minutes to complete.
§.§ Neural network for sequence prediction
§.§.§ Input data, data augmentation, and loss function
The input parameter range is described in sec:fe_sim. The corresponding output arrays were obtained from the impact simulations conducted in Abaqus/Explicit. The output arrays were down-sampled to 50 time steps for the efficiency of neural network training. The inputs used in the model consist of eight temporal information arrays. The first five arrays are constant in time and correspond to the parameters used to define the structure's geometry. The parameters include n: the topology of the tubule (i.e., number of sides in a polygon), n_ x: number of tubules evenly distributed in the x direction, n_ y: number of tubules evenly distributed in the y direction, a_ o: rotation angle for all the tubules in the structure, and v_ f: volume fraction of the individual tubule in each cell of the n_ y times n_ x grid within the 10-by-10 mm^2 area. The remaining three inputs are physics-informed temporal arrays described as follows:
* Current time value at each output time point.
* Nominal compression strain at each output time point.
* Nominal compression strain rate.
A standard scaler in Scikit-Learn normalized all the inputs <cit.> before training. The scaler was fitted only to the training data points to avoid information leakage <cit.>. The available training data was increased using data augmentation. Corresponding to each simulation conducted in Abaqus with 25% final nominal strain, twenty final nominal strains in the range [10%,25%] were randomly sampled, and all inputs and outputs were linearly interpolated to the selected final strain level. This method generated training data points at the same strain rate but different final nominal strain, and increased the total number of input data points from 4500 to 90000. These data points were divided into training (65%), validation (15%), and testing datasets (20%).
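The augmentation step can be sketched as follows (illustrative NumPy code; the array names are ours, and one call operates on a single FE simulation with final nominal strain 0.25):

import numpy as np

rng = np.random.default_rng(1)

def augment(time, strain, outputs, n_aug=20):
    """Interpolate one simulation to randomly chosen smaller final strains.

    time, strain: 1D arrays (50 output points); outputs: 2D array of shape (50, n_outputs).
    """
    samples = []
    for _ in range(n_aug):
        eps_f = rng.uniform(0.10, 0.25)               # randomly sampled final nominal strain
        t_f = np.interp(eps_f, strain, time)          # time at which eps_f is reached
        t_new = np.linspace(time[0], t_f, len(time))  # keep 50 points per sample
        out_new = np.column_stack(
            [np.interp(t_new, time, outputs[:, k]) for k in range(outputs.shape[1])]
        )
        samples.append((t_new, np.interp(t_new, time, strain), out_new))
    return samples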
The mean absolute error (MAE) has been employed as the loss function in this study <cit.>. The loss function is defined as:
MAE = ∑^N_i=1 |Y_i - Ŷ_i| / N ,
where N,Y_i,Ŷ_i denote the number of training data points, ground-truth outputs, and the NN predictions, respectively. The mean squared error (MSE) is chosen as a metric, which is defined as:
MSE = ∑^N_i=1 (Y_i - Ŷ_i)^2 / N .
§.§.§ Neural network model
This study uses a recurrent neural network (RNN) model to train the forward model for output prediction. Specifically, the gated recurrent unit (GRU) model is used. This model has been widely used to predict sequences <cit.>. Further, Abueidda et al.<cit.> compared the performance of different RNN models to predict the response of elastoplastic material undergoing deformation under variable strain rates. Although the GRU model is more computationally expensive than the long short-term memory (LSTM) model and the temporal convolutional network (TCN) model, it predicts the output with lower error. Based on the GRU model's demonstrated capabilities to predict the structures' response under complex deformation histories, this study used the model to predict stress-strain curves for the structures under dynamic transverse compression.
The GRU-based model was implemented and tested in Keras <cit.> with a TensorFlow <cit.> backend. The GRU model comprises three stacked layers of 500 GRU units, each with hyperbolic tangent (tanh) activation leading to a model with 3.77 million trainable parameters. The loss function was minimized using an Adam optimizer <cit.> with an initial learning rate of 1×10^-3. The model was trained for 150 epochs with a batch size of 600, and training was repeated 10 times to obtain average training time and model accuracy. The data set was shuffled and partitioned in each training repetition, as described in sec:data_loss. All training was conducted on Google Colab Pro+ using GPU acceleration on Tesla V100 GPU.
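A minimal Keras sketch of this architecture is given below. The output head (a time-distributed dense layer with three channels, e.g., stress, plastic dissipation, and elastic strain energy) is an assumption made for illustration and may differ from the actual implementation.

import tensorflow as tf

def build_gru_model(n_steps=50, n_features=8):
    """Three stacked GRU layers of 500 units with tanh activation, as described above."""
    model = tf.keras.Sequential([
        tf.keras.layers.GRU(500, activation="tanh", return_sequences=True,
                            input_shape=(n_steps, n_features)),
        tf.keras.layers.GRU(500, activation="tanh", return_sequences=True),
        tf.keras.layers.GRU(500, activation="tanh", return_sequences=True),
        # Hypothetical output head: one value per output channel and time step.
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(3)),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mean_absolute_error",
                  metrics=["mean_squared_error"])
    return model

model = build_gru_model()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=150, batch_size=600)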
§.§ Global optimization
Using the trained neural network, a Python script was developed to traverse the input design space and evaluate the energy absorption performance. The input design space was divided into grid points based on the first five input parameters described in sec:data_loss. Each grid point represents a unique structure within the input design space based on five input parameters. The specific energy absorption (SEA) was computed for each grid point by calculating the area under the load-displacement curve (calculated from the GRU model predictions). Three design parameters: side of the polygon, n_ x and n_ y could take discrete integer values within their respective input range, whereas volume fraction and angle offset were divided into 40 and 20 equally spaced intervals, respectively. Hence, this method was used to analyze the SEA for 128000 structures within the input design space. However, it should be noted that only the designs with non-intersecting tubules are considered valid. Other designs are excluded from the analysis. This process was repeated for five different values of v_ y within the range described in sec:method. A similar process can be repeated at different equally spaced intervals to obtain the performance of all the structures in the input design space for a given set of final strain and velocity of the indentor (v_ y).
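The grid search can be sketched as follows (illustrative Python; tubules_intersect and predict_with_gru are hypothetical helpers standing in for the geometric validity check and the trained GRU model, and the grid spacings follow the intervals stated above):

import itertools
import numpy as np

def specific_energy_absorption(displacement, force, mass):
    """SEA: area under the predicted load-displacement curve divided by the structure mass."""
    return np.trapz(force, displacement) / mass

design_grid = itertools.product(
    range(3, 7),                        # number of polygon sides
    range(1, 9),                        # n_x: tubule columns
    range(1, 9),                        # n_y: tubule rows
    np.linspace(1.0, 5.0, 40),          # tubule volume fraction (%)
    np.linspace(0.0, 360.0, 20),        # tubule rotation angle (deg)
)

# for design in design_grid:
#     if tubules_intersect(design):                 # hypothetical validity check
#         continue
#     d, f, m = predict_with_gru(design)            # hypothetical call to the trained GRU
#     sea = specific_energy_absorption(d, f, m)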
§ RESULTS AND DISCUSSION
§.§ Predicting stress-strain curves and energy outputs
The number of input data points used in training was decided based on the prediction accuracy measured using the value of the loss function. In this study, the percentage of total input data was incremented to train the neural network model until similar prediction accuracy was observed. Further, the average response of the GRU model was measured by training the model 10 times after shuffling the data before each training iteration. The loss function value corresponding to the increasing amount of training data is shown in fig:pct_plot. Further, a typical training history is also presented in fig:loss. The average training and inference times for the GRU model and the average FE simulation time are reported in time_comp.
After training the NN, the NN predictions were compared to the ground truths obtained from FE simulations, ranked by the percentile of MAE for each output array. The model with median MAE among the 10 training repetitions was used to generate the plots shown in prediction_comparison. The final MAE for this model is 6.07× 10^-3.
The amount of data required for training was chosen by checking the loss function value for different percentages of input data. fig:pct_plot shows that the loss increases as the percentage of the input data is decreased concerning the reference (80% data). Hence, we chose 80% data as input for training. Further, it could be inferred from fig:loss that no major overfitting has occurred. The statistical distribution of MAEs is shown in prediction_comparison. From the first three columns, up to 75% percentile, we could see that the GRU model can closely predict the FE simulation results for stress-strain curves, plastic dissipation, and elastic strain energy. Even in the worst case, the GRU model correctly predicts the general shape of the FE-simulated stress-strain curve.
In the current study, the cross-section image of the structure has been parameterized using five design variables. These variables are then used as inputs in the GRU model. Another valid approach is to encode the cross-sectional images of the design via an autoencoder before training the GRU model. This approach was used in the work of He et al.<cit.> for exploring the structure-property relations of thin-walled lattices. However, training the autoencoder can take additional computational resources and is unnecessary when discrete parameter values can parameterize the current design space. Hence in this work, we used the design parameters to describe the designs instead of the autoencoder. However, judging from the comparison with FE data shown in prediction_comparison, the prediction accuracy is high even with the simplified approach.
§.§ Validation of the neural network predictions
The worst and best designs (as predicted by the trained GRU model) at two different impact velocities (10 and 100 m/s) were validated by FE simulations to check the accuracy of the GRU model predictions. The four designs are depicted in validatation_designs.
FE simulations were conducted to obtain the ground-truth values of SEA under an applied plate velocity of 10 m/s (cases (a) and (b)) and 100 m/s (cases (c) and (d)) and a final axial strain of 0.25. The comparison of the FE-simulated and GRU-predicted SEA values are shown in validation_sea. As can be seen from the results, the trained GRU is highly accurate for the two impact velocities tested, and the predicted SEA values fall within 5% of their respective ground truth values. This result provides confidence in applying the trained model for further inference tasks.
§.§ Structure-SEA map
The Python script described in sec:opt was used to calculate SEA from the stress-strain curve predicted by the NN at each design point. Each structure could be represented by a unique design index defined using the first five input parameters to the NN as described in sec:data_loss. Finally, the scatter plots for SEA at each design surveyed in the grid search for two different impact velocities are plotted in SEA_map and SEA_map_10, which show a structure-property map for this chosen design space.
Using the scatter plot shown in SEA_map, we could identify the best and worst designs regarding specific energy absorption within the input design space for the given loading condition and final strain. These two points are also highlighted in the SEA_map. Further, the same Python code described in sec:opt could be used to plot the SEA for structures with various constraints. For example, SEA_4.5_5 shows the distribution of SEA for structures with a volume fraction of porosity between 4.5% and 5%.
§.§ Design trends and observations
The structure-energy absorption maps shown in sec:SEA_map are useful for obtaining an overview of the entire design space. However, additional design insights could be drawn from the map to guide future design work:
* At the same volume fraction of porosity within the structure, final strain, and indenter velocity exceeding 100 m/s, arranging the pores vertically results in optimal energy absorption. By contrast, the lowest energy absorption is achieved when the porosity is concentrated at the center. The structure illustrated in Stress_max_SEA emerged as the most efficient design for SEA, according to the SEA map depicted in SEA_map. Conversely, the structure in Stress_min_SEA demonstrated the lowest SEA.
The structure with maximum SEA (Stress_max_SEA) has a porosity of close to 1%, whereas the one with minimum SEA (Stress_min_SEA) has a porosity closer to 5%. In both instances, we observe a higher stress band that originates at the structure's corners and radiates toward its center during compression. In essence, the presence of material in areas of high stress is crucial for achieving a higher SEA. In the case of the structure in Stress_max_SEA, only a few pores are present within the high-stress region. On the other hand, the structure illustrated in Stress_min_SEA has its entire porosity at the center, resulting in diminished load-carrying capacity and a lower SEA.
* The orientation of the polygonal tubules significantly impacts energy absorption in low-porosity structures. This can be observed in orientation_trend, which illustrates two structures with the same volume fraction of square-shaped porosity but different angle offsets. When subjected to similar loading conditions, stack_high_SEA exhibits 4% higher energy absorption than stack_low_SEA, as validated by FE simulations. The GRU-predicted trend of how the tubule orientation angle affects the SEA is shown in trend_orient for square porosity. The prediction shows a sinusoidal variation, which is reasonable, as the top-down projected load-bearing area (the area unaffected by porosity) varies sinusoidally.
* The structures with maximum and minimum SEA depend on the volume fraction of the porosity. They are also affected by the strain rate and the orientation of the polygonal porosity, as shown in validatation_designs. For example, the red marks in SEA_map, SEA_map_10, and SEA_4.5_5 indicate different structures (design indices) with maximum and minimum SEA.
* The Pearson correlation coefficient was calculated to assess the relationship between SEA and the different geometric parameters; a minimal sketch of this computation is given after this list. For both impact velocities, there is a strong negative correlation between SEA and volume fraction, indicating that increasing the porosity volume fraction generally decreases SEA. The correlation coefficients for the angle offset are close to zero, consistent with the sinusoidal nature of the trend observed in orientation_trend; nevertheless, the orientation of the pores was found to cause a significant variation in SEA, a difference of close to 4%. Hence, correlation analysis alone can be misleading, and exploratory grid searches are needed to identify the select designs that exhibit a high SEA. The number of pores in the x-direction is negatively correlated with SEA, while the number of pores in the y-direction is positively correlated. Only minor correlations are observed for the remaining variables.
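The correlation analysis referenced in the last item reduces to computing Pearson coefficients between each geometric parameter and the predicted SEA values. A minimal sketch is shown below; the array contents are randomly generated placeholders standing in for the tabulated designs and GRU predictions, not the actual study data.

import numpy as np

rng = np.random.default_rng(0)
params = rng.random((1000, 5))   # placeholder: one row per design, five geometric parameters
sea = 1.0 - 0.8 * params[:, 4] + 0.05 * rng.standard_normal(1000)  # placeholder SEA values

names = ["pore_shape", "angle_offset", "pores_x", "pores_y", "vol_fraction"]
for j, name in enumerate(names):
    r = np.corrcoef(params[:, j], sea)[0, 1]  # Pearson correlation coefficient
    print(f"{name}: r = {r:+.3f}")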
§ CONCLUSIONS AND FUTURE WORK
In this work, a combinatorial framework was developed to generate bio-inspired low-porosity designs with tubules of various shapes, orientations, and in-plane arrangements. The structures were made from PC-ABS with rate-dependent elastoplastic behavior. FE simulations were conducted to obtain the stress-strain curves of the structures at different impact velocities during transverse loading. Using the FE simulation data, a GRU model was trained to predict the stress-strain curve for low-porosity bio-inspired structures under dynamic transverse compression loading. Data augmentation techniques were implemented to reduce the number of simulations required in Abaqus. The trained NN model makes accurate predictions (MAE: 6.07 × 10^-3) for the SEA of all the structures across a range of final strains and strain rates. Further, the trained neural network was used to survey the entire design space of 128,000 structures at each strain rate. Overall, the trained NN model was able to generate all the performance predictions on a low-end laptop, and the stress-strain response of each structure could be predicted in 0.16 ms. Hence, it is well suited as a guide in preliminary design stages to quickly screen designs for more detailed analyses. The SEA maps were generated using a grid search over the geometric variables. The structure-property maps facilitated the identification of several design trends and observations from the trained NN model. The study investigated the impact of porosity arrangement, volume fraction, strain rate, and orientation on the SEA of low-porosity structures, and revealed that varying the orientation of the pores can result in a difference in SEA of approximately 4%. Further, arranging pores vertically at the same volume fraction led to greater SEA. Such SEA maps could also be used to study the effect of design parameters on energy absorption for various loading conditions. Pearson correlation analysis was also utilized to study the correlation between SEA and the other geometric parameters. The results indicated a strong negative correlation between SEA and porosity volume fraction, while only minor correlations were observed for the other variables. The minor correlations between the variables reinforce the need to utilize exploratory grid searches to identify the select configurations that exhibit higher SEA under given loading conditions.
In future work, gradients of the GRU model will be utilized to define an inverse design problem and generate new designs. In the current work, periodic boundary conditions were not enforced on the representative volume when comparing different structures. The effect of enforcing periodic boundary conditions will also be explored in future work.
§ DATA AVAILABILITY
The data and source code that support the findings of this study will be available upon request during the review process and will be made open source after the publication is online.
§ CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
§ ACKNOWLEDGEMENTS
We acknowledge the support of the National Science Foundation grant (MOMS-1926353) and the Army Research Office contract (No. W 911NF-18-2-0067).
§ CREDIT AUTHOR CONTRIBUTIONS
Shashank Kushwaha: Conceptualization, Methodology, Software, Formal analysis, Investigation, Data Curation, Writing - Original Draft. Junyan He: Methodology, Software, Formal analysis, Writing - Original Draft. Diab Abueidda: Supervision, Writing - Review & Editing. Iwona Jasiuk: Supervision, Resources, Writing - Review & Editing, Funding Acquisition.
unsrtnat
|
http://arxiv.org/abs/2307.01495v1
|
20230704060222
|
Furstenberg entropy spectra of stationary actions of semisimple Lie groups
|
[
"Jérémie Brieussel",
"Tianyi Zheng"
] |
math.DS
|
[
"math.DS",
"math.GR",
"math.PR"
] |
Furstenberg entropy spectra of stationary actions of semisimple Lie
groups
Jérémie Brieussel and Tianyi Zheng
==========================================================================
We determine Furstenberg entropy spectra of ergodic stationary actions
of SL(d,ℝ) and its lattices. The constraints on entropy
spectra are derived from a refinement of the Nevo-Zimmer projective
factor theorem. The realisation part is achieved by means of building
Poisson bundles over stationary random subgroups.
§ INTRODUCTION
A compact metrizable space X acted upon continuously by a locally
compact group G equipped with a probability measure μ admits
a stationary probability measure η, which means a fixed point
of the convolution μ∗η=η. The system G↷(X,η)
is referred to as a stationary action of G. The study of stationary actions
of semisimple Lie groups was initiated by Furstenberg <cit.>.
He showed in particular that stationary measures on X are in bijection
with measures invariant under a minimal parabolic subgroup P.
He also introduced a numerical invariant h_μ(X,η) nowadays
referred to as the Furstenberg entropy:
h_μ(X,η)=-∫_G∫_Xlog(dg^-1η/dη)(x)dη(x)dμ(g).
It is 0 if and only if η is a G-invariant measure.
A systematic study of ergodic stationary actions of semisimple Lie
groups was developed by Nevo and Zimmer in a series of articles. They
established that any such action admits a maximal projective factor
(G/Q,ν_Q), where Q is a parabolic subgroup of G; and
when G is a higher rank simple Lie group,
this factor is trivial if and only if the stationary measure η
is actually invariant <cit.>. Under a further mixing assumption
(called P-mixing), they proved that the stationary system is a relative
measure-preserving extension of the maximal projective factor <cit.>. This implies
in particular that h_μ(X,η)=h_μ(G/Q,ν_Q) and it follows
that the Furstenberg
entropy of P-mixing stationary (G,μ)-spaces can take on only
finitely many values <cit.>. These results no longer hold without the higher-rank hypothesis, as PSL(2,ℝ) admits infinitely many P-mixing stationary systems (which can in fact be taken to be smooth manifolds) with distinct entropies. Nor do they hold without
the P-mixing hypothesis, as groups with a parabolic subgroup mapping
onto PSL(2,ℝ) may have infinite entropy spectrum <cit.>.
The purpose of the present article is to give a complete description
of the entropy spectrum of SL(d,ℝ) and of its lattices,
equipped with appropriate measures; see Theorem <ref>
below.
We say a step distribution μ on G has finite boundary
entropy if the Furstenberg entropy of the Poisson boundary of (G,μ)
is finite. For such a measure μ, we refer to the range of possible
Furstenberg entropy values over all ergodic μ-stationary systems
as the Furstenberg entropy spectrum of (G,μ):
EntSp(G,μ):={ h_μ(X,ν) : (X,ν) is an ergodic (G,μ)-stationary system } .
Note that by <cit.>, (X,ν) is an
ergodic (G,μ)-stationary system if and only if ν
is extremal in the set of μ-stationary measures on X.
§.§ Structure of stationary systems and constraints on entropy values
We will use:
Let Δ denote simple roots of a semisimple Lie group G.
For I⊆Δ, denote by P_I the standard parabolic
subgroup that corresponds to I (see Section <ref>).
Denote by 𝖿:2^Δ→2^Δ the map that assigns to each I⊆Δ the largest subset 𝖿(I)=I'⊆ I
such that the Levi subgroup L_I' has no ℝ-rank 1
noncompact simple factors.
From the proof of the Nevo-Zimmer projective factor theorem in <cit.>, we extract the following statement. The plausibility of such a formulation is hinted at in the remarks after <cit.>.
Let G be a connected semisimple real Lie group with finite center,
μ an admissible measure on G. Suppose (X,ν) is an ergodic
(G,μ)-system where ν is not G-invariant. Let λ
be the corresponding P-invariant measure on X provided by the
Furstenberg isomorphism. If (G/Q,ν_Q) is the maximal standard projective factor of (X,ν), where Q=P_I, then the measure λ is invariant under the parabolic subgroup
P_𝖿(I).
A probability measure μ on G is called admissible if
suppμ generates G as a semigroup, and some convolution
power μ^∗ k is absolutely continuous with respect to Haar
measure on G. The Furstenberg isomorphism between μ-stationary and P-invariant probability measures on X is described in Section <ref>.
When the Levi subgroup L_I has no rank one noncompact
factors, we have 𝖿(I)=I, and Theorem <ref> implies the following corollary. Compared to <cit.>, the λ-ergodicity assumption on the S-action is dropped; instead
it is assumed that the Levi subgroup of Q has no rank one factors.
Let G be a connected semisimple real Lie group with finite center,
μ an admissible measure on G. Suppose (X,ν) is an ergodic
(G,μ)-system where ν is not G-invariant. Let (G/Q,ν_Q)
be the maximal standard projective factor of (X,ν). If the Levi
subgroup of Q has no ℝ-rank 1 non-compact simple
factors, then (X,ν)→(G/Q,ν_Q) is
a relative measure-preserving extension.
The conclusion of Theorem <ref> can be formulated in terms
of the boundary map. Denote by 𝒫(X) the space of probability
measures on X and let β_ν:G/P→𝒫(X)
be the boundary map associated with the stationary measure
ν (its definition is reviewed in Subsection <ref>).
Then the measure λ is invariant under P_𝖿(I)
if and only if β_ν factors through the
projection G/P→ G/P_𝖿(I). In this formulation, we
can derive an analogous statement for lattices equipped with Furstenberg
measures, through an induction procedure for stationary actions in
<cit.>.
Let Γ be a lattice in a semi-simple Lie group G. Denote
by P a minimal parabolic subgroup of G. We say a non-degenerate
measure μ_0 on Γ is a Furstenberg measure if
the Poisson boundary of (Γ,μ_0) can be identified with (G/P,ν_P), where ν_P is in the same measure class as
m̅_K, the unique K-invariant probability measure on G/P.
Such measures on Γ exist by <cit.>.
In this setting, a (Γ,μ_0)-stationary system (X,ν)
gives rise to a Γ-boundary map β_ν:G/P→𝒫(X),
although G does not necessarily act on X. We derive the following
from Theorem <ref>.
Let Γ be a lattice in a connected semisimple real Lie group
G with finite center. Equip Γ with a Furstenberg measure μ_0. Suppose (X,ν) is an ergodic (Γ,μ_0)-system
where ν is not Γ-invariant. Let (G/Q,ν_Q)
be the maximal standard projective Γ-factor of (X,ν),
where Q=P_I. Then the Γ-boundary map β_ν:G/P→𝒫(X)
factors through G/P_𝖿(I); that is, a.e. β_ν(gP)
depends only on the coset gP_𝖿(I).
Constraints on the Furstenberg entropy spectrum follow from the structure theorems.
Let G be a connected semisimple real Lie group with finite center
and denote by Δ simple restricted roots of G. Let μ
be an admissible step distribution on G with finite boundary entropy.
Then the Furstenberg entropy spectrum of (G,μ) satisfies
EntSp(G,μ)⊆⋃_I⊆Δ[h_μ(G/P_I,ν_I),h_μ(G/P_𝖿(I),ν_𝖿(I))],
where ν_I is the (unique) μ-stationary measure on G/P_I.
An analogous statement for lattices equipped with Furstenberg discretization measures
is stated in Theorem <ref>.
§.§ Realisation of entropy values
For the free group F_k on k generators and step distribution
μ uniform on the generators and their inverses, Bowen shows in
<cit.> that EntSp(F_k,μ) is the full
interval [0, h_μ], where h_μ is
the Furstenberg entropy of the Poisson boundary of (F_k,μ),
which is equal to the random walk asymptotic entropy. The proof is
based on a construction of Poisson bundles over invariant random subgroups
(IRSs) and an analysis of the associated random walks on the coset graphs.
For G=SL(d,ℝ) and its lattices, we realise Furstenberg
entropy values within the constraints of Theorems <ref> and <ref> via a construction of Poisson bundles
over stationary systems. Recall the map I↦𝖿(I) defined
in Notation <ref>. We say a measure μ is B_∞ if it is admissible on G,
with compact support and bounded density with respect to the Haar measure.
Let G=SL(d,ℝ) and denote by Δ={1,…,d-1}
its simple roots. Suppose
* μ is in the B_∞ class on G,
* or μ is a Furstenberg measure on a lattice Γ<G of finite
Shannon entropy.
Write S=⟨ suppμ⟩ for the subgroup
generated by suppμ, then
EntSp(S,μ)=⋃_I⊆{1,…,d-1}[h_μ(G/P_I,ν_I),h_μ(G/P_𝖿(I),ν_𝖿(I))],
where ν_I is the μ-stationary measure on G/P_I.
In particular for B_∞-measures, SL(2,ℝ) has full entropy
spectrum, and SL(3,ℝ) has entropy spectrum of the form
{0}∪[h_μ(G/Q),h_μ(G/P)] for some non-trivial parabolic
subgroup P<Q<G. A particular instance for SL(4,ℝ) is
illustrated in Figure <ref>.
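These two special cases can be checked directly against the definition of 𝖿 and the displayed formula for EntSp(S,μ); the short verification below is only an illustration. For G=SL(2,ℝ), Δ={1} and L_Δ=G is itself a noncompact simple group of ℝ-rank 1, so 𝖿(Δ)=∅; the term I=Δ therefore contributes the interval [h_μ(G/G),h_μ(G/P)]=[0,h_μ(G/P)], and the spectrum is the full interval. For G=SL(3,ℝ), Δ={1,2}; the Levi subgroups L_{1} and L_{2} each contain an SL(2,ℝ) factor, so 𝖿({1})=𝖿({2})=∅, while 𝖿(Δ)=Δ since L_Δ=G has ℝ-rank 2. The term I=Δ thus contributes {0}, the term I=∅ contributes {h_μ(G/P)}, and the union reduces to
{0}∪[h_μ(G/P_{1}),h_μ(G/P)]∪[h_μ(G/P_{2}),h_μ(G/P)],
which is of the stated form {0}∪[h_μ(G/Q),h_μ(G/P)], with Q the maximal parabolic subgroup realising the smaller of the two boundary entropies h_μ(G/P_{1}), h_μ(G/P_{2}).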
It is natural to ask about the entropy spectra of other simple
Lie groups. Note that rank one Lie groups Sp(n,1), n≥2, and
F_4(-20) have Kazhdan's property (T). Therefore by <cit.>,
{0} is an isolated point in their Furstenberg entropy spectra.
For these rank one groups, Theorem <ref> does not provide
sharp constraints on their entropy spectra.
It is classical that for admissible μ considered in Theorem <ref>,
the boundary entropy values h_μ(G/P_I,ν_I)
can be expressed in terms of the Lyapunov spectrum of the μ-random
walk; see the Furstenberg formula in Subsection <ref>. Using Poisson bundles over stationary systems makes it possible to obtain intervals of entropy values.
Such an extension beyond the framework of measure-preserving systems is necessary. The bundles over IRSs considered in <cit.> can be described as factors of a system of the form (X× B,m×ν_B), where m is an ergodic G-invariant measure on X and (B,ν_B) is the Poisson boundary
of the μ-random walk on G. For a higher rank connected simple
Lie group G, <cit.> implies that in this setting, for any G-factor
(Z,ν) of (X× G/P,m×ν_B),
there is a parabolic subgroup Q of G, such that (Z,ν)
is a measure preserving extension of (G/Q,ν̅_B).
Therefore in this case Poisson bundles over measure preserving systems
provide at most 2^r Furstenberg entropy values, where r is
the ℝ-rank of G.
For Q'<Q, two parabolic subgroups of G=SL(d,ℝ) whose
Levi subgroups differ only by a rank 1 factor, we consider stationary G-systems
induced from measure-preserving Q-actions. There is a large supply
of such systems where Q acts through the quotient Q→ PSL(2,ℝ).
The associated Poisson bundles will be factors of the stationary joinings
with the Poisson boundary, rather than direct products. We remark that the random walk models that
appear in the construction, based on stationary joinings,
are inherently different from random walks on stationary random graphs considered in <cit.>.
The proof of realisation of the interval [h_μ(G/Q,ν_Q),h_μ(G/Q',ν_Q')]
is based on a continuity argument. As in <cit.>, we
construct a family of Poisson bundles (Z_p,λ_p), parametrised by p∈[0,1].
The key point is to show that the entropy h_μ(Z_p,λ_p)
depends continuously on p. Upper and lower semi-continuity are
treated separately. Upper semi-continuity follows from standard entropy
formulae. More precisely, they provide expressions in terms of infimum of mutual information
(or Shannon entropy) of time n random walks. This allows to show
that entropy is an infimum of continuous functions. Our approach to lower semi-continuity
is based on identification of the Poisson bundles with concrete models. Then
the KL-divergence occurring in the definition of entropy can be expressed
as the supremum of relative entropies over finite partitions in the model. After verifying
certain approximation properties, entropy is expressed as a supremum of continuous functions.
In view of an explicit identification of the Poisson bundles, it is
convenient to use measure preserving systems of SL(2,ℝ)
induced from IRSs of a lattice F, taken for simplicity to be the
Sanov subgroup, free of rank 2. Their respective Poisson boundaries,
the boundary circle ∂ℍ of the hyperbolic plane
and the space of ends ∂ F, are F-measurably isomorphic, provided F
is endowed with a Furstenberg measure, which can be chosen to have
finite entropy and finite log-moment.
We show that taking Poisson bundles interacts in a compatible way with
inducing. These two
stages of induction reduce the proof of Theorem <ref>
to the case of free groups, for which a key approximation property is shown in
Proposition <ref>. Along the way we show full
entropy realisation for a large class of step distributions on F:
Let F be a free group of finite rank, endowed with a non-degenerate probability measure
μ with finite entropy and finite logarithmic moment. Then EntSp(F,μ)=[0, h_μ].
This generalizes Bowen's original result for the case where μ is uniform on the generators and their inverses <cit.>. In the case of finitely supported μ, full entropy realisation was known by <cit.>. For virtually free groups, full realisation
is known for symmetric measures with a finite 4th moment by <cit.>, and the existence of a gap is ruled out for measures with a finite first moment by <cit.>. Similar arguments can be applied to other groups acting on trees, which we will investigate elsewhere.
§.§ Organization of the article
Section <ref> collects necessary preliminaries.
After that, the article is divided into three parts, with some additional
details provided in two appendices.
§.§.§ Part I: Constraints on entropy spectrum
The first part consists of Sections <ref> and <ref>.
In Section <ref> we apply the line of arguments in
<cit.>, in particular an operation on continuous functions using
contracting dynamics and Gauss map considerations, to show a property of the maximal projective factor,
stated in Theorem <ref>.
Consequences of the structure theorem, namely Theorems <ref>,
<ref> and <ref>, are derived in Section <ref>.
§.§.§ Part II: Poisson bundle over a stationary system
The second part consists of Sections <ref> to <ref>,
where we develop some general theory on Poisson bundles over stationary
systems.
In Section <ref>, we explain the definition of
such Poisson bundles, which starts with stationary joinings. The resulting
system (Z,λ) is a G-factor that fits into
(X× B,ην_B)→(Z,λ)→(X,η),
where (X,η) is a (G,μ)-stationary system, (B,ν_B)
is the Poisson boundary of μ-random walk, and ην_B
denotes the stationary joining of the two. For a stationary system
(X,η) which is standard in the sense of Furstenberg-Glasner
<cit.>, we show that the Poisson bundle over (X,η)
can be described as a proximal extension where the fibers are Poisson
boundaries of coset Markov chains whose law is given by suitable Doob
transforms. A typical example of such (X,η) is induced from
a measure-preserving action of Q, where Q is a parabolic subgroup.
The fiberwise Markov property is the key ingredient in deriving the
entropy formulae in Subsection <ref>. It is standard
that the entropy formulae imply upper semi-continuity properties of
entropy, as explained in Section <ref>.
It remains to obtain lower semi-continuity. As a starting point,
we formulate in Section <ref> entropy criteria
for identification of Poisson bundles, which are adapted from the
strip and ray criteria originally due to Kaimanovich <cit.>.
Here by identification, we mean explicitly describing a (G,μ)-stationary
system (M,λ̅) and showing it is G-isomorphic
to (Z,λ).
In Section <ref> we formulate an approach to prove
lower semi-continuity of Furstenberg entropy for some specific systems
such as end-compactification bundles. The basic idea is that a symbolic
representation provides a natural sequence of finite partitions into
cylinder sets, which generate the σ-field on the fiber. We
may then write the fiberwise KL-divergence as the supremum of relative
entropy on the finite partitions. To ensure lower-semicontinuity,
it is sufficient to show that the relative entropy on the chosen finite
partitions varies continuously over the base. In Section <ref>
we describe a technical condition, referred to as locally
constant uniform approximation, which implies such continuity.
§.§.§ Part III: entropy realization for free groups, for SL(d,ℝ)
and its lattices
The third part consists of Sections <ref>
and <ref>. We apply the general framework of Part
II to free groups and to SL(d,ℝ).
In Section <ref>, we revisit Poisson bundles
of free groups over IRSs constructed in Bowen <cit.>. They
are supported on subgroups with tree-like Schreier graphs.
We apply the strip criterion Theorem <ref> to show that these
bundles are isomorphic to end compactification bundles over the same
base system. The finite partitions allowing approximations are simply
the shadows of vertices on a sphere of given radius. We conclude this
section with a proof of Theorem <ref>.
Section <ref> is devoted to the realization part
of Theorem <ref>. Denote by F the free group on two
generators. Take a parabolic subgroup Q<SL(d,ℝ) whose
Levi subgroup L has a rank-1 factor. In matrix form it means that
L has a 2×2 block on the diagonal. Since F is a lattice
of SL(2,ℝ), we may induce an IRS of F to a stationary
random subgroup (SRS) of G=SL(d,ℝ), see Subsection <ref>.
The corresponding SRS is a measure-preserving extension of (G/Q,ν_Q).
Via a discretization argument, we transfer identification results
for the free group in Section <ref> to identify
Poisson bundles over the (co-)induced SRS of G in Subsection <ref>.
Roughly speaking, in such a Poisson bundle, a fiber can be described
as the space of ends of a tree-like graph equipped with a suitable
measure, where the graph and measure depend on the base point. One
may view such an identification result as providing a symbolic representation
fiberwise for the Poisson bundle. Approximations on the tree-like
fibers of the bundle are integrated to obtain lower semi-continuity
via Fatou's lemma. Together with Section <ref>,
we conclude the continuity argument.
We mention that the SRSs in the construction are supported on non-discrete
subgroups of SL(d,ℝ) for d≥3. This is necessary:
by a result of Fraczyk and Gelander <cit.>, every discrete SRS
of SL(d,ℝ), d≥3, is an IRS.
§.§.§ Appendices
Appendix A reviews the Nevo-Zimmer operation on continuous functions
from <cit.> based on contracting dynamics. Operations in the
expanding direction were considered earlier: such an operation was first used by Margulis in the proof of the Normal Subgroup Theorem <cit.>, and by Nevo-Zimmer in the structure theorem under the P-mixing assumption
<cit.>.
In Appendix B we include proofs of mutual information and entropy
formulae for Furstenberg entropy of Poisson bundles, which are stated
in Subsection <ref>. These follow from classical arguments
adapted to our setting.
Acknowledgments. J.B. acknowledges support of the ANR-22-CE40-0004 GoFR. T.Z. was partially supported by
a Sloan research fellowship. It is a pleasure to thank Yair Hartman for interesting discussions at various stages of this work.
§ PRELIMINARIES
§.§ Induced actions
We recall basic facts about induced actions; see Zimmer's book <cit.> for a detailed treatment.
Let G be a locally compact group and H a closed subgroup of
G. Let m_G be a left Haar measure on G. Let (S,η)
be an H-space. Let H act from the right on the product G× S
by (g,s).h=(gh,h^-1.s). Denote by X=G×_HS the space
of H-orbits in G× S and p:G× S→ X the natural
projection. There is an action of G on the quotient X induced
from G↷ G× S by g.(g',s)=(gg',s). The space
X with the quotient Borel structure and quotient measure from (G× S,m_G×η)
is called the G-space induced from H↷(S,η).
When no ambiguity arises, we write [g,s] for the H-orbit p(g,s).
Another way to describe the induced G-action is through a cocycle.
Choose a Borel section θ:G/H→ G of the natural projection
G→ G/H, such that θ([e])=e. Let α:G× G/H→ H
be the cocycle defined as α(g,[g'])=θ([gg'])^-1gθ([g']).
Denote by (G/H×_αS,p_∗m_G×η)
the G-space where G acts by g.([g'],s)=([gg'],α(g,[g']).s).
As G-spaces, G/H×_αS is isomorphic to X via the
map
([g],s)→ p(θ([g]),s),
see <cit.>.
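As a simple illustrative instance of the cocycle description (an elementary example, not drawn from the sources cited above): take G=ℝ, H=ℤ, and the Borel section θ:ℝ/ℤ→ℝ given by the fractional part, θ([x])={x}∈[0,1). In additive notation the associated cocycle is
α(g,[x]) = g+{x}-{g+x} ∈ ℤ,
the integer carried when g is added to the representative {x}; for a ℤ-space (S,η), the induced ℝ-action on ℝ/ℤ×_αS is then g.([x],s)=([g+x],α(g,[x]).s).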
§.§ G-spaces and factor maps
We follow the preliminaries in <cit.>. Let
G be a locally compact second countable group. We say a Lebesgue
space (X,η) is a G-space if G acts measurably
on X and the probability measure η is quasi-invariant with
respect to the G-action. Given a probablity measure μ on G,
we say (X,η) is a stationary (G,μ)-space
if in addition μ∗η=η.
A measure μ on G is admissible if its support generates G
as a semi-group and some convolution power is absolutely continuous
with respect to Haar measure. It is in the B_∞ class if
it is furthermore absolutely continuous with respect to Haar measure and admits
a bounded density with compact support.
By <cit.>, we may take a compact model for (X,η),
that is, a compact metric space Z on which G acts continuously,
equipped with a probability measure η_Z on its Borel σ-algebra,
such that (X,η) and (Z,η_Z) are
measurably isomorphic G-spaces. The space of probability measures
on a compact metric space Z is denoted by P(Z). Equip P(Z)
with the weak^∗-topology.
For a G-map π:(X,η)→(Y,ν) between two compact G-spaces,
there exists a Borel map σ:X→ Y such that σ=π,
η-a.e. We say π is a G-factor map if Y=π(X) and
ν=π_∗η; the set π^-1({ y})
is called the fiber over y. Denote by D_π:Y→ P(X) the
disintegration map, which is the unique map with the property that
for ν-a.e. y, D_π(y) is supported on the fiber π^-1({ y}),
and ∫_YD_π(y)dν(y)=η. We will often write η^y:=D_π(y).
A G-factor map π:(X,η)→(Y,ν) is called a measure
preserving extension if D_π is G-equivariant, that is,
g.η^y=η^g.y for all g∈ G and a.e. y∈ Y.
By Mackey's point realization theorem, G-factors of (X,η)
correspond to G-invariant sub-σ-algebras on X, modulo
zero measure subsets.
§.§.§ The boundary map
Denote by (B,ν_B) the Poisson boundary of (G,μ)
and bnd:(G^ℕ,ℙ_μ)→(B,ν_B)
the map from the trajectory space to the Poisson boundary.
Let (X,η) be a (G,μ)-stationary system, μ∗η=η.
By the martingale convergence theorem, ℙ_μ-a.s.,
η_ω=lim_n→∞ω_n.η.
Since the map ω↦η_ω is measurable with respect
to the invariant σ-field of the random walk, it factorizes
through the Poisson boundary of (G,μ). That is, we have a G-measurable
map
β_η:B → P(X)
bnd(ω) ↦η_ω,
where P(X) is the space of probability measures on X. The map
β_η is called the boundary map, it is the
essentially unique measurable G-map B→ P(X) which satisfies
the barycenter property that
η=∫_Bβ_η(b)dν_B(b),
see for instance <cit.>.
Recall the following terminology.
* The (G,μ)-space (X,η) is a μ-boundary (equivalently
μ-proximal) if ℙ_μ-a.s., the measures η_ω∈ℳ(X)
are point masses. In other words, if (X,η) is a G-factor
of the Poisson boundary (B,ν_B).
* A G-factor map π:(X,η)→(Y,ν) is called a μ-proximal
extension if ℙ_μ-a.s., the extension (X,η_ω)→(Y,ν_ω)
is a.s. one-to-one.
* We call a (G,μ)-stationary system (X,η) standard
if there exists a G-factor map π:(X,η)→(Y,ν) with
(Y,ν) a μ-proximal system and π a measure preserving
extension. By <cit.>, the structure
of a standard system as a measure preserving extension of a proximal
system is unique.
§.§.§ Furstenberg isomorphism
Let G be a lcsc group equipped with an admissible probability measure
μ on G. Assume that the Poisson boundary of the μ-random
walk can be identified with a homogeneous space G/H. Following <cit.>,
in this situation the boundary map can be interpreted as what is now
called the Furstenberg isomorphism/correspondence.
Since μ is admissible, then the stationary measure ν is
in the quasi-invariant measure class on G/H. Since the action of
G on its Poisson boundary is amenable in the sense of Zimmer, we
have that the subgroup H is necessarily amenable.
Given a locally compact G-space X, denote by 𝒫_μ(X)
the space of μ-stationary probability measures on X, and 𝒫_H(X)
the space of H-invariant probability measures on X. Then by
<cit.>, there is an isomorphism between the
affine spaces 𝒫_μ(X) and 𝒫_H(X), implemented
by
ψ_μ :𝒫_μ(X)→𝒫_H(X)
η ↦λ=β_η(H),
where β_η:G/H→𝒫(X) is the G-boundary
map associated with η. Denoting by ν_H the μ-harmonic measure on G/H, we have that the barycenter map is implemented
by
η=∫_G/Hβ_η(w)dν_H(w)=∫_G/Hg.λ dν_H(gH).
Let X be a G-space, H a closed subgroup of G. Given a
measure ν_0 on G/H and H-invariant measure λ
on X, we write ν_0∗λ:=ν̃_0∗λ=∫_Gg.λ dν̃_0(g),
where ν̃_0 is any lift of ν_0 to Prob(G).
Since λ is H-invariant, it does not depend on the choice
of ν̃_0. In this setting, denote by X_0 the support
of λ. We can then view G↷(X,ν∗λ)
as a factor of the induced system (G/H×_αX,ν×λ),
see <cit.>.
We will refer to the map ψ_μ:𝒫_μ(X)→𝒫_H(X)
as the Furstenberg isomorphism. This isomorphism is continuous with
respect to the weak^∗ topology. As a consequence of the isomorphism,
we have that uniqueness of the μ-stationary measure on X is equivalent to uniqueness of the H-invariant measure on X.
Recall that the action of G on a compact space X is said to
be strongly proximal if for any probability measure η
on X, there exists a sequence of elements (g_n)
in G such that g_n.η converges weakly to a δ-mass.
When G↷ X is strongly proximal, by <cit.>,
the G-space X× X supports a unique stationary measure for
μ; and this measure is concentrated on the diagonal of X× X.
It follows in particular that if G↷ X is strongly
proximal, then the H-invariant measure on X is unique.
§.§.§ Furstenberg entropy
Recall that given two probability measures P and Q on the same
space (Ω,ℬ), if P is absolutely continuous with
respect to Q, the relative entropy of P with respect to Q,
also known as their Kullback-Leibler divergence, is defined as the
integral (possibly infinite):
D(P∥Q):=∫_Ω(log dP/dQ) dP.
The Furstenberg entropy, as defined in (<ref>) can be written
as h_μ(X,ν)=∫_GD(ν∥ g^-1ν)dμ(g)=∫_GD(g.ν∥ν)dμ(g).
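As an elementary illustration of the definition (not used in the sequel): if P and Q are the Bernoulli measures on Ω={0,1} with P({1})=p and Q({1})=q, where 0<p,q<1, then
D(P∥Q)=p log(p/q)+(1-p)log((1-p)/(1-q)),
which is nonnegative and vanishes exactly when p=q.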
We refer to <cit.> for a detailed account on Furstenberg
entropy, and only recall here a few well-known properties.
Furstenberg entropy is monotone under factors: when π:(X,η)→(Y,ν)
is a G-map, we have h_μ(Y,ν)≤ h_μ(X,η) with equality
if and only if π is measure preserving. The Furstenberg entropy
of a (G,μ)-space is maximal for the Poisson boundary 0≤ h_μ(X,η)≤ h_μ(B,ν_B).
When η' is a probability measure in the measure class of η,
then by <cit.>, we have
h_μ(X,η)=-∫_G∫_Xlog(dg^-1η'/dη')(x)dη(x)dμ(g).
This allows one to view the Furstenberg entropy as a cohomology invariant. It permits changing the stationary measure η in the Radon-Nikodym derivative in order to compute the entropy.
We also record the following property of the barycenter map.
Let (C,ν_C) and (X,η) be two nonsingular
G-spaces. Suppose there is a G-map β:C→𝒫(X)
such that η is the barycenter of β_∗(ν_C). Then
h_μ(C,ν_C)≥ h_μ(X,η).
Consider the product space C× X on which G acts diagonally,
and equip it with the measure λ such that
∫_C× Xfdλ=∫_C∫_Xf(c,x)dβ_c(x)dν_C(c).
The map C× X→ C with (c,x)↦ c is a measure-preserving
extension as β is equivariant. It follows that h_μ(C× X,λ)=h_μ(C,ν_C).
Since η=bar(β_∗(ν_C)), the
coordinate projection C× X→ X pushes forward the measure
λ to η. Therefore h_μ(X,η)≤ h_μ(C× X,λ).
§.§ Structure of parabolic subgroups
Let G be a semisimple real Lie group; we recall some structure theory, see <cit.> and also <cit.>.
Denote by 𝔤 the Lie algebra of G. Let θ:𝔤→𝔤
be a Cartan involution on 𝔤. We have the Cartan decomposition
𝔤=𝔨⊕𝔭, where 𝔨
(𝔭 resp.) is the +1 (-1 resp.) eigenspace of θ.
Let 𝔞 be a maximal commutative subalgebra of 𝔭.
Denote by Φ=Φ(𝔞,𝔤) the set of restricted
roots. For a fixed ordering on Φ, denote by Δ simple
roots. For α∈Φ, denote by 𝔤_α the
restricted root space 𝔤_α={x∈𝔤:[h,x]=α(h)x for all h∈𝔞}, and set 𝔫=⊕_α >0𝔤_α.
Let G=KAN denote the Iwasawa decomposition, where K,A,N are
the analytic subgroups of G corresponding to 𝔨,𝔞,𝔫.
Conjugacy classes of parabolic subalgebras of 𝔤 are
parametrized by subsets of Δ. For I⊆Δ, let
𝔞_I=⋂_α∈ Iker(α).
Let 𝔷(𝔥) denote the centralizer of the
subalgebra 𝔥 in 𝔤. Let 𝔪_I
be the orthogonal complement of 𝔞_I in 𝔷(𝔞_I)
with respect to the restriction of the Killing form, 𝔷(𝔞_I)=𝔪_I⊕𝔞_I.
Let [I] denote the set of roots in Φ
expressible as an integral linear combination of elements of I. Let
𝔫^I=⊕_α>0,α∉[I]𝔤_α, 𝔫^-I=⊕_α<0,α∉[I]𝔤_α.
Then the parabolic subalgebra 𝔭_I admits the decomposition
𝔭_I=𝔪_I⊕𝔞_I⊕𝔫^I.
The parabolic subgroup P_I is the normalizer of the parabolic
subalgebra 𝔭_I. Denote by A_I=exp(𝔞_I),
N_I=exp(𝔫^I), N̅_I=exp(𝔫^-I)
and L_I=Z_G(A_I) the centralizer of A_I in
G. Then P_I admits the Levi decomposition P_I=L_I⋊ N_I,
and the Levi subgroup L_I is reductive: it is the product of M_I with its split component A_I. The decomposition P_I=M_IA_IN_I is
called the Langlands decomposition of P_I, see <cit.>.
The minimal parabolic subgroup P corresponds to the empty subset
I=∅. Write the corresponding decompositions as 𝔭=𝔭_∅=𝔪⊕𝔞⊕𝔫
and P=P_∅=MAN. By <cit.>,
we have that M_I=M_I^0M, where M_I^0 is the identity
component of M_I. It follows that P_I=M_I^0P.
For I⊆Δ, let
R_I :={exp(s): s∈𝔞, α_1(s)≤0 for all α_1∈Δ and α_2(s)<0 for all α_2∈Δ∖ I} ,
D_I :=R_I∩ A_I={exp(s): s∈𝔞_I, α(s)<0 for all α∈Δ∖ I} .
The set D_I is nonempty if and only if I≠Δ (<cit.>).
For I⊆Δ, s∈ R_I, by <cit.>
the automorphisms Int(s)|_N_I and Int(s^-1)|_N̅_I
are contracting, where Int(g).x=gxg^-1 is conjugation by
g.
For G=SL(5,ℝ) and I={1,3,4}, we have
L_I=([ ∗ ∗ 0 0 0; ∗ ∗ 0 0 0; 0 0 ∗ ∗ ∗; 0 0 ∗ ∗ ∗; 0 0 ∗ ∗ ∗; ]),
N_I=([ 1 0 ∗ ∗ ∗; 0 1 ∗ ∗ ∗; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1; ]),
N̅_I=([ 1 0 0 0 0; 0 1 0 0 0; ∗ ∗ 1 0 0; ∗ ∗ 0 1 0; ∗ ∗ 0 0 1; ]),
D_I={([ e^-t_1I_2 0; 0 e^-t_2I_3; ]), t_1>t_2}, where I_k is the k× k identity matrix.
§ PROPERTIES OF THE NEVO-ZIMMER MAXIMAL PROJECTIVE FACTOR
§.§ Statement
Let G be a connected semisimple real Lie group with finite center
and P be a minimal parabolic subgroup of G. Suppose ν_P
is a probability measure in the G-quasi-invariant measure class
on G/P. For a parabolic subgroup Q>P, denote by ν_Q the
pushforward of ν_P under the projection G/P→ G/Q.
We are given a G-system (X,ν), where X is taken to be a
compact model. Suppose there is a P-invariant measure λ
on X such that ν=ν_P∗λ, where the convolution
is explained in Notation <ref>.
This is the setting of <cit.>. The measure ν_P is the μ-harmonic
measure on G/P for some admissible step distribution μ on
G; (X,ν) is a (G,μ)-stationary system and λ
is provided by the Furstenberg isomorphism.
Denote by Δ simple roots of G. Recall that a standard parabolic
subgroup P_I, I⊆Δ, admits the Levi decomposition
P_I=L_I⋊ N_I, where N_I is the unipotent radical
of P_I and the Levi subgroup L_I=Z_G(A_I) is reductive.
Our goal in this section is to prove the following.
Let G, ν_P, (X,ν) be as above, where ν=ν_P∗λ
for a P-invariant, P-ergodic measure λ on X. Suppose
(G/Q,ν_Q) is a maximal projective factor of (X,ν),
where Q=P_I is a parabolic subgroup of G. Then each connected
simple non-compact factor with ℝ-rank ≥2 of the Levi
subgroup L_I of Q preserves the measure λ.
The existence of a (unique) maximal projective factor follows from
<cit.>. Consequences of Theorem <ref>
will be derived in Section <ref>. Throughout the
rest of this section, we assume the setting of Theorem <ref>.
§.§ Q-system and disintegration of the Haar measure m_K
§.§.§ Notations for Borel sections and cocycles
We first set some notations for the cocycles that appear in the induced
systems. Fix a choice of Borel sections θ:Q/P→ Q and τ:G/Q→ G
with the property that θ([P])=e and τ([Q])=e. For later
convenience, we also require that θ(Q/P)⊆ Q∩ K
and τ(G/Q)⊆ K: this is possible because K
acts transitively on G/P. Denote by β:G/Q× G→ Q the
cocycle associated with the section τ. Then we have a Borel
section
ϑ:G/P → G
gP ↦τ(gQ)θ(τ(gQ)^-1gP).
Denote by α the associated cocycle G/P× G→ G with
this section. Since τ(Q)=e, we have that ϑ restricted
to Q/P agrees with θ. Then α restricted to Q/P× Q
is the cocycle associated with θ. In the notation for a Q-system
Q/P×_αX_0, it is understood that the cocycle α
is restricted to Q/P× Q.
By the inducing in stages property (see
e.g., <cit.>), the two systems G/Q×_β(Q/P×_αX_0)
and G/P×_αX_0 are isomorphic, via a G-isomorphism
j_P^Q:G/Q×_β(Q/P×_αX_0) → G/P×_αX_0
(gQ,qP,x_0) ↦(τ(gQ)ϑ(qP)P,x_0).
§.§.§ Change to K-invariant measures and decomposition of Haar measure
For our purposes it is convenient to have that the harmonic measure
on G/P is K-invariant, so that disintegration of Haar measures
can be applied. Recall that ν_0 denotes the μ-harmonic
measure on G/P. Denote by λ the P-invariant measure
on X given by the Furstenberg isomorphism, ν=ν_P∗λ.
Equivalently, the boundary map β_ν:G/P→𝒫(X)
sends gP to g.λ.
Denote by G=KAN the Iwasawa decomposition of G and m_K
the normalized Haar measure on the compact subgroup K. Since ν_P
is assumed to be in the G-quasi-invariant measures on G/P, it
follows that ν=ν_P∗λ and ν'=m_K∗λ
are in the same measure class on X as well, see <cit.>.
The boundary map associated with ν' sends gP to g.λ,
that is, β_ν'=β_ν. Denote
by m̅_K the pushforward of m_K under the natural projection
G→ G/Q. Then (G/Q,m̅_K) is a maximal projective
factor of (X,ν') if and only if (G/Q,ν_Q)
is a maximal projective factor of (X,ν). Therefore to prove Theorem
<ref>, we may replace (X,ν) by (X,ν')
where ν'=m_K∗λ.
Consider the decomposition of the Haar measure on K over the closed
subgroup K∩ Q. Since K is compact, both K and K∩ Q
are unimodular. Denote by m̅_K the (unique) K-invariant
probability measure on K/K∩ Q and m_K∩ Q the Haar measure
on K∩ Q normalized to have total mass 1. Recall that in Subsection
<ref> we have chosen a Borel section τ:G/Q→ G
such that τ(Q)=e and τ(G/Q)⊆ K. Then the decomposition
of Haar measure (see e.g., <cit.>) implies
the disintegration
m_K=∫_G/Qτ(y).m_K∩ Qdm̅_K(y).
§.§.§ Structure of the Q-system
We start in the same way as the proof of <cit.>.
The assumptions of Theorem <ref> imply that (X,ν)
fits into the sequence of G-spaces:
G/P×_αX_0 →^ξ X →^φ G/Q.
Here φ:X→ G/Q
is the G-map to the maximal projective factor, X_0 is the support of the measure λ=ψ_μ(ν)=β_ν(P) and the factor map ξ:G/P×_αX_0→ X is given
by ξ(gP,x_0)=ϑ(gP).x_0, where ϑ is the section map defined in (<ref>). Also note ξ_∗(p_∗m_K ×λ)=ν where p:G→ G/P is the quotient map.
By <cit.>
we have that X is induced from an ergodic action of Q. Next
we derive some information on the Q-system that arises this way.
The main property we will use is that such a Q-system is induced
from the P-system (X_0,λ) as well, see Proposition
<ref> below.
The following lemma is well-known; it is based on the fact that the
only P-invariant measure on G/Q is the δ-mass at the
identity coset Q.
Suppose (X,ν) fits into the sequence of G-spaces (<ref>).
Then we have for λ-a.e. x_0∈ X_0,
φ∘ξ(gP,x_0)=gQ for all g∈ G.
Since G/Q is a strongly proximal boundary, by <cit.>,
there is a unique P-invariant probability measure on G/Q. The
point mass at the identity coset Q is invariant under P, thus
it is the unique P-invariant measure on G/Q.
The measure λ on X_0, also viewed as a measure supported
on the set {(P,x_0):x_0∈ X_0}
in G/P×_αX_0, is P-invariant. Therefore its pushforward
under φ∘ξ is a P-invariant measure on G/Q, which
must be δ_Q. By G-equivariance, we have then for any
g∈ G and x_0∈ X_0'
φ∘ξ(gP,x_0)=φ∘ξ(g.(P,α(g,P)^-1.x_0))=g.(φ∘ξ(P,α(g,P)^-1.x_0))=gQ.
Next we use the disintegration (<ref>) to specify
a Q-system, which will be denoted as (Y,η). Recall
that β is the cocycle associated with the section τ:G/Q→ G.
Assume (<ref>). Define Y as the subset of X given by
Y:=ξ(Q/P×_αX_0),
equipped with the measure η:=m_K∩ Q∗λ. Then the
induced system (G/Q×_βY,m̅_K×η)
is G-isomorphic to (X,ν) via the map
ϕ :G/Q×_βY→ X
(gQ,y)↦τ(gQ).y.
Define ξ̃ to be the map
ξ̃ :G/Q×_β(Q/P×_αX_0)→ G/Q×_βY
(gQ,(qP,x_0))↦(gQ,ϑ(qP).x_0).
Recall the notations that p:G→ G/P is the natural projection,
and m̅_K denotes the pushforward of m_K under the projection
G→ G/Q. Write Z=G/Q×_β(Q/P×_αX_0).
By inducing in stages, we have a G-isomorphism j_P^Q:Z→ G/P×_αX_0
as in (<ref>).
Let φ:X→ G/Q be the G-factor map in (<ref>).
Then we have a sequence of G-factors:
(Z,m̅_K×(p_∗(m_K∩ Q)×λ)) →^ξ̃ (G/Q×_βY,m̅_K×η) →^ϕ (X,ν) →^φ (G/Q,m̅_K).
The measurability and G-equivariance of the maps ξ̃
and ϕ are clear by their definitions. Also by the definitions
of the maps the following diagram commutes:
Z →^ξ̃ G/Q×_βY
↓ j_P^Q        ↓ ϕ
G/P×_αX_0 →^ξ X.
Since j_P^Q is a G-isomorphism and ξ is a G-factor
map, we see that ξ̃ and ϕ are G-factor maps as
well.
We need to verify that the measures follow the maps. The measure η
is defined as m_K∩ Q∗λ. Since k^-1ϑ(kP)∈ P
and λ is P invariant, we have k.λ=ϑ(kP).λ
and then
ξ_∗(p_∗(m_K∩ Q)×λ)=∫_K∩ Qϑ(kP).λ dm_K∩ Q(k)=∫_K∩ Qk.λ dm_K∩ Q(k)=m_K∩ Q∗λ=η.
Therefore for the map ξ̃,
ξ̃_∗(m̅_K×(p_∗(m_K∩ Q)×λ))=m̅_K×ξ_∗(p_∗(m_K∩ Q)×λ)=m̅_K×η.
Next we verify that ϕ_∗(m̅_K×η)=ν:
ϕ_∗(m̅_K×η) =∫_G/Q∫_K∩ Qτ(y)k.λ dm_K∩ Q(k)dm̅_K(y)
=(∫_G/Qτ(y).m_K∩ Qdm̅_K(y))∗λ=m_K∗λ=ν.
In the second line we plugged in the decomposition formula (<ref>).
For the sequence of G-factor maps in the Claim, we have ϕ∘ξ̃=ξ∘ j_P^Q
and by Lemma <ref>, φ∘ϕ∘ξ̃(gQ,qP,x_0)=τ(gQ)ϑ(qP)Q=gQ.
To show that the map ϕ is indeed a measurable G-isomorphism,
it remains to verify that ϕ is injective almost everywhere.
Since φ∘ϕ(gQ,y)=gQ, necessarily ϕ^-1({x})⊆{φ(x)}× Y.
When restricted to this fiber, we have ϕ(φ(x),y)=θ(φ(x)).y
which is injective in y∈ Y. We conclude that ϕ is an isomorphism.
As shown in the proof of <cit.>, if the Q-system
(Y,η) admits a projective factor (Q/Q_1,η̅),
where Q_1<Q is a proper closed subgroup of Q, then the G-system
(X,ν) admits (G/Q_1,ν_Q_1) as
a projective factor. To proceed, we will carry out the inductive step
which applies the Nevo-Zimmer arguments to the Q-systems
(Q/P×_αX_0,m̅_K∩ Q×λ) →^ξ (Y,η).
The goal is to show that if a higher-rank factor of L_I=Z_G(A_I)
does not preserve the measure λ, then we will be able to
find a nontrivial projective factor (Q/Q_1,η̅)
of (Y,η), contradicting the assumption that G/Q is the maximal
projective factor. This will be carried out in the next subsections.
§.§ The Nevo-Zimmer operation applied to parabolic subgroups
Throughout this subsection, we assume the setting of Theorem <ref>
and (<ref>). We have a Q-system (Y,η) described
in Proposition <ref>, which fits in the setting of Appendix <ref>
with Q=P_I the lcsc group, its closed subgroup P=P_∅
and
ξ_0:(Q×_PX_0,m_K∩ Q×λ)→(Y,η).
We use notations for parabolic subgroups as in Subsection <ref>.
Take the Langlands decomposition Q=M_IA_IN_I. Denote by M_1,…,M_ℓ
the noncompact simple factors of the connected component M_I^0,
and 𝔪_1,…,𝔪_ℓ the corresponding
Lie algebras. For each noncompact simple factor
M_j, let 𝔞_j=𝔪_j∩𝔞.
Denote by I_i the subset of I that consists of α∈ I
such that α vanishes on all 𝔞_k, k≠ i.
Denote by p:G→ G/P the natural projection. Define U̅_I=Q∩N̅,
where N̅=N̅_∅. Restricted to Q, we have
that p maps U̅_I diffeomorphically onto an open dense
conull set in (Q/P,m̅_K∩ Q).
Suppose M_i is a simple factor of M_I^0 with ℝ-rank
at least 2, which is fixed in what follows. We need to show that
M_i preserves the measure λ. Take
ϱ⊂ I_i to be a nonempty proper subset of I_i.
Take s∈ D_ϱ, where D_ϱ is defined as in (<ref>):
D_ϱ={ exp(s): s∈⋂_α∈ϱker(α), α(s)<0 for all α∈Δ∖ϱ} .
Then U̅_I admits a semi-direct product
decomposition as U̅_I=U̅_ϱ⋉V̅_ϱ,I,
where U̅_ϱ=N̅∩ P_ϱ and V̅_ϱ,I=N̅_ϱ∩ L_I.
Note that since ∅≠ϱ⊊ I_i, U̅_ϱ
is a nontrivial subgroup of M_i.
For G=SL(5,ℝ), with I={1,3,4} and ϱ={3}, we have
U̅_I=([ 1 0 0 0 0; ∗ 1 0 0 0; 0 0 1 0 0; 0 0 ∗ 1 0; 0 0 ∗ ∗ 1; ]),
U̅_ϱ=([ 1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 ∗ 1 0; 0 0 0 0 1; ]),
V̅_ϱ,I=([ 1 0 0 0 0; ∗ 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 ∗ ∗ 1; ]),
N_ϱ=([ 1 ∗ ∗ ∗ ∗; 0 1 ∗ ∗ ∗; 0 0 1 0 ∗; 0 0 0 1 ∗; 0 0 0 0 1; ]),
D_ϱ={([ e^-t_1 0 0 0 0; 0 e^-t_2 0 0 0; 0 0 e^-t_3 0 0; 0 0 0 e^-t_3 0; 0 0 0 0 e^-t_4; ]), t_1>t_2>t_3>t_4 }.
We have a parametrization
ξ :U̅_ϱ×V̅_ϱ,I× X_0→ Y
(u,v,x_0)↦ uv.x_0.
By the choice that s∈ D_ϱ, (s,U̅_ϱ,V̅_ϱ,I,N_ϱ)
satisfies the conditions in Assumption <ref> for the Nevo-Zimmer
operation as reviewed in Appendix <ref>. More precisely, the following properties hold:
(i) the map
p:U̅_ϱ×V̅_ϱ,I → Q/P
(u,v) ↦ uvP
takes U̅_ϱ×V̅_ϱ,I homeomorphically
to a m̅_K∩ Q-conull set in Q/P, and moreover the pushforward
p_∗(m_U̅_ϱ× m_V̅_ϱ,I)
is in the same measure class as m̅_K∩ Q.
(ii) Int(s) acts trivially on U̅_ϱ=N̅∩ P_ϱ.
(iii) Int(s^-1) acts as a contracting automorphism
on V̅_ϱ,I=N̅_ϱ∩ L_I; Int(s)
acts as a contracting automorphism on N_ϱ.
Denote by L̃^∞(Y) the lifts of functions
in L^∞(Y,η) to U̅_ϱ×V̅_ϱ,I× X_0
via ξ. The Nevo-Zimmer operation provides a map
ℰ_ϱ,s:C(Y) →L̃^∞(Y),
f ↦ℰ_ϱ,sf,(ℰ_ϱ,sf)(u̅,v̅,·)=𝔼_λ[f̃(u̅,·)|ℱ^s],
where ℱ^s is the s-invariant sub-σ-algebra
of ℬ(X_0), f̃(u̅,·):X_0→ℝ
is defined as f̃(u̅,x_0)=f(u̅.x_0).
For more explanation of this operation we refer to <cit.>.
As discussed in Appendix <ref>, there are three
possible situations for ℰ_ϱ,s.
(I) The subgroup U̅_ϱ preserves the measure λ.
This is exactly what we are aiming to prove.
(II1) There exist f∈ C(Y) and u̅∈U̅_ϱ such that ∫ f dλ≠∫ f d(u̅.λ); and for a.e.
u̅'∈U̅_ϱ, the function x_0↦ℰ_ϱ,sf(u̅',x_0)
is λ-constant. In this case by <cit.>,
(Y,η) has a nontrivial homogeneous factor of the form (Q/Q_1,η̅),
which can be taken as the Mackey realization of ℒ̃(Y)∩ℒ̃(Q/P).
(II2) The negation of (I)∨(II1); see the next
subsection.
§.§ The Gauss map argument and Case (II2)
In this subsection we assume that for some ∅≠ϱ⊆ I_i
and s∈ D_ϱ, the operation ℰ_ϱ,s
is in Case (II2). Following <cit.>, take X_0'
to be the Mackey realization of the N_ϱ-invariant sub-σ-field
of ℬ(X_0), equipped with the measure λ' from
the restriction of λ to ℬ_N_ϱ(X_0).
We have now a Q-system (Y',η') that is the largest
common Q-factor of Y and Q×_PX_0',
Q×_PX_0 →^ξ_0 Y
↓                ↓
Q×_PX_0' →  Y'.
By construction, we have the following properties.
Assume that ℰ_ϱ,s is in Case (II2). The Q-system
(Y',η') as above satisfies:
(a) The unipotent radical N_I of Q acts trivially on
(Y',η').
(b) The measure λ', viewed as a measure on Y', is
not preserved by U̅_ϱ.
To see (a), note that since ϱ⊂ I, we have that N_I<N_ϱ.
By construction N_ϱ acts trivially on X_0', thus its
subgroup N_I acts trivially on X_0'. Since N_I is contained
in P and it is a normal subgroup of Q, we have that for v∈ N_I,
g∈ Q, vgP=g(g^-1vg)P=gP, that is, N_I acts trivially
on Q/P. Then in the cocycle β:Q× Q/P→ P, for v∈ N_I,
β(v,gP)=θ(vgP)^-1vθ(gP)=θ(gP)^-1vθ(gP)∈ N_I.
That is, β(N_I× Q/P)⊆ N_I. We conclude that
N_I acts trivially on Q/P×_βX_0', and as a consequence
it also acts trivially on the factor (Y',η').
Part (b) is Lemma <ref>.
We now recall some facts about Gauss maps from <cit.>.
Let G be a real algebraic group. Denote by 𝔤 or Lie(G)
the Lie algebra of G, and G^0 the identity component of G.
If V is a k-dimensional subspace of 𝔤, denote
by [V] the corresponding element of Gr_k(𝔤),
where Gr_k(𝔤) is the Grassmannian of k-planes
in 𝔤. Let Gr(𝔤)=∪_k=1^ dim𝔤 Gr_k(𝔤).
The group G acts on Gr(𝔤) by adjoint action,
and we write g.[V]=[ Ad(g).V].
Following the reasoning in <cit.>, for the action
Q↷(Y',η'), consider the stabilizer map
ψ:Y' → Sub(Q),
y' ↦ Stab_Q(y'),
and the associated Gauss map
dψ:Y' → Gr(𝔮),
y' ↦[ Lie( Stab_Q(y'))].
The map dψ is a Q-equivariant Borel map.
Since the parabolic subgroup Q is a connected real algebraic group,
its action on Gr( Lie(Q)) is algebraic. Therefore
every orbit is locally closed (<cit.>)
and the measure (dψ)_∗η' is supported on
a single orbit (<cit.>). Let y_0∈ Y'
be a point such that (dψ)_∗η' is supported
on the orbit of [ Lie( Stab_Q(y_0))].
Since Y' is a Q-factor of Q×_PX_0', every Q-orbit
in Y' meets X_0'. Replacing
y_0 by another point on Q.y_0∩ X_0' if necessary, we
may assume that y_0∈ X_0'. By Lemma <ref> (b), U̅_ϱ is not
contained in Stab_Q(y_0). On the other hand, Stab_Q(y_0)
contains N_ϱ, which is positive dimensional.
Recall that we are given a higher-rank factor M_i of L_I
as in the statement of Theorem <ref>.
Assume that ℰ_ϱ,s is in Case (II2). Write H= Stab_Q(y_0),
where y_0∈ X_0' is as above. Then we have Q-factor maps
(Y',η')→ Q/N_Q(H)→ M_i/N_M_i(H∩ M_i),
where Q acts on M_i/N_M_i(H∩ M_i) through the factor
M_i. The normalizer N_M_i(H∩ M_i) is a
proper subgroup of M_i.
Let L=L_I be the Levi subgroup of Q. By Lemma <ref>
(a), we have that Stab_Q(y)= Stab_L(y)⋊ N_I.
It follows that N_Q(H)=N_L(H∩ L)⋊ N_I. Therefore
Q/N_Q(H) and L/N_L(H∩ L) are isomorphic.
Since M_i is a normal subgroup of L, we have that N_L(H∩ M_i)>N_L(H∩ L).
The Levi subgroup L is reductive: it can be written as a product of M_i with other almost simple factors and the split component A_I; all the latter components commute with M_i, thus are
contained in N_L(H∩ M_i). It follows that M_i acts
transitively on L/N_L(H∩ M_i), which is isomorphic to M_i/N_M_i(H∩ M_i)
by the second isomorphism theorem.
Recall that the closed subgroup H∩ M_i contains N_ϱ∩ M_i,
which is of positive dimension since ϱ⊊ I_i. Recall
also that U̅_ϱ≤ M_i. Then H∩ M_i does
not contain U̅_ϱ by Lemma <ref> (a). Thus
the Lie algebra of H∩ M_i cannot be {0} or 𝔪_i.
Since M_i is almost simple, it follows that H∩ M_i is
not normal in M_i.
Lemma <ref> allows us to apply <cit.>
to deduce that in Case (II2), (Y',η') admits a nontrivial projective
factor.
In Case (II2), (Y',η') admits a nontrivial projective factor
(Q/Q_1,η̅), where Q_1 is a parabolic
subgroup of G and Q_1 is a proper subgroup of Q.
Apply <cit.> to the simple Lie group M_i
and its algebraic subgroup F=N_M_i(H∩ M_i),
where F is a proper subgroup by Lemma <ref>. We deduce
that F is contained in a proper parabolic subgroup of M_i.
Conjugate F by an element of M_i if necessary, there is a
non-empty subset J⊆ I_i such that F is contained in
the parabolic group of M_i parametrized by I_i-J. Let Q_1=P_I-J
be the corresponding parabolic subgroup of G. Then Q_1∩ M_i=P_I_i-J,
Q/Q_1 is isomorphic to M_i/P_I_i-J, thus a factor of
M_i/F. The statement then follows from Lemma <ref>.
§.§ Concluding that M_i preserves the measure λ
Suppose (G/Q,ν_Q) is a maximal projective factor
of (X,ν), where Q=P_I is a proper parabolic subgroup
of G. Let λ=ψ_μ(ν). As explained in Subsection
<ref>, we may replace μ by μ'=m_K∗μ
and ν by m_K∗λ: since ν and m_K∗λ
are in the same measure class, we have that (G/Q,m̅_K) is
the maximal projective factor of (X,m_K∗λ).
Then (X,m_K∗λ) is induced from the Q-system
(Y,η) specified in Proposition <ref>.
Suppose M_i is a simple factor of the Levi subgroup L_I
with ℝ-rank ≥2. Denote by I_i
the set of simple roots corresponding to M_i. Take ϱ⊂ I_i
to be a nonempty proper subset of I_i and take s∈ D_ϱ, where D_ϱ is specified in (<ref>). Apply the
Nevo-Zimmer operation ℰ_ϱ,s with respect to (s,U̅_ϱ,V̅_ϱ,I,N_ϱ)
as explained in Subsection <ref>.
Then, in the three cases that can arise, we have the following:
* in Case (I), U̅_ϱ preserves the P-invariant measure
λ;
* in Case (II1), Q↷(Y,η) admits a nontrivial projective
factor by <cit.>;
* in Case (II2), Q↷(Y,η) admits a nontrivial projective
factor by Proposition <ref>.
Since G/Q is assumed to be a maximal projective factor of (X,m_K∗λ),
the situations in (II1) or (II2) cannot occur for (ϱ,s).
Indeed, otherwise (X,m_K∗λ) would admit a
projective factor (G/Q_1,m̅_Q_1) with Q_1
a proper subgroup of Q, contradicting the maximality assumption
on G/Q, see the proof of <cit.>. It follows
that for any ϱ such that ∅≠ϱ⫋ I_i,
U̅_ϱ preserves λ. This implies λ
is invariant under M_i, see the last paragraph of the proof of
<cit.>. Indeed, since the ℝ-rank of M_i
is at least 2, such subgroups U̅_ϱ generate the
opposite unipotent subgroup N̅_i of M_i. It follows
that N̅_i preserves the measure λ. Since M_i
is generated by N̅_i and M_i∩ P, and P preserves
the measure λ, we conclude that λ is invariant under
M_i.
§ CONSEQUENCES OF THE STRUCTURE THEOREM
§.§ Proof of Theorem <ref> and Corollary <ref>
Using the structure of parabolic subgroups, we derive Theorem <ref> and Corollary <ref>, stated in the Introduction, from Theorem <ref>.
Write I'=𝖿(I) and Q'=P_𝖿(I). Let Q'=M_I'A_I'N_I'
be the Langlands decomposition. The disconnectedness of M_I'
is controlled by that of P. More precisely, there is a finite subgroup
F<P such that M_I'=M_I'^0F, see <cit.>.
Denote by E the maximal normal compact subgroup of M_I'^0,
and M_1,…,M_ℓ' the noncompact simple factors. Then
M_I^0 is an almost direct product of E,M_1,…,M_ℓ'.
By the definition of 𝖿(I), each M_k where k∈{1,…,ℓ},
has ℝ-rank ≥2, and M_k is a factor of M_I<Q.
Theorem <ref> then implies that M_k preserves
the measure λ.
Next note that from the decomposition of Lie algebras, see <cit.>,
the compact factor E of M_I'^0 is contained in M=Z_K(𝔰)<P.
In the decomposition Q'=M_I'^0FA_I'N_I', we have EFA_I'N_I'<P, which thus preserves the measure λ; and the noncompact factors M_1,…,M_ℓ' preserve λ by Theorem <ref>.
We conclude that Q' preserves the measure λ.
In the case that L_I has no rank-1 non-compact factors, we
have 𝖿(I)=I, thus the statement of Corollary <ref>
follows:
Under the assumptions, I'=𝖿(I)=I, thus by Theorem <ref>,
Q preserves the P-invariant measure. Recall from Proposition <ref> that (X,ν)
fits into the G-factors sequence
(G/Q×_βY,ν_Q×η)→(X,ν)→(G/Q,ν_Q),
and η=m̅_K∩ Q∗λ. Since λ is invariant
under Q, it follows that η=λ. Thus (G/Q×_βY,ν_Q×η)=(G/Q×_βY,ν_Q×λ)
is a measure-preserving extension of (G/Q,ν_Q).
§.§ Proof of Theorem <ref>: constraints on Furstenberg entropy
spectrum
Theorem <ref> implies the following bound on Furstenberg entropy.
Suppose (G/P_I,ν̅_0) is the maximal standard
projective factor of (X,ν). Then
h(G/P_I,ν̅_0)≤ h(X,ν)≤ h(G/P_𝖿(I),ν̅_0).
Since it is assumed that (G/P_I,ν̅_0) is
a G-factor of (X,ν), the first inequality h(G/P_I,ν̅_0)≤ h(X,ν)
follows. For the second inequality, by the Furstenberg isomorphism,
we have that ν=ν_0∗λ, where λ is P-invariant.
By Theorem <ref>, the parabolic subgroup P_𝖿(I)
preserves the measure λ. Then we may express ν=ν_0∗λ
as
ν=∫_G/Pg.λ dν_0(g)=∫_G/P_𝖿(I)g.λ dν̅_0(g).
The proof of <cit.> applied to P_𝖿(I)
instead of P shows that (X,ν) can be viewed as
a system induced from the measure-preserving P_𝖿(I)-system
(X_0,λ), with a G-factor map
(G×_P_𝖿(I)X_0,ν̅_0×λ)→(X,ν).
Since (G×_P_𝖿(I)X_0,ν̅_0×λ)
is a measure-preserving extension of (G/P_𝖿(I),ν̅_0),
we have that
h(G/P_𝖿(I),ν̅_0)=h(G×_P_𝖿(I)X_0,ν̅_0×λ)≥ h(X,ν).
Theorem <ref> follows directly from Corollary <ref>.
§.§ Lattices equipped with Furstenberg measures
In this subsection, we apply the induction procedure for stationary
systems in <cit.> to deduce statements on a lattice Γ<G,
equipped with a Furstenberg measure μ_0 as described in Definition <ref>.
Indeed in what follows, it is sufficient to assume that (G/P,ν_P)
is the Poisson boundary of (Γ,μ_0), and ν_P is in the G-quasi-invariant measure class.
Consider an ergodic (Γ,μ_0)-space (X,ν).
Since (G/P,ν_P)
is the Poisson boundary of (Γ,μ_0), we have
the Γ-equivariant boundary map β_ν:G/P→𝒫(X)
associated with the stationary measure ν on X. Define for
g∈ G, the measure ϕ_g∈𝒫(X) as the barycenter
of (β_ν)_∗(gν_P). Note
that when restricted to Γ, γ↦ϕ_γ is
equivariant as ϕ_γ=γ.ν.
Fix a measurable section τ:G/Γ→ G and denote by c:G× G/Γ→Γ
the corresponding cocycle. Denote by X̃=G/Γ×_cX
the induced space. For a function f∈ L^∞(G/Γ×_cX,m_G/Γ×ν),
write f_z(·)=f(z,·) for z∈ G/Γ, and regard
f_z∈ L^∞(X,ν). As in <cit.>, define
a measure ν̃ on X̃ as follows:
∫_X̃fdν̃ =∫_G/Γ∫_Xf_zdϕ_τ(z)dm_G/Γ(z)
=∫_G/Γ∫_G/P∫_Xf_zdβ_ν(w)dν_P(τ(z).w)dm_G/Γ(z).
The formula (<ref>) implies that ν̃ admits
a decomposition as in the Furstenberg isomorphism, ν̃=ν_P∗λ̃,
where λ̃ is P-invariant. Indeed, by <cit.>,
the map β̃_ν̃:G/P→𝒫(X̃),
which is related to the Γ-boundary map β_ν:G/P→𝒫(X)
by
∫_X̃fdβ̃_ν̃(w)=∫_G/Γ∫_Xf_zdβ_ν(τ(z)^-1.w)dm_G/Γ(z),w∈ G/P,f∈ L^∞(X̃),
is G-equivariant; and the measure ν̃ defined in (<ref>)
is the barycenter of (β̃_ν̃)_∗(ν_P).
Write λ̃=β̃_ν̃(P),
then λ̃ is P-invariant and satisfies ν̃=ν_P∗λ̃.
An important property of the induction procedure, shown in <cit.> is the following.
(G/Q,ν_Q)
is a Γ-factor of (X,ν) if and only if it is
a G-factor of (X̃,ν̃).
Here we consider G-factors up to measure classes only. As ν_P and m̅_K are in the same measure class, the measure ν̃_1=m̅_K∗λ̃ is in the same measure class as ν̃, thus (X̃,ν̃_1) is a μ-stationary G-space for any admissible K-invariant probability measure μ on G.
As (X̃,ν̃_1) admits a maximal projective factor for (G,μ), by <cit.>, the fact implies that (X,ν) admits the same maximal projective factor for (Γ,μ_0).
If (G/Q,ν_Q) is a Γ-factor of (X,ν), we obtain by induction that (X̃,ν̃) admits as a G-factor the space G/Γ×_c G/Q, which is a measure-preserving extension of (G/Q,ν_Q), where by assumption on μ_0, ν_Q is in the same measure class as the K-invariant probability measure on G/Q.
Conversely, if (G/Q,ν_Q) is a G-factor of X̃=G/Γ×_c X, there is a family of maps (p_z:X→ G/Q) for z ∈ G/Γ such that for every g ∈ G and for ν̃ almost every (z,x) in G/Γ×_c X, one has p_gz(c(g,z)x)=gp_z(x). By standard techniques (see <cit.> for details), one can assume that it actually holds for all z ∈ G/Γ and all x∈ X. Taking z=Γ and restricting to g=γ∈Γ, one gets a Γ-factor map p_Γ:X → G/Q, with ν_Q the unique μ_0-stationary measure.
Suppose (G/Q,ν_Q) is the maximal standard projective
Γ-factor of (X,ν), where Q=P_I is a parabolic subgroup.
By Fact <ref>, (G/Q,ν_Q) is also the maximal standard
projective G-factor of the induced G-system (X̃,ν̃).
Apply Theorem <ref> to (X̃,ν̃),
where ν̃=ν_P∗λ̃ as above, we have
that Q'=P_𝖿(I) preserves the measure λ̃.
To lighten notation, in what follows write β=β_ν
and β̃=β̃_ν̃.
We claim that the P-invariant measure λ̃ is Q'-invariant
if and only if the Γ-boundary map β:G/P→𝒫(X)
satisfies that β(hqP)=β(hP)
for all q∈ Q' and m-a.e. h∈ G. Indeed, for g∈ G,
by (<ref>) and G-invariance of m_G/Γ we have
that
∫_X̃g^-1.fdλ̃=∫_G/Γ∫_Xf_zdβ(τ(z)^-1gP)dm_G/Γ(z).
It follows that g.λ̃=λ̃ if and only if
for m_G/Γ-a.e. z, β(τ(z)^-1gP)=β(τ(z)^-1P).
By Γ-equivariance of β, this is equivalent
to that β(γ^-1τ(z)^-1gP)=β(γ^-1τ(z)^-1P)
for all γ∈Γ. Since τ:G/Γ→ G is a section,
G=τ(G/Γ)Γ, the claim is verified. We conclude that,
since Q'=P_𝖿(I) preserves the measure λ̃,
the Γ-boundary map β:G/P→𝒫(X)
factors through β̅:G/Q'→𝒫(X), where β̅(gQ')=β(gP)
is well-defined almost everywhere.
Next we derive constraints on the Furstenberg entropy spectrum of
(Γ,μ_0) in the same manner as in Subsection <ref>.
Let μ_0 be a Furstenberg measure of finite Shannon entropy
on a lattice Γ<G, where G is a connected semisimple real
Lie group with finite center. Denote by Δ simple restricted
roots of G. The Furstenberg entropy spectrum of (Γ,μ_0)
satisfies
EntSp(Γ,μ_0)⊆⋃_I⊆Δ[h_μ_0(G/P_I,ν_I),h_μ_0(G/P_𝖿(I),ν_𝖿(I))].
Suppose (X,ν) is an ergodic (Γ,μ_0)-space
and (G/Q,ν_Q) is its maximal projective Γ-factor,
Q=P_I. Then h_μ_0(X,ν)≥ h_μ_0(G/Q,ν_Q).
Let Q'=P_𝖿(I). Then by Theorem <ref>, the Γ-boundary
map θ:G/P→𝒫(X) factors through the projection
G/P→ G/Q', that is, θ(gP)=θ(gqP) for q∈ Q'.
It follows that ν is the barycenter of θ̅_∗(ν_Q').
Apply Lemma <ref> to the (Γ,μ_0)-spaces
(G/Q',ν_Q') and (X,ν), we conclude
that h_μ_0(X,ν)≤ h_μ_0(G/Q',ν_Q').
We have shown that in this case
h_μ_0(G/Q,ν_Q)≤ h_μ_0(X,ν)≤ h_μ_0(G/Q',ν_Q').
The statement follows.
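Note that, in particular, if 𝖿(I)=I for every I⊆Δ (for instance when no L_I has a rank-one noncompact factor), then each interval above degenerates to a point and the spectrum is finite:
EntSp(Γ,μ_0)⊆{ h_μ_0(G/P_I,ν_I):I⊆Δ} .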
It is worth emphasizing that in the statements above, it is crucial
that the μ_0-harmonic measure on G/P is in the quasi-invariant
measure class. For a general step distribution μ on Γ,
the μ-harmonic measure may be singular with respect to m̅_K.
In such a case one can not derive constraints on EntSp(Γ,μ)
via the inducing procedure as above.
§ POISSON BUNDLE OVER A STATIONARY SYSTEM
In this section we define the μ-Poisson bundle over a stationary
system (X,ν) and study its basic properties. Throughout, we assume
that μ is nondegenerate in the sense that suppμ generates
G as a semigroup.
§.§ Stationary joining
For a more detailed reference on stationary joining, see <cit.>.
Suppose we are in the setting of Subsection <ref>.
Denote by (G^ℕ,ℙ_μ) the random
walk trajectory space and (B,ν_B) the Poisson boundary of (G,μ).
For a (G,μ)-stationary system (X,η), denote by ω↦η_ω
the almost sure limit of ω_n.η provided by the martingale
convergence theorem.
Let (X,η) and (Y,λ) be two (G,μ)-stationary systems.
Let the group G act on the product space X× Y diagonally.
The stationary joining of the two, denoted by (X× Y,ηλ)
is the system with measure
ηλ=∫_G^ℕη_ω×λ_ωdℙ_μ(ω)=∫_Bβ_η(b)×β_λ(b)dν_B(b).
In our notation ℙ_μ denotes the law of μ-random
walk trajectories from the identity e. Then g.ℙ_μ=ℙ_μ^g
is the law of the trajectories starting from g. We use the same
notation as stationary joining for the measure on X× G^ℕ
given by
ηℙ_μ=∫_G^ℕη_ω×δ_ωdℙ_μ(ω).
When η is G-invariant, ηℙ_μ=η×ℙ_μ.
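Indeed, if η is G-invariant then ω_n.η=η for every n, so the martingale limit is η_ω=η for ℙ_μ-a.e. ω, and
ηℙ_μ=∫_G^ℕη×δ_ω dℙ_μ(ω)=η×ℙ_μ.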
On the space X× G^ℕ we have a skew transformation
T:(x,(ω_1,ω_2,…))↦(ω_1^-1.x,(ω_1^-1ω_2,ω_1^-1ω_3,…)).
The arguments of <cit.> (see also <cit.>) immediately imply that:
The transformation T preserves the measure ηℙ_μ
on X× G^ℕ. If G↷(X,η) is ergodic,
then T is an ergodic transformation on (X× G^ℕ,ηℙ_μ).
§.§ Definition of the Poisson bundle
Denote by Sub(G) the space of closed subgroups of G, equipped
with the Chabauty topology. We assume that our stationary system (X,η)
comes together with a G-equivariant measurable map L:X→ Sub(G),
denoted by x↦ L_x. For example, L_x= Stab_G(x).
The pushforward of η under this map is a μ-stationary measure
on Sub(G), often referred to as a stationary random
subgroup (in short SRS) of G. Denote by
W_Ω:={(x,(L_xω_1,L_xω_2,…)):x∈ X,(ω_1,ω_2,…)∈ G^ℕ} =⊔_x∈ X{x}×(L_x\ G)^ℕ
the space of trajectories in coset spaces. The group G acts on
W_Ω by g.(x,(L_xω_n))=(g.x,(L_g.xgω_n)).
Consider the map
ϑ :X× G^ℕ→ W_Ω
(x,ω)↦(x,(L_xω_1,L_xω_2,…)),
which is G-equivariant. Write
ℙ_μ:=ϑ_∗(ηℙ_μ)
for the pushforward of the measure ηℙ_μ.
On the space W_Ω we have a time shift operator 𝒮
defined as
𝒮(x,(L_xω_1,L_xω_2,…))=(x,(L_xω_2,L_xω_3,…)),
which commutes with the G action on W_Ω. Consider the
invariant σ-field ℐ under 𝒮, that
is,
ℐ:={ A∈ℬ(W_Ω):𝒮^-1(A)=A} .
Let (Z,λ) be the Mackey point
realisation of (the completion of) the invariant σ-field ℐ
equipped with the measure ℙ_μ. We call
(Z,λ) a (G,μ)-Poisson bundle over the stationary
system (X,η).
The Poisson bundle (Z,λ) is G-ergodic if (X,η)
is G-ergodic. Indeed, (Z,λ) is a G-factor
of the stationary joining of (X,η) and the Poisson
boundary of (G,μ). The ergodicity of the latter follows
from Fact <ref>.
Denote by θ:(W_Ω,ℙ_μ)→(Z,λ)
the factor map which induces an isomorphism of measured G-spaces
between (W_Ω,ℐ,ℙ_μ|_ℐ)
and (Z,ℬ(Z),λ), where ℙ_μ|_ℐ
denotes the restriction of the probability measure ℙ_μ
to the invariant σ-field ℐ.
In each fiber (L_x\ G)^ℕ we obtain an invariant
σ-field ℐ_x, which almost surely coincides
with the invariant σ-field of the shift operator restricted
to this fiber. Therefore the fiber over x in Z is (almost surely)
identified with the Poisson boundary ρ^-1(x)=B_L_x\ G,
i.e. the space of ergodic components of the shift operator. Up to
measure zero, Z=⊔_x∈ X{x}× B_L_x\ G.
Fiberwise, θ(x,·):(L_x\ G)^ℕ→ B_L_x\ G
can be viewed as a map which sends a trajectory on the coset space
to its image in the boundary B_L_x\ G. It plays a
role analogous to the map bnd:G^ℕ→ B.
When η is a G-invariant measure, we have ηℙ_μ=η×ℙ_μ.
Thus over a measure preserving system (X,η), the bundle (Z,λ)
is the same Poisson bundle as considered in <cit.>.
Denote by ℙ_μ=∫_Xℙ_μ,xdη(x)
the disintegration of the measure ℙ_μ over
the factor map W_Ω→ X, that is, for x∈ X, the distribution
of (L_xω_n)_n=1^∞ is ℙ_μ,x.
When η is not an invariant measure, in general the fiberwise
process (L_xω_n)_n=1^∞ is not a Markov
chain. To ensure fiberwise Markov property, we will consider the special
case of standard systems in the next subsection.
§.§ The special case of standard systems
Let (X,η) be a standard system in the sense of <cit.>,
that is, π:(X,η)→(Y,ν) a measure preserving extension
and (Y,ν) a μ-boundary. Then ν_ω is a point
mass ℙ_μ-a.s. In this case write ν_ω=δ_β_Y(ω)
where β_Y:G^ℕ→ Y factors through
the Poisson boundary of (G,μ). In the notation of the boundary
map we have β_ν( bnd(ω))=δ_β_Y(ω).
We use the same symbol β here, understanding that
for a μ-boundary (Y,ν), β_Y and β_ν
are consistent when identifying points with δ-masses.
Denote by y↦η^y the disintegration of η over π:(X,η)→(Y,ν).
A useful property is that for a standard system, disintegration measures
coincide almost surely with conditional measures:
Let (X,η) be a standard system with the structure π:(X,η)→(Y,ν).
Then
η_ω=η^β_Y(ω) for ℙ_μ-a.e. ω.
Consider disintegration ℙ_μ=∫_Yℙ_μ^ydν(y)
over the map β_Y:(G^ℕ,ℙ_μ)→(Y,ν).
Since the measurable spaces we consider are all Borel spaces, regular
conditional distributions exist. By uniqueness of disintegration,
we have that ℙ_μ^y is the conditional distribution
of ω given {β_Y(ω)=y}. It is
known that this conditional measure is the law of the Doob transformed
random walk determined by the Radon-Nikodym derivative φ_g(y)=dgν/dν(y),
see <cit.>. Explicitly, a trajectory
(ω_1,ω_2,…) with law ℙ_μ^y
is a Markov chain with transition kernel
ℙ_μ^y(g,A)=∫_G 1_A(gs)φ_gs(y)/φ_g(y)dμ(s).
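As a sanity check, the kernel above is indeed stochastic: μ-stationarity of ν gives g.ν=∫_G gs.ν dμ(s), hence the harmonicity identity
φ_g(y)=∫_Gφ_gs(y)dμ(s) for ν-a.e. y,
so that ℙ_μ^y(g,G)=∫_Gφ_gs(y)/φ_g(y)dμ(s)=1.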
Recall that ϑ_∗(ηℙ_μ)=ℙ_μ=∫_Xℙ_μ,xdη(x)
denotes the disintegration over W_Ω→ X.
When (X,η) is standard with
the structure π:(X,η)→(Y,ν), we have
ηℙ_μ=∫_Yη^y×ℙ_μ^ydν(y)=∫_Xη^π(x)×ℙ_μ^π(x)dη(x).
The coset process with law ℙ_μ,x can be
sampled as follows. First sample x∈ X according to the stationary
measure η; then take the Doob transformed random walk ω
on G conditioned on {β_Y(ω)=π(x)}.
The projected trajectory (x,(L_xω_1,L_xω_2,…))
on the coset space L_x\ G has distribution ℙ_μ,x=ϑ_∗ℙ_μ^π(x).
Take a product set A× B⊆ X× G^ℕ. By
Proposition <ref> and as ℙ_μ^y(β_Y(ω)=y)=1,
we have
ηℙ_μ(A× B) =∫_G^ℕη_ω(A)δ_ω(B)dℙ_μ(ω)=∫_Y∫_G^ℕη^β_Y(ω)(A) 1_B(ω)dℙ_μ^y(ω)dν(y)
=∫_Yη^y(A)∫_G^ℕ 1_B(ω)dℙ_μ^y(ω)dν(y)=∫_Yη^y(A)ℙ_μ^y(B)dν(y).
It follows that ℙ_μ=ϑ_∗(ηℙ_μ)
can be described as in the statement.
Note that the Doob transformed random walk on G is a Markov chain.
In order to retain the Markov property when projected to L_x\ G,
we impose the following assumption on the map x↦ L_x.
AssumpS(S) Stabilizer assumption to ensure fiberwise Markov property.
Suppose (X,η) is a standard (G,μ)-system with the structure
π:(X,η)→(Y,ν) a measure preserving extension and (Y,ν)
a μ-proximal system. We assume that L is a G-equivariant
map X→ Sub(G) such that for every x∈ X,
L_x< Stab_G(π(x)),
that is, L_x is contained in the G-stabilizer of the point
π(x)∈ Y.
This assumption is satisfied for instance when L_x= Stab_G(x):
since Y is a G-factor of X, we have L_x< Stab_G(π(x)).
In particular, assumption AssumpS(S) is always satisfied when (X,η)
is G-invariant.
Under AssumpS(S), for any x∈ X, the coset trajectory (L_xω_1,L_xω_2,…)∈(L_x\ G)^ℕ
of law ℙ_μ,x follows a Markov chain whose
transition kernel is given by
P_μ,x:(L_x\ G)×ℬ(L_x\ G)→[0,1]
P_μ,x(L_xg,A)=∫_G 1_A(L_xgs)φ_gs(π(x))/φ_g(π(x))dμ(s),
where φ_g(y)=dgν/dν(y) is the Radon-Nikodym
derivative.
The Doob transformed trajectory (ω_1,ω_2,…)∈ G^ℕ
with law ℙ_μ^π(x) is a Markov chain with transition
kernel given by (<ref>) with y=π(x). The containment
condition (<ref>) implies that the function
g↦φ_g(π(x))=dgν/dν(π(x))
is constant on the coset L_xg, for any g∈ G. Therefore the
Markov chain of the group trajectory induces the claimed Markov chain
of the coset trajectory. The proposition follows as Lemma <ref>
gives
∫_Xℙ_μ,xdη(x)=ℙ_μ=ϑ_∗(ηℙ_μ)=∫_Xϑ_∗(η^π(x)×ℙ_μ^π(x))dη(x).
On the Poisson boundary (B,ν_B) of the μ-random
walk, since g.ν_B= bnd_∗(g.ℙ_μ)
for g∈ G, the measure g.ν_B can be regarded as the harmonic
measure of the random walk starting from g. A similar property
holds in the current setting. Denote by ℙ_μ,x,g
the law of the Markov chain on L_x\ G with transition
kernel P_μ,x as in (<ref>), starting from the
coset L_xg. Recall the fiberwise boundary maps θ(x,·):(L_x\ G)^ℕ→ B_L_x\ G
from the definition of the bundle (Z,λ).
Under AssumpS(S), we have for disintegration over W_Ω→ X,
(g.ℙ_μ)_x=ℙ_μ,x,g,
and for disintegration over Z→ X,
(g.λ)_x=θ(x,·)_∗(ℙ_μ,x,g).
We first verify that the transition probabilities satisfy that for
A⊆{x}× L_x\ G⊆ W,
P_μ,x^n(L_xg^-1,A)=P_μ,g.x^n(L_g.x,g.A).
By Proposition <ref>,
P_μ,x^n(L_xg^-1,A)=∫_G 1_A(x,L_xg^-1s)φ_g^-1s(π(x))/φ_g^-1(π(x))dμ^(n)(s),
where φ_g is the Radon-Nikodym derivative dgν/dν
on (Y,ν). If (x,L_xg^-1s)∈ A then g.(x,L_xg^-1s)∈ g.A,
where g.(x,L_xg^-1s)=(g.x,gL_xg^-1s)=(g.x,L_g.xs).
Therefore 1_A(x,L_xg^-1s)= 1_g.A(g.x,L_g.xs).
Recall the general formula that dg_1g_2ν/dν(y)=dg_2ν/dg_1^-1ν(g_1^-1.y).
We have then
φ_g^-1s(π(x))/φ_g^-1(π(x))=dg^-1sν/dg^-1ν(π(x))=dsν/dν(g.π(x))=dsν/dν(π(g.x)).
Plugging back in (<ref>), we have that
P_μ,x^n(L_xg^-1,A)=∫_G 1_g.A(g.x,L_g.xs)dsν/dν(π(g.x))dμ^(n)(s)=P_μ,g.x^n(L_g.x,g.A).
It follows then from the Markov property that for A⊆{x}×(L_x\ G)^ℕ⊆ W_Ω,
we have
ℙ_μ,x,g^-1(A)=ℙ_μ,g.x,id(g.A).
Next we verify the first identity. Take a subset C⊆ W_Ω,
we have
(g.ϑ_∗(ηℙ_μ))(C) =ϑ_∗(ηℙ_μ)(g^-1.C)
=∫_Xδ_x⊗ℙ_μ,x,id(g^-1.C)dη(x)
=∫_Xδ_g.x⊗ℙ_μ,g.x,g(C)dη(x) (by (<ref>))
=∫_Xδ_x⊗ℙ_μ,x,g(C)dη(g^-1.x) (substituting x↦ g.x).
By uniqueness of disintegration, we conclude that (g.ϑ_∗(ηℙ_μ))_x=ℙ_μ,x,g.
The second identity follows from equivariance of the map θ:W_Ω→ Z.
We now explain another description of the Poisson bundle over a standard
system (X,η). Under AssumpS(S), consider the Doob-transformed random
walk on G with law ℙ_μ^π(x). Recall that (Y,ν)
is assumed to be a quotient of the Poisson boundary (B,ν_B)
of (G,μ) and we denote by β_Y:(B,ν_B)→(Y,ν)
the factor map. Let ν_B=∫_Yν_B^ydν(y) be the disintegration.
The shift operator 𝒮 maps (ω_1,ω_2,…)
to (ω_2,ω_3,…). Take the 𝒮-invariant
sub-σ-field ℐ^y in (G^ℕ,ℙ_μ^y).
By Proposition <ref>, we have that the fiber (β_Y^-1({y}),ν_B^y)
from the disintegration is a model for ℐ^y equipped
with conditional measure ℙ_μ^y. Note that (<ref>)
implies that L_x preserves β_Y^-1({π(x)}).
In the notation introduced above, under AssumpS(S), in the Poisson bundle
(Z,λ)→(X,η) the fiber over x∈ X can
be described as the ergodic components of L_x↷(β_Y^-1({y}),ν_B^y),
where y=π(x).
The proof is the same as the statement for Poisson boundary of random
walks on Schreier graphs, see <cit.>, also
the commutative diagram on <cit.>.
§.§ Tail σ-field
In preparation for the random walk entropy formulae, we now consider
the relation between the invariant and tail σ-fields. We denote
the bundle of coset spaces by
W:={(x,L_xω):x∈ X,ω∈ G} =⊔_x∈ X{x}× L_x\ G.
On the space (X× G^ℕ,ηℙ_μ),
we have a sequence of random variables
ξ_n :X× G^ℕ→ W
(x,ω)↦(x,L_xω_n).
In other terms, ξ_n=θ_n∘ϑ, where θ_n
is taking time n position of a coset trajectory in a fiber of W_Ω→ X.
See commutative diagram in Figure <ref>.
Let 𝒯 be the tail σ-field of (ξ_n)_n=1^∞:
𝒯:=∩_n=1^∞σ(ξ_n,ξ_n+1,ξ_n+2,…).
It is clear from the definitions that ℐ⊆𝒯.
The restriction of the map ξ_n defined above to the fiber over
a given x∈ X is the map
ξ_n(x,·):G^ℕ→ L_x\ G.
We write ξ_n^x for the random variable taking values in L_x\ G
with law ξ_n(x,·)_∗ℙ_μ^π(x)
by Lemma <ref>. (With a slight abuse, we identify L_x\ G
with {x}× L_x\ G.) By Proposition <ref>,
(ξ_n^x)_n=0^∞ is the Markov chain with
transition probabilities P_μ,x starting at the identity coset.
Let 𝒯_x be the tail σ-field of this Markov
chain. It is clear that 𝒯_x almost surely coincides
with the restriction of the tail σ-field 𝒯 to
the fiber over x.
In order to relate the Furstenberg entropy of the Poisson bundle to
the entropy of the random walk, we need to identify the invariant
and tail σ-fields, up to null sets. For general Markov chains,
the tail σ-field and the invariant σ-field do not
necessarily agree modulo null sets, see <cit.>. To
ensure that fiberwise ℐ_x≡𝒯_x modulo
null sets, we assume
AssumpT(T) Tail assumption. The random walk step distribution μ
on G satisfies that there exists n∈ℕ and ϵ>0
such that
d_ TV(μ^(n),μ^(n+1))<1-ϵ.
This assumption is satisfied for example by admissible measures on
a locally compact group, and for μ on a countable group with
μ(e)>0. As for any (G,μ)-space we have h_μ^(n)(X,η)=nh_μ(X,η),
we will assume without further mention that this tail assumption is
satisfied when we are concerned with the Furstenberg entropy realization
problem.
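To spell out the countable case mentioned above: if c=μ(e)>0, then μ^(n+1)(g)=∑_s∈ Gμ^(n)(gs^-1)μ(s)≥ cμ^(n)(g) for every g∈ G, hence
d_ TV(μ^(n),μ^(n+1))=∑_g∈ G(μ^(n)(g)-μ^(n+1)(g))_+≤(1-c)∑_g∈ Gμ^(n)(g)=1-c,
so AssumpT(T) holds with n=1 and any 0<ϵ<μ(e).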
By <cit.>, assumption AssumpT(T) is sufficient
to guarantee ℐ_x≡𝒯_x modulo null sets:
Suppose the step distribution
μ on G satisfies assumption AssumpT(T). Under AssumpS(S), for any x∈ X,
the fiberwise invariant and tail σ-fields are equal up to
null sets : ℐ_x≡𝒯_x. A fortiori, the
invariant and tail σ-fields agree ℐ≡𝒯
modulo null sets.
By Proposition <ref>, the fiberwise process (ξ_n^x)
is a Markov chain. Moreover, the formula for the Markov kernel implies
that
d_ TV(P_μ,x^n(L_xg,·),P_μ,x^n+1(L_xg,·))≤ d_ TV(μ^(n),μ^(n+1)).
Apply <cit.> to (ξ_n^x),
we conclude that under assumption AssumpT(T), ℐ_x≡𝒯_x
modulo null sets.
§.§ Formulae for Furstenberg entropy
Consider a Poisson bundle (Z,λ) over a standard system (X,η)
satisfying assumption AssumpS(S), together with the fiberwise Markov chain
(ξ_n^x) and the tail σ-field 𝒯_x,
as defined above.
We denote the conditional mutual information of ξ_n^x and
𝒯_x by
I(ξ_1,𝒯|X,η) :=∫_XI(ξ_1^x,𝒯_x)dη(x).
Similarly, when G is countable, we denote the conditional Shannon
entropy by
H(ξ_n|X,η)=∫_XH(ξ_n^x)dη(x).
We refer to Appendix <ref> for definitions and basic
properties of mutual information and entropy.
The next proposition shows that for a Poisson bundle (Z,λ)
over a standard stationary system (X,η), its Furstenberg entropy
can be expressed as the sum of the Furstenberg entropy of the base
(X,η) and mutual information from fiberwise Markov chains. It
will be useful for showing upper-semi continuity properties.
Let (Z,λ) be
the Poisson bundle over a standard system (X,η) satisfying AssumpS(S).
Then
h_μ(Z,λ)=h_μ(Y,ν)+I(ξ_1,𝒯|X,η).
The proof of this proposition, which follows classical arguments of
Derriennic <cit.> is given for completeness at the end
of Appendix <ref>.
In the case where (X,η)=(Y,ν) is a μ-boundary together
with L:Y→ Sub(G) the trivial map that L_y={id} for
all y, then the Poisson bundle over (Y,ν) is the Poisson boundary
(B,ν_B) of the μ-random walk. Proposition <ref>
recovers the known formula:
h(B,ν_B)=h(Y,ν)+∫_Y I(ℙ_μ,1^y,𝒯_y)dν(y)=h(Y,ν)+∫_Yinf_n∈ℕ I(ℙ_μ,1^y,ℙ_μ,n^y)dν(y).
which follows from combining <cit.>
and <cit.>.
For countable groups, we have the following formulae for Furstenberg
entropy of the Poisson bundle in terms of Shannon entropy of the fiberwise
random walks. This can be viewed as a generalization of the formula
for Poisson bundle over an IRS in <cit.>.
[Random walk entropy formula] Assume
G is a countable group endowed with a probability measure μ
of finite Shannon entropy. Let (X,η) be a standard system over
(Y,ν), with x↦ L_x satisfying AssumpS(S). Then the Poisson
bundle (Z,λ) over (X,η) satisfies
h_μ(Z,λ) =h_μ(Y,ν)+lim_n→∞∫_X(H(ξ_n^x)-H(ξ_n-1^x))dη(x)
=h_μ(Y,ν)+lim_n→∞1/nH(ξ_n|X,η).
In both lines lim can be replaced by inf_n∈ℕ.
A proof of Theorem <ref> is provided in Appendix <ref>.
§ UPPER SEMI-CONTINUITY OF FURSTENBERG ENTROPY
In this section, we consider a family of standard stationary (G,μ)-systems
(X,η_p), for p∈[0,1], over the same μ-boundary (Y,ν),
independent of p. As in Section <ref>, we denote
by (Z_p,λ_p) the associated Poisson bundles, where λ_p=ϑ(η_pℙ_μ)
are given by the diagram in Subsection <ref>. Let
L:X→ Sub(G) be a G-equivariant map satisfying the assumption
AssumpS(S).
Our goal is to obtain upper-semi-continuity of the map p↦ h_μ(Z_p,λ_p).
We prove it under two further assumptions on the family of measures
(η_p).
§.§ Two assumptions on a path of systems
We equip Sub(G) with the Chabauty topology and Prob( Sub(G))
with the weak^∗ topology.
AssumpC(C) Fiberwise continuity assumption. Let η_p=∫_Yη_p^ydν(y)
denote the disintegration over (Y,ν). We assume that for ν-a.e.
y∈ Y, the union of supports ∪_p∈[0,1]supp(L_∗η_p^y)
is included in a closed subset S_y⊂ Sub(G) and the
map p↦ L_∗η_p^y is continuous with respect to
the weak^∗ topology on Prob(S_y).
For Poisson bundles over an IRS, the space Y is a point. In this
case, the fiberwise continuity assumption AssumpC(C) is simply continuity
of the map p↦ L_∗η_p.
Our second assumption, slightly technical, is designed to obtain continuity
of the maps p↦I(ξ_1,ξ_n|X,η_p). It
may be that this map is automatically continuous under AssumpC(C) (this
is the case for G discrete), which would make Lemma <ref>
below unnecessary.
A subset S⊂ Sub(G) has the property
of local coincidence in Chabauty topology if a sequence (H_k)_k∈ℕ
of subgroups in S converges to H̅∈ S in Chabauty topology
if and only if for any exhaustion (K_n)_n=1^∞ of G
by compact subsets, we have
sup{ n:H_k∩ K_n=H̅∩ K_n}⟶∞ as k→∞.
This property means that two subgroups H_1,H_2 in S are
close in Chabauty topology if and only if they coincide on large subsets
H_1∩ K_n=H_2∩ K_n.
Clearly Sub(G) has this property when G is discrete. It
also holds when S is the collection of subgroups of a fixed discrete
subgroup of G. Some non-discrete examples appear in Subsection
<ref>. The space of one-dimensional subgroups
of ℝ^2 does not have local coincidence property.
AssumpL(L) Local coincidence assumption. Under AssumpC(C), we assume that for
ν-a.e. y∈ Y, the set S_y has the local coincidence
property.
Assumption AssumpL(L) is empty when the group G is discrete.
Let G be a locally compact group
with a probability measure μ of compact support and finite boundary
entropy. Consider a path of systems (X,η_p) as above satisfying
AssumpS(S), AssumpC(C) and AssumpL(L). Then the map p↦I(ξ_1,ξ_n|X,η_p)
is continuous.
Let us denote ξ_n^L the image in L\ G of the
time n position of the Doob transformed random walk of law ℙ_μ^y.
Let K denote the support of the measure μ; then the time-n
position belongs to K^n. By the local coincidence assumption AssumpL(L),
if two subgroups L,L'∈ S_y are close enough in Chabauty topology,
then the two random variables (ξ_1^L,ξ_n^L) and (ξ_1^L',ξ_n^L')
have the same law, so I(ξ_1^L,ξ_n^L)=I(ξ_1^L',ξ_n^L').
It implies that the map L↦I(ξ_1^L,ξ_n^L)
is continuous on S_y. By compactness of S_y, the map P(S_y)→ℝ
given by κ↦∫_S_y I(ξ_1^L,ξ_n^L)dκ(L)
is weak^∗ continuous. Composing with the map p↦ L_∗η_p^y,
the fiberwise continuity assumption AssumpC(C) gives continuity of p↦∫_S_yI(ξ_1^x,ξ_n^x)dη_p^y(x)
for ν-a.e. y. Now by disintegration
I(ξ_1,ξ_n|X,η_p)=∫_XI(ξ_1^x,ξ_n^x)dη_p(x)=∫_Y∫_S_yI(ξ_1^x,ξ_n^x)dη_p^y(x)dν(y)
so the result follows as the convergence is dominated by I(ℙ_μ,1^y,ℙ_μ,n^y)
which belongs to L^1(Y,ν) by Lemma <ref>
in Appendix <ref>.
§.§ For locally compact groups
We assume here that G is a locally compact group, endowed with
a probability measure μ of compact support and finite boundary
entropy h_μ(B,ν_B)<∞. The following restrictive setting
will be sufficient for our construction later.
Let G be a locally compact
group with a probability measure μ of compact support and finite
boundary entropy. Consider a path (X,η_p)_p∈[0,1] of standard
stationary (G,μ)-systems over (Y,ν), satisfying AssumpS(S), AssumpC(C)
and AssumpL(L). Then the map p↦ h_μ(Z_p,λ_p) is upper
semi-continuous.
We use Proposition <ref>. By Lemma <ref>,
mutual information satisfies
I(ξ_1,𝒯|X,η_p)=inf_n∫_XI(ξ_1^x,ξ_n^x)dη_p(x)=inf_nI(ξ_1,ξ_n|X,η_p).
As an infimum of continuous functions is upper semi-continuous, Lemma <ref>
gives the corollary.
§.§ For countable groups
In the countable setting, we show upper semi-continuity of p↦ h_μ(Z_p,λ_p)
for the more general class of step distributions with finite Shannon
entropy.
Assume G is a countable discrete
group endowed with a probability measure μ of finite Shannon
entropy. Let (X,η_p) be a path of standard systems satisfying
AssumpS(S) and AssumpC(C). Then the map p↦ h_μ(Z_p,λ_p) is
upper semi-continuous.
When μ has finite support, the statement is also covered by Corollary
<ref>. We first record the following lemma.
For each n≥1, the map p↦ H(ξ_n|X,η_p)
is continuous.
The proof is in two steps. First approximate by measures with finite
support, then show continuity in this case. The second step is similar
to Lemma <ref>.
Let ℙ_μ,n^y denote the law of step n of the Doob
transformed random walk started at identity. Then μ^(n)=∫_Yℙ_μ,n^ydν(y).
By concavity of entropy ∫_YH(ℙ_μ,n^y)dν(y)≤ H(μ^(n)),
so ℙ_μ,n^y has finite entropy for ν-a.e. y.
Given a subset K⊂ G, an arbitrary probability measure ζ∈ P(G)
can be decomposed as ζ=ζ(K)ζ_|K+ζ(K^c)ζ_|K^c
where K^c denotes the complement of K in G, and H(ζ_|K_j)→ H(ζ)
for any exhaustion (K_j) of G. Moreover for L∈ Sub(G),
we have
|H(θ_L∗ζ_|K)-H(θ_L∗ζ)|≤|H(ζ_|K)-H(ζ)|
where θ_L:G→ L\ G denotes the quotient map. It
follows that for a given ε>0, we can find a large enough
finite set K and Y_1⊂ Y such that
∀ y∈ Y_1, |H(ℙ_μ,n|K^y)-H(ℙ_μ,n^y)|≤ε H(ℙ_μ,n^y) and ∫_Y∖ Y_1H(ℙ_μ,n^y)dν(y)≤ε H(μ^(n))
The above inequalities show that the map
p↦ H(ξ_n|X,η_p)=∫_XH(ξ_n^x)dη_p(x)=∫_Y∫_π^-1(y)H(θ_L_x∗ℙ_μ,n^y)dη_p^y(x)dν(y)
is the uniform limit of a sequence of maps of the form
p↦ H_n,K(p):=∫_Y∫_π^-1(y)H(θ_L_x∗ℙ_μ,n|K^y)dη_p^y(x)dν(y).
It remains to show that the maps H_n,K(p) are continuous.
Observe that the map L↦ H(θ_L∗ℙ_μ,n|K^y)
is continuous. Indeed, if L,L'∈ Sub(G) are close enough
in Chabauty topology, their coset partitions have the same intersection
with K, and so θ_L∗ℙ_μ,n|K^y=θ_L'∗ℙ_μ,n|K^y.
Then for any y∈ Y, the map P( Sub(G))→ℝ given by κ↦∫_ Sub(G)H(θ_L∗ℙ_μ,n|K^y)dκ(L)
is continuous in the weak^∗ topology. We compose with p↦ L_∗η_p^y,
which is continuous for ν-a.e. y by AssumpC(C), and get continuity
of p↦∫_π^-1(y)H(θ_L_x∗ℙ_μ,n|K^y)dη_p^y(x).
As these maps are dominated by H(ℙ_μ,n^y) in L^1(Y,ν),
we conclude that H_n,K(p) is continuous for each K.
By Theorem <ref>, we have h_μ(Z_p,λ_p)=h_μ(X,η_p)+inf_n1/nH(ξ_n|X,η_p).
By Lemma <ref>, 1/nH(ξ_n|X,η_p)
is a continuous function of p. We conclude as an infimum of continuous
functions is upper semi-continuous.
§ TOOLS FOR IDENTIFICATION OF POISSON BUNDLES
Throughout this section assume that G is a discrete countable group
and we are under assumption AssumpS(S) as in Subsection <ref>.
The goal is to show that the entropy criteria for identification of
Poisson boundaries, originally due to Kaimanovich <cit.>,
can be adapted to the current setting. Identification of Poisson bundles
is the starting point of the lower semicontinuity argument for Furstenberg
entropy in later sections.
Suppose we have a system, denoted by (M,λ̅),
which is a G-factor of the Poisson bundle (Z,λ)
that fits into the sequence of G-factors
(X× B,ην_B)→(Z,λ)→(M,λ̅)→(X,η),
where the composition (X× B,ην_B)→(X,η)
is the coordinate projection X× B→ X. Since the Poisson
bundle (Z,λ) is a proximal extension of (X,η),
it is a proximal extension of (M,λ̅) as well,
by <cit.>. By <cit.>,
it follows that (Z,λ) is G-measurable isomorphic to (M,λ̅)
if and only if h_μ(Z,λ)=h_μ(M,λ̅).
Under AssumpS(S), over x∈ X, we have that the coset random walk (L_xω_n)
is the projection of the Doob transformed random walk ℙ_μ^π(x)
to the coset space L_x\ G. Since (M,λ̅)
fits into (<ref>), the fiber of M over a point x∈ X
is covered by the Poisson boundary of the coset random walk (L_xω_n).
Denote by θ_M:W_Ω→ M the lift of the map Z→ M,
where W_Ω is the space of coset trajectories defined in
subsection <ref>. In this setting we have that
in the disintegration of ℙ_μ over θ_M,
the fiber measure (ℙ_μ)_(x,ζ),
considered as a distribution on (L_x\ G)^ℕ, is the law of a
Markov chain (L_xω_n) conditioned on θ_M(x,L_xω)=(x,ζ).
To summarize, we have:
In the setting above, the Doob transform of the coset Markov chain
(L_xω_n) conditioned on θ_M(x,L_xω)=(x,ζ)
has transition kernel
P_μ,x^ζ(L_xg,A)=∑_s∈ G 1_A(L_xgs)dgs.λ̅/dg.λ̅(x,ζ)dμ(s).
Applying Shannon's theorem, see Proposition <ref>, to the
extension (Z,λ)→(M,λ̅), we have that
The difference between Furstenberg
entropy of (Z,λ) and (M,λ̅)
is the ηℙ_μ-a.s. limit
h(Z,λ)-h(M,λ̅)=lim_n→∞-1/nlog P_μ,x^ζ,n(L_x,L_xω_n).
In particular, h(Z,λ)=h(M,λ̅)
if and only if for λ̅-a.e. (x,ζ), the Doob transformed
coset Markov chain (L_xω_n) conditioned on
θ_M(x,L_xω)=(x,ζ) has 0 asymptotic
entropy.
§.§ Strip approximation for bundles over IRS
The strip approximation criterion, due to Kaimanovich <cit.>,
is a powerful tool for identification of Poisson boundary in the presence
of some form of hyperbolicity.
Consider bilateral paths in G^ℤ. Given a step distribution
μ on G, denote by μ̌ the reflected measure μ̌(g)=μ(g^-1).
Take the product space (G^ℕ,ℙ_μ)×(G^ℕ,ℙ_μ̌)
and the map (x,x̌)↦ω∈ G^ℤ,
where ω_0=id, ω_n=x_n, ω_-n=x̌_n
for n∈ℕ. We write ℙ̃_μ for the
pushforward of ℙ_μ×ℙ_μ̌ under
this map and call (G^ℤ,ℙ̃_μ)
a bilateral path space. Denote by (B_+,ν_+) the
Poisson boundary ((B_-,ν_-) resp.) of the μ-random
walk (μ̌-random walk resp.) on G and bnd_+:(G^ℕ,ℙ_μ)→(B_+,ν_+)
the associated boundary map (bnd_- resp.). Then we have
a map from the bilateral paths to the product of the Poisson boundaries
bnd_+× bnd_- :(G^ℤ,ℙ̃_μ)→(B_+,ν_+)×(B_-,ν_-)
ω↦( bnd_+((ω_n)_n∈ℕ), bnd_-((ω_-n)_n∈ℕ)).
Bilateral path space does not fit into the general stationary joining
framework considered in Section <ref>. However
when the measure η in the base space (X,η) is G-invariant,
we may take the product space (X× G^ℤ,η×ℙ̃_μ).
As in Subsection <ref>, it admits skew
transform
T̃(x,ω)=(ω_1^-1.x,(ω_1^-1ω_n+1)_n∈ℤ).
In the same way as Fact <ref>, one can verify that if (X,η)
is an ergodic p.m.p. G-system, then T̃↷(X× G^ℤ,η×ℙ_μ)
is a p.m.p. ergodic transformation.
In this setting, the following version of strip approximation holds.
Suppose in both positive and negative time directions, we have candidates
for the Poisson bundle that fit into
(X× B_±,η×ν_±)→(Z,λ_±)→(M_±,λ̅_±)→(X,η)
respectively. Denote by
(X× G^ℤ,η×ℙ_μ) →(M_±,λ̅_±)
(x,ω) ↦(x,ζ_±(x,ω))
the maps factorising the above. Further assume that G is equipped
with a distance d. Denote by |g|=d(id_G,g) and B_L\ G(R)
the ball of radius R centred at L in the coset space L\ G
with induced distance. For example, when G is finitely generated,
these can be word distances in the group and Schreier graphs.
Let G be a countable group with μ
of finite entropy. Suppose (X,η) is G-invariant. Assume that
we have a measurable assignment of strips
S(x,ω)=S(x,ζ_+(x,ω),ζ_-(x,ω))⊆ L_x\ G
that satisfies
(i) compatibility: L_xω_1∈ S(x,ω) if and only
if ω_1^-1L_xω_1∈ S(T̃(x,ω)),
(ii) positive probability of containing the root: (η×ℙ_μ)(L_x∈ S(x,ω))>0,
(iii) subexponential size: for any ϵ>0 and η×ℙ_μ-a.e.
(x,ω),
lim sup1/nlog|S(x,ω)∩ B_L_x\ G(|L_xω_n|)|≤ϵ.
Then (M_+,λ̅_+) is G-isomorphic to
the Poisson bundle (Z_+,λ_+); and (M_-,λ̅_-)
is G-isomorphic to the Poisson bundle (Z_-,λ_-).
In Theorem <ref>, we require the compatibility condition (i)
and positive probability of the event that the root is on the strip
(ii) as a replacement for G-equivariance of strips in the original
Kaimanovich strip criterion. An illustration can be found in Figure <ref>.
The proof relies on the Birkhoff ergodic theorem applied to the skew
transformation T̃ and Shannon's Theorem as in Proposition <ref>.
Denote by A the subset { (x,ω):L_x∈ S(x,ω)}
of X× G^ℤ. By (ii), A has positive probability
under η×ℙ_μ. Apply the Birkhoff
ergodic theorem to T̃↷(X× G^ℤ,η×ℙ_μ),
we have that for a.e. (x,ω), the set of times n such that
T̃^n(x,ω)∈ A has positive limiting frequency. Note
that
{T̃^n(x,ω)∈ A} ={ L_ω_n^-1.x∈ S(ω_n^-1.x,(ω_n^-1ω_m+n)_m∈ℤ)}
={ L_xω_n∈ S(x,ω)} .
It follows that for a.e. (x,ω), the set { n∈ℕ:L_xω_n∈ S(x,ω)}
has positive density.
Assume by contradiction that h(Z_+,λ_+)-h(M_+,λ̅_+)=δ>0
and take ϵ=δ/3. By Corollary <ref>,
for any p>0 there is a subset Ṽ⊂ X× G^ℤ
with (η×ℙ̃_μ)(Ṽ)≥1-p
and there is N∈ℕ such that for (x,ω)∈Ṽ
and n≥ N
P_μ,x^ζ_+(x,ω),n(L_x,L_xω_n)≤ e^-2nϵ.
Recall that as in previous sections we have the disintegration of
measure over M_+ that η×ℙ_μ=∫_M_+(η×ℙ_μ)_(x,ζ_+)dλ̅_+,
and moreover, fiberwise P_μ,x^ζ_+ is the transition
kernel of the Doob transformed random walk conditioned on ζ_+(x,ω)=(x,ζ_+).
We have then
η×ℙ_μ (L_xω_n∈ S(x,ω),(x,ω)∈Ṽ)
=η×ℙ_μ(L_xω_n∈ S(x,ω)∩ B_L_x\ G(|L_xω_n|),(x,ω)∈Ṽ)
=∫_M_+(η×ℙ_μ)_x,ζ_+(L_xω_n∈ S(x,ω)∩ B_L_x\ G(|L_xω_n|),(x,ω)∈Ṽ)dλ̅_+(x,ζ_+)
≤∫_M_+(sup_(x,ω)∈ṼP_μ,x^ζ_+(x,ω),n(L_x,L_xω_n))|S(x,ω)∩ B_L_x\ G(|L_xω_n|)|dλ̅_+(x,ζ_+)
≤ e^-nϵ,
where the last line uses the bound (<ref>) and the subexponential
size assumption (iii). By the Borel-Cantelli lemma, and as p is
arbitrary, we deduce that
η×ℙ_μ({ L_xω_n∈ S(x,ω) for infinitely many n∈ℕ})=0,
contradicting the positive limiting frequency of times spent on the
strip. The statement for (M_-,λ_-) follows from applying
the same argument to negative indices.
In the setting of general locally compact groups, one can not apply
the subadditive ergodic theorem to derive an analogue of Proposition
<ref>. The recent work of Forghani and Tiozzo <cit.>
shows a version of Shannon's theorem for random walks on locally compact
groups; and the techniques there could be adapted to our setting.
We will consider the Poisson bundle identification problem for free
groups, then inducing to SL(d,ℝ). For this reason
we do not pursue the direction to formulate results for locally compact
groups in this section.
§.§ Ray approximation criteria for Poisson bundles over standard systems
For future reference, we state a version of the ray approximation
criterion for Poisson bundles over standard systems, which is more
generally applicable than the strip criterion. Such a criterion is
originally due to Kaimanovich <cit.>. The version
stated here is adapted from the enhanced criterion of Lyons and Peres
<cit.>.
[Ray approximation <cit.>]
Let G be a countable group endowed with μ of finite entropy.
Let (M,λ̅) be a G-system that fits in (<ref>),
denote by θ_M:W_Ω→ M the factor map. Suppose for
any ϵ>0, there is a subset U⊆ M with positive
measure λ̅(U)>0 such that there is a sequence of measurable
maps
(x,ζ)↦ A_n^ϵ(x,ζ),
where (x,ζ)∈ U and A_n^ϵ(x,ζ)⊆ L_x\ G
satisfying that
(i) lim sup_n→∞ℙ_μ(∃ m≥ n: L_xω_m∈ A_n^ϵ(θ_M(x,L_xω))|θ_M(x,L_xω)∈ U)>0,
(ii) lim sup_n→∞1/nlog|A_n^ϵ(x,ζ)|≤ϵ
for all (x,ζ)∈ U.
Then (M,λ̅) is G-measurable isomorphic to the Poisson
bundle (Z,λ) over (X,η).
A proof of Theorem <ref> is provided in Subsection <ref>.
§ A SETTING FOR LOWER SEMI-CONTINUITY ARGUMENT
We now return to the general setting of bundles over stationary systems.
Suppose X and Y are locally compact metrizable spaces and π:X→ Y
is a Borel G-factor map where G is a lcsc group. For the remainder
of this section, let π:X→ Y and (Y,ν) be a
fixed μ-stationary system. In this section we do not need to
assume that (Y,ν) is a μ-boundary.
Suppose we have a (topological) bundle M over X where the fiber
over a point x∈ X is a topological space M_x. We assume:
AssumpM(M) Fiberwise measures. The space M_x is equipped with a
family of probability measures {α_x,g} _g∈ G
in the same measure class. Moreover, for every g∈ G, the Radon-Nikodym
derivative dα_x,g/dα_x,e∈ L^∞(M_x,α_x,e).
Let C_x,g<∞ be an upper bound for the L^∞-norm
of dα_x,g/dα_x,e.
Consider a path of measures on X, p↦η_p such that
π_∗(η_p)=ν and (X,η_p)
is a relative measure-preserving extension of (Y,ν) for all
p∈[0,1]. As before, let η_p=∫_Yη_p^ydν(y)
be the disintegration of η_p over Y. Similar to AssumpC(C), suppose
AssumpC'(C') Fiberwise continuity. The map p↦η_p^y is
continuous for ν-a.e. y, and for each y∈ Y,
there is a compact subset S_y⊆ X, such that S_y⊇∪_p∈[0,1] suppη_p^y.
Equip S_y with the subspace topology and Prob(S_y)
the weak^∗-topology.
Under AssumpM(M) , we equip the bundle M with a family of measures α_p=(α_g,p)_g∈ G,
where α_g,p is defined by its disintegration ∫_Xα_x,gdη_p(x)
over the map M→ X with measure η_p on X. We define
the entropy of α_p as
h_μ(M,α_p) :=h_μ(Y,ν)+∫_G∫_XD(α_x,g∥α_x,e)φ_g(π(x))dη_p(x)dμ(g)
=h_μ(Y,ν)+∫_G∫_Y∫_S_yD(α_x,g∥α_x,e)dη_p^ydgν(y)dμ(g)
where φ_g(y)=dgν/dν(y) is the Radon-Nikodym
derivative in (Y,ν). We refer to Appendix <ref>
for definition and basic properties of the KL-divergence D(α||β).
This definition of entropy is consistent:
Assume the composition of factor
maps X× Bζ→M→ X is the projection on the
first factor and let α_x,g:=(ζ_∗(gν_B))_x,
then h_μ(M,α) is the Furstenberg
entropy of (M,ζ_∗ν_B).
This follows from Proposition <ref> and Lemma <ref>
(iv), with X× Bζ→M in place of X× Bψ→Z.
Recall that by Lemma <ref>, (gλ)_x=(ψ_∗gν_B)_x.
For the rest of this section we will focus on fiberwise approximations
to the KL-divergence D(α_x,g∥α_x,e).
This will be sufficient for our purposes:
Under AssumpM(M) , AssumpC'(C') , if for ν-a.e. y∈ Y, μ-a.e. g∈ G,
the map x↦ D(α_x,g∥α_x,e)
is lower semi-continous on S_y, then p↦ h_μ(M,α_p)
is lower semi-continuous.
The lower semi-continuity assumption on D(α_x,g∥α_x,e)
implies that it can be written as an increasing limit of non-negative
continuous functions f_n on S_y. Let p_m→ p, then
fiberwise continuity AssumpC'(C') implies
∫_S_yf_n(x)dη_p^y(x)=lim_m→∞∫_S_yf_n(x)dη_p_m^y(x)≤lim inf_m→∞∫_S_yD(α_x,g∥α_x,e)dη_p_m^y(x).
The monotone convergence theorem implies ∫_S_yD(α_x,g∥α_x,e)dη_p^y(x)=lim_n→∞∫_S_yf_n(x)dη_p^y(x).
Thus p↦∫_S_yD(α_x,g∥α_x,e)dη_p^y
is lower semi-continuous. By the integral formula (<ref>),
the statement follows from Fatou's lemma.
§.§ The case of uniform fiberwise approximation
In this subsection we consider approximations of measures on M_x.
Assume:
AssumpP(P) Generating finite partitions. For each x∈ X, there is
a refining sequence of finite measurable partitions 𝒫_x,n
of M_x, n∈ℕ, such that the union ∪_n∈ℕ𝒫_x,n
generates the Borel σ-field ℬ_x of M_x.
The KL-divergence of two Borel probability measures β_x,1
and β_x,2 on M_x is then given by
D(β_x,1∥β_x,2)=sup_nH_β_x,1∥β_x,2(𝒫_x,n), where H_β_x,1∥β_x,2(𝒫_x,n)=∑_A∈𝒫_x,nβ_x,1(A)logβ_x,1(A)/β_x,2(A).
See Appendix <ref>. We show continuity of the
maps x↦ H_β_x,1∥β_x,2(𝒫_x,n)
under assumptions of approximations.
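For orientation: for a two-cell partition {A,A^c},
H_β_x,1∥β_x,2({A,A^c})=β_x,1(A)logβ_x,1(A)/β_x,2(A)+β_x,1(A^c)logβ_x,1(A^c)/β_x,2(A^c);
moreover, since the partitions 𝒫_x,n are refining, n↦ H_β_x,1∥β_x,2(𝒫_x,n) is nondecreasing, so the supremum defining D(β_x,1∥β_x,2) is in fact an increasing limit.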
Let S be a subset of X, equipped with subspace topology. We
say a collection of probability spaces (𝒫_x,n,q_x),
where q_x is a probability measure on the partition 𝒫_x,n,
is locally constant on S if for every x∈ S, there
is an open neighborhood O(x) of x in S such that for any x'∈ O(x),
the spaces (𝒫_x,n,q_x) and (𝒫_x',n,q_x')
are isomorphic.
Assume AssumpP(P) . Let (β_x)_x∈ X' be a collection
of probability measures with each β_x supported on M_x.
We say that
* this collection admits approximations on X' if for x∈ X'
and n,t∈ℕ, there is a positive measure q_x,n^t
on 𝒫_x,n such that
max_A∈𝒫_x,n|1-β_x(A)/q_x,n^t(A)|≤ε_x,n(t) with lim_t→∞ε_x,n(t)=0.
* Such approximations are uniform on X' if in addition,
lim_t→∞sup_x∈ X'ε_x,n(t)=0.
* Such approximations are locally constant if for n,t∈ℕ
the collection (𝒫_x,n,q_x,n^t) is locally constant.
Let (β_x,1)_x∈ S
and (β_x,2)_x∈ S be two collections of fiber
probability measures, where β_x,i is supported on M_x.
Suppose each collection (β_x,i)_x∈ S, i∈{1,2}, admits locally constant
uniform approximations on S; and there is a constant
C>0 such that 1/C≤‖ dβ_x,1/dβ_x,2‖ _∞≤ C
for all x∈ S. Then the following map is continuous:
S →ℝ_≥0
x ↦ H_β_x,1∥β_x,2(𝒫_x,n).
It follows that x↦ D(β_x,1∥β_x,2)
is lower semi-continuous on S.
For our applications, the subset S will be totally
disconnected and satisfy the local coincidence property AssumpL(L). The locally
constant approximation condition is natural in that context. See Proposition
<ref> for a formulation with weaker assumptions.
Let q_x,n,i^t be the approximation measures of β_x,i
on the finite partition 𝒫_x,n with the corresponding
error bound ε_x,n,i(t). Note that
max_A∈𝒫_x,nq_x,n,2^t(A)/q_x,n,1^t(A) ≤1/(1-ε_x,n,1(t))(1-ε_x,n,2(t))max_A∈𝒫_x,nβ_x,2(A)/β_x,1(A)
≤C/(1-ε_x,n,1(t))(1-ε_x,n,2(t))=:C_n,t.
Lemma <ref> implies that
H_q_x,n,2^t∥ q_x,n,1^t(𝒫_x,n)-H_β_x,2∥β_x,1(𝒫_x,n) ≤2C_n,t^1/2max_A∈𝒫_x,n|1-β_x,2(A)/q_x,n,2^t(A)|+log(max_A∈𝒫_x,nβ_x,1(A)/q_x,n,1^t(A))
≤2C_n,t^1/2ε_x,n,2(t)+ε_x,n,1(t);
and
H_β_x,2∥β_x,1(𝒫_x,n)-H_q_x,n,2^t∥ q_x,n,1^t(𝒫_x,n) ≤2C^1/2max_A∈𝒫_x,n|1-q_x,n,2^t(A)/β_x,2(A)|+log(max_A∈𝒫_x,nq_x,n,1^t(A)/β_x,1(A))
≤2C^1/2ε_x,n,2(t)/1-ε_x,n,2(t)+ε_x,n,1(t)/1-ε_x,n,1(t).
Write ϵ_n(t)=sup_x∈ S(ε_x,n,2(t)+ε_x,n,1(t));
then the uniform assumption states that ϵ_n(t)→0 as t→∞.
Therefore (<ref>) and (<ref>) show that the
sequence of continuous (actually locally constant) functions x↦ H_q_x,n,2^t∥ q_x,n,1^t(𝒫_x,n)
converges uniformly to the function x↦ H_β_x,2∥β_x,1(𝒫_x,n)
as t→∞. Thus by the uniform convergence theorem, the limit
function is continuous as well. By AssumpP(P) , the partitions 𝒫_x,n
generate the Borel σ-field ℬ_x of M_x,
we have that H_β_x,2∥β_x,1(𝒫_x,n)↗ D(β_x,1∥β_x,2)
when n→∞. It follows that x↦ D(β_x,1∥β_x,2)
is lower semi-continuous on S.
In the setting of AssumpC'(C') , AssumpM(M)
and AssumpP(P) , suppose for ν-a.e. y∈ Y, the family of measures
(α_x,g)_x∈ S_y admits locally constant
uniform approximations on S_y, then the map p↦ h_μ(M,α_p)
is lower semi-continuous.
This follows from Lemma <ref> and Proposition <ref>
for β_x,1=α_x,g and β_x,2=α_x,e.
§.§ A more general criterion with integral bounds
For completeness, we record in this subsection a relaxed version of
Corollary <ref>, which ensures lower semi-continuity
of the map p↦ h_μ(M,α_p).
Under AssumpP(P) , we assume that for each g∈ G and n∈ℕ,
there is a probability q_x,g^n defined on the partition 𝒫_x,n
that approximates α_x,g in the sense that there is a constant
ε_x(g,n)>0 such that
max_A∈𝒫_x,n|1-α_x,g(A)/q_x,g^n(A)|≤ε_x(g,n) and lim_n→∞ε_x(g,n)=0.
Recall that in AssumpM(M) , C_x,g is an upper bound for the L^∞-norm
of the Radon-Nikodym derivative dα_x,g/dα_x,e. Similar
to the bound in (<ref>), define
Δ_x,g(n):=2(C_x,g/(1-ε_x(e,n))(1-ε_x(g,n)))^1/2ε_x(g,n)+ε_x(e,n).
In the setting of AssumpC'(C'),
AssumpM(M) and AssumpP(P) , suppose in addition that for each y∈ Y,
- for all n∈ℕ, the map x↦ H_q_x,e^n∥ q_x,g^n(𝒫_x,n)
is continuous on S_y ,
- for μ×ν-a.e. (g,y), the error terms Δ_x,g(n)
defined in (<ref>) are dominated by some function ψ_g(x)
which is integrable with respect to every η∈𝔓_1.
- there is a sequence (δ_n(g,y))_n∈ℕ
that converges to 0 and
∫_S_yΔ_x,g(n)dη_p^y(x)≤δ_n(g,y) for all p∈[0,1].
Then the map p↦ h_μ(M,α_p)
is lower semi-continuous.
As in Lemma <ref>, it suffices to show that p↦∫_S_yD(α_x,g∥α_x,e)dη_p^y(x)
is lower semi-continuous. As in the proof of Proposition <ref>,
Lemma <ref> implies
D(α_x,e∥α_x,g)=sup_n∈ℕ{ H_q_x,e^n∥ q_x,g^n(𝒫_x,n)-Δ_x,g(n)} .
Write δ_n=δ_n(g,y), we have
∫_S_yD(α_x,g∥α_x,e)dη^y(x)=∫_S_ysup_n{ H_q_x,g^n∥ q_x,e^n(𝒫_x,n)-Δ_x,g(n)} dη^y(x)
≥sup_n{∫_S_yH_q_x,g^n∥ q_x,e^n(𝒫_x,n)dη^y-∫_S_yΔ_x,g(n)dη^y(x)}
≥sup_n{∫_S_yH_q_x,g^n∥ q_x,e^n(𝒫_x,n)dη^y-δ_n} .
In the other direction, (<ref>) and the dominated convergence
theorem implies that
∫_S_yD(α_x,g∥α_x,e)dη^y(x)=lim_n∫_S_yH_q_x,g^n∥ q_x,e^n(𝒫_x,n)dη^y(x).
Since δ_n→0 as n→∞, we have then
∫_S_yD(α_x,g∥α_x,e)dη^y(x)=sup_n{∫_S_yH_q_x,g^n∥ q_x,e^n(𝒫_x,n)dη^y-δ_n} .
By the continuity assumptions (C1) and (C2), the function η↦∫_S_yH_q_x,e^n∥ q_x,g^n(𝒫_x,n)dη^y-δ_n
is continuous on 𝔓_1. Then (<ref>) implies
the statement.
§ BOWEN-POISSON BUNDLE FOR FREE GROUPS
In this section, we apply the tools of the previous sections to the
Poisson bundles over IRSs of the free group considered in <cit.>.
Let F be the free group 𝐅_k on k≥2 generators.
Denote its standard generating set as S={ a_1,…,a_k}.
The Schreier graph of a subgroup H of F has vertex set H\ F
(the space of cosets) and edge set { (Hg,Hgs):g∈ F,s∈ S}.
Following a terminology of Bowen, we call a subgroup H of F,
or rather its Schreier graph H\ F, tree-like if
the only simple loops are self-loops, i.e. have length 1 – see
<cit.>. This precisely means that between any two
vertices of the Schreier graph, there is a unique path without backtrack
nor self-loop from one to the other. Algebraically, a subgroup H
is tree-like if and only if it is generated by elements of the form
gsg^-1 for g in F and s in the generating set (and we
can assume there is a bijection between such pairs (g,s) and the
loops of the Schreier graph). We denote by ∂(H\ F)
the space of ends of H\ F. Denote by Tree_F
the subset of Sub(F) which consists of H with tree-like
Schreier graphs. It is a conjugacy invariant closed subset of Sub(F).
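Two minimal examples, recorded only for orientation: the trivial subgroup {id} is tree-like, its Schreier graph being the 2k-regular Cayley tree of F (with no loops at all); and the cyclic subgroup ⟨ a_1⟩ is tree-like (it is generated by the single element a_1=ea_1e^-1), its Schreier graph being a tree carrying one self-loop labelled a_1 at the root coset.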
§.§ Quasi-transitive tree-like Schreier graphs
We first consider the situation where H_0∈ Tree_F has
a normalizer N_F(H_0) of finite index in F. The normalizer
N_F(H_0) acts from the left on the Schreier graph of H_0\ F
by automorphisms; it extends to a continuous action on the space of
ends ∂(H_0\ F). We also assume that the tree-like
Schreier graph H_0\ F has infinitely many ends.
Following <cit.>, given an integer
ℓ≥2, take the subgroup K_ℓ of 𝐅_2=⟨ a,b⟩
which is generated by all elements of the form ghg^-1, where
g∈⟨ a^ℓ,b^ℓ⟩ and h∈{ a^jba^-j,b^jab^-j:j=1,2,…,ℓ-1}.
The coset Schreier graph K_ℓ\ F is tree-like, and
the normalizer N_F(K_ℓ) is of finite index in F.
It is known by <cit.> that for a locally finite
infinite tree 𝖳, for any random walk step distribution
μ on Aut(𝖳) such that suppμ is not
contained in an amenable subgroup of Aut(𝖳), the
sequence ω_n.v, where (ω_n) is a μ-random
walk, converges to an end with probability 1. This convergence
result uses a martingale argument originally due to Furstenberg. Along
this line of reasoning, we have:
Let μ be a
non-degenerate step distribution on F and (ω_n)_n=0^∞
be a μ-random walk. Suppose H_0∈ Tree_F has a normalizer
N_F(H_0) of finite index in F and that the Schreier graph
H_0\ F has infinitely many ends. Then the coset random
walk (H_0ω_n)_n=0^∞ converges to an
end in ∂(H_0\ F) with probability
1.
Let τ_n be the n-th return time of the random walk (ω_n)_n=0^∞
to the finite index subgroup N=N_F(H_0). Denote by μ_τ
the distribution of ω_τ_1. Note that suppμ_τ
generates N. Denote by o the identity coset in H_0\ F,
we have that ω_τ_n.o=H_0ω_τ_n. Apply
the convergence theorem <cit.> to the μ_τ-random
walk (ω_τ_n)_n=0^∞ on N< Aut(H_0\ F),
we have that with probability 1, ω_τ_n.o converges
to an end. On this full measure set of ω∈ F^ℕ,
denote by λ_ω^(τ) the end where H_0ω_τ_n
converges to.
Take an infinite reduced word ξ=x_1x_2…∈∂ F
such that for any γ∈ F, in the Schreier graph H_0\ F,
the sequence (H_0γ x_1… x_n) converges
to an end in ∂(H_0\ F). Denote the
end as H_0γξ. Such infinite words exist: since simple
random walk on H_0\ F is transient and converges to
an end starting from any vertex, we have that ν_0-a.e. ξ∈∂ F
has the property required, where ν_0 is the harmonic measure
on ∂ F of simple random walk on F. Here transience follows
from the assumption that the quasi-transitive graph H_0\ F
is not quasi-isometric to ℤ. Fix a choice of such ξ.
Let ν_γ(n) be the distribution of H_0γω_nξ
on ∂(H_0\ F). By compactness and a
standard diagonal argument, there is a subsequence (n_i)
such that ν_γ(n_i) converges in the weak^∗ topology
for all γ∈ F. Denote by ν_γ the limit of ν_γ(n_i).
The limits satisfy the harmonicity condition ∑_s∈ Fν_γ sμ(s)=ν_γ,
γ∈ F. By the martingale convergence theorem, along the μ-random
walk trajectory, ν_ω_n converges to a limit measure
ν_ω in the weak^∗ topology for ℙ_μ-a.e.
ω.
Next we show that for ℙ_μ-a.e. ω, the limit
ν_ω is the point mass at the end λ_ω^(τ).
It suffices to show the subsequence (ν_ω_τ_n)
converges weakly to δ_λ_ω^(τ). Since N
normalizes H_0, we have for g∈ N,
ν_g=lim_i→∞∑_s∈ Fδ_{ H_0gsξ}μ^(n_i)(s)=lim_i→∞∑_s∈ Fδ_{ gH_0sξ}μ^(n_i)(s)=g.ν_e.
The μ-harmonicity condition satisfied by {ν_g}
then implies ν_e is μ_τ stationary. Then the measures
ν_g, g∈ N, are non-atomic, see e.g., the first paragraph
in the proof of <cit.>. Recall that for
ℙ_μ-a.e. ω, along the subsequence (ω_τ_n),
ω_τ_n.o converges to the end λ_ω^(τ).
By <cit.>, ω_τ_n.x converges
to λ_ω^(τ) for all x∈ H_0\ F∪∂(H_0\ F)
except possibly one point. Therefore ω_τ_n.ν_e
converges to the point mass at λ_ω^(τ). We conclude
that ν_ω=δ_λ_ω^(τ).
Finally, suppose that the sequence (H_0ω_n)_n=0^∞
has other accumulation points than λ_ω^(τ): there
is a subsequence (H_0ω_m_j) that converges
to a different end λ'. Since N is finite index in F,
passing to a further subsequence if necessary, we may assume that
there is a γ∈ F such that ω_m_j∈ Nγ for
all j. Then ω_m_jγ^-1.o=H_0ω_m_jγ^-1
converges to the end λ' as well. The same calculation as
in (<ref>) shows that ν_ω_m_j=(ω_m_jγ^-1).ν_γ.
Again by <cit.>, (ν_ω_m_j)
converges weakly to δ_λ'. However we have shown that
ν_ω_n converges weakly to δ_λ_ω^(τ).
Therefore λ'=λ_ω^(τ). We conclude that
there are no other accumulation points and (H_0ω_n)_n=0^∞
converges to λ_ω^(τ) for ℙ_μ-a.e.
ω.
Under the finite entropy and finite log-moment assumption on μ,
we note the following strengthening of the convergence statement in
Lemma <ref>. This property will be useful
in the lifting argument in the next subsection. Given a point x∈ H_0\ F
and an element g∈ F (viewed as a reduced word), for 0≤ℓ≤|g|,
denote by g_ℓ the length ℓ prefix of g, and [x;g]
the set { x,xg_1,xg_2,…,xg}.
In the setting of Lemma <ref>,
assume also μ has finite entropy and finite log-moment. Then
we have for any finite set B in H_0\ F,
ℙ_μ(B∩[H_0ω_n;ω_n^-1ω_n+1]≠∅ for infinitely many n)=0.
It suffices to prove it for the case where B consists of a single
point, B={x_0}. Let h be the entropy of the Poisson bundle
over conjugates of H_0. By the convergence lemma <ref>,
we know that the tail σ-field of (H_0ω_n)_n=0^∞
is nontrivial, and thus the Furstenberg entropy of the Poisson bundle
is positive: h=h_μ(Z,λ)>0. Take any 0<ϵ<h/3.
Consider the subset of vertices
A_n={ x∈ H_0\ F:-1/nlog P_μ,H_0^n(o,x)≥ h-ϵ} ,
and the event
C_n={ω:log|ω_n^-1ω_n+1|≤ nϵ,H_0ω_n∈ A_n,x_0∈[H_0ω_n;ω_n^-1ω_n+1]} .
Given an element g∈ F, for ℓ≤|g|, denote by g_ℓ
the length ℓ prefix of g. Then we have:
ℙ_μ(C_n) ≤∑_g:|g|≤ e^nϵμ(g)∑_ℓ=0^|g|ℙ_μ(H_0ω_n∈ A_n,H_0ω_ng_ℓ=x_0)
=∑_g:|g|≤ e^nϵμ(g)∑_ℓ=0^|g|ℙ_μ(H_0ω_n=x_0g_ℓ^-1)1_A_n(x_0g_ℓ^-1)
≤∑_g:|g|≤ e^nϵμ(g)∑_ℓ=0^|g|e^-n(h-ϵ) 1_A_n(x_0g_ℓ^-1)
≤ ce^nϵe^-n(h-ϵ)≤ ce^-nh/3.
By the Borel-Cantelli lemma, we have that ℙ_μ(ω∈ C_n for infinitely many n)=0.
By the Shannon theorem <ref>, ℙ_μ(H_0ω_n∉ A_n for infinitely many n)=0,
and recall that finite log-moment implies that ℙ_μ(log|ω_n^-1ω_n+1|>nϵ for infinitely many n)=0.
The statement follows from taking a union of these three events.
§.§ Identification over the covering construction
We describe the end-compactification bundle and identify it with the
Poisson bundle.
§.§.§ A bundle of end-compactifications
Denote by M the end compactification bundle over Tree_F:
the fiber over H∈ Tree_F is the space of ends ∂(H\ F).
The group F acts on M as follows. For (H,ζ)∈ M, let
ξ∈∂ F be such that the sequence Hξ_n converges
to ζ on H\ F, where ξ_n is the length n
prefix of ξ. Then F acts on M by γ.(H,ζ)=(H^γ,ζ'),
where ζ' is the end in ∂(H^γ\ F)
that (H^γγξ_n)_n=1^∞ converges
to.
The F-action on M described above is well-defined.
Suppose ξ,ξ' are two infinite reduced words such that Hξ_n
and Hξ_n' converge to the same end ζ∈∂(H\ F).
Then on the tree-like Schreier graph H\ F, the Gromov
product (Hξ_n|Hξ_n')_H⟶∞ as n→∞.
On the graph H^γ\ F, which is related to H\ F
by rerooting, we have
(H^γγξ_n|H^γγξ_n')_H^γ≥(Hξ_n|Hξ_n')_H-|γ|_F.
Thus H^γγξ_n and H^γγξ'_n converge
to the same end in ∂(H^γ\ F).
Throughout the rest of this subsection, let μ be a nondegenerate
step distribution on F of finite entropy and finite log-moment.
Let (ω_n) be a μ-random walk on F. Denote by ν
the hitting distribution on ∂ F of the μ-random walk.
Let H_0∈ Tree_F be as in Lemma <ref>
that the Schreier graph H_0\ F is a quasi-transitive
tree-like graph with infinitely many ends. Denote by
Tree_F^H_0={ H∈ Tree_F:H<H_0^γ for some γ∈ F} ,
that is, subgroups H such that, up to rerooting, the Schreier graph
H\ F is a tree-like graph that covers H_0\ F.
The property stated in Lemma <ref> naturally lifts to covering
graphs. Thus we have the following convergence to ends lemma.
Let H_0 be as in Lemma <ref>.
For any H∈ Tree_F^H_0 and any finite set K in H\ F,
ℙ_μ-almost surely K∩[Hω_n;ω_n^-1ω_n+1]≠∅
for only finitely many n. In particular, Hω_n converges
to an end in ∂(H\ F) when n→∞.
Lemma <ref> implies that there is a measurable F-invariant
ν-conull subset A⊆∂ F such that the map ζ_H:A→∂(H\ F)
is defined for all H∈ Tree_F^H_0: if ω_n
converges to ξ∈ A, then Lω_n converges to ζ_H(ξ).
Suppose ρ is an F-invariant measure supported on Tree_F^H_0.
We equip the bundle M→ Tree_F with a measure λ̅
such that the disintegration of λ̅ is
λ̅=∫_ Tree_F^H_0(ζ_H)_∗ν dρ(H).
That is, in M the fiber ∂(H\ F) over
H is equipped with the measure (ζ_H)_∗ν, which
is the hitting distribution of the random walk (Hω_n)_n=0^∞
on the ends space ∂(H\ F).
§.§.§ Identification of bundles
[Shadows] Let H∈ Tree_F,
choose the identity coset H as the base point o in H\ F.
For a vertex v∈ H\ F, denote by Shd(v) the
set of geodesic rays (finite or infinite) based at o that pass
through v. We view Shd(v) as a subset of (H\ F)∪∂(H\ F).
Denote by
℧_H(v):= Shd(v)∩∂(H\ F).
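For example, when H={id}, so that H\ F is the 2k-regular Cayley tree of F rooted at the identity, ℧_id(v) is the clopen set of infinite reduced words having the reduced word v as a prefix.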
We note the following lower bound on the hitting probabilities of
shadows, which will be used to apply the strip criterion in Proposition <ref>.
For each n∈ℕ, there
exists a constant c=c(H_0,n,μ)>0 such that for any H∈ Tree_F^H_0,
the hitting distribution satisfies
(ζ_H)_∗ν(℧_H(u))≥ c
for any u∈ H\ F within distance n to the root o=H.
First consider the Schreier graph H_0\ F. Let B be
a connected finite subset of H_0\ F, containing the
root o=H_0. We claim that for any v∈ H_0\ F such
that v∉ B,
p_v(B):=ℙ_μ(B∩[vω_t;ω_t^-1ω_t+1]=∅ for all t∈ℕ)>0.
Suppose the claim is not true for some v. The connected component
of v in H_0\ F-B is the shadow Shd(v_0) of some vertex
v_0 at distance 1 from B. Then it follows from non-degeneracy
of μ that for all x∈ Shd(v_0), with probability 1,
[xω_t;ω_t^-1ω_t+1] intersects
B for some t∈ℕ. Since the hitting distribution (ζ_H_0)_∗ν
charges the cylinder set ℧(v_0) with positive probability,
this contradicts Lemma <ref>.
Let H∈ Tree_F^H_0. Recall (<ref>) that
H<H_0^γ for some word γ in F representative
of one of the finitely many cosets of F/N_F(H_0). Assume |γ|≤ n.
On the Schreier graph H\ F, let u be a vertex within
distance n to the root o=H, and choose g∈ F a representative
such that u=Hg and |g|≤ n. Then choose an element g'∈ F
with |g'|≤2n such that on H\ F, Hgg'∈ Shd(u);
and on H_0^γ\ F, |H_0^γgg'|>n.
By non-degeneracy of μ, there is m_0∈ℕ such that
μ^(m_0) charges every element in the ball of radius 3n
around identity in F. Consider the event that in m_0 steps,
the μ-random walk on F is at gg', and after time m_0,
the induced trajectory on H_0^γ\ F never sweeps
across the ball of radius n around H_0^γ. The covering
property implies that the corresponding trajectory (Hω_t)
never sweeps across the ball of radius n around H after time
m_0; in particular, it stays in the shadow of u. As the n-ball
centred at H_0^γ in H_0^γ\ F is isometric to
the n-ball centred at H_0γ^-1 in H_0\ F,
it follows that
(ζ_H)_∗ν(℧(u))≥min_B(e_F,3n)μ^(m_0)·min{ p_v(B(H_0γ^-1,n)):v∉ B(H_0γ^-1,n), |v|≤3n} .
Since ρ is F-invariant, we are in the setting of Subsection <ref>,
with X= Tree_F^H_0 and x↦ L_x the identity
map. We apply the strip criterion to identify the Poisson bundle with
the end compactification bundle. Recall that as a measurable F-space,
(M,λ̅) fits into the sequence of F-factors
( Tree_F^H_0×∂ F,ρ×ν)ζ→(M,λ̅)→( Tree_F^H_0,ρ),
where the first map ζ sends (H,ξ) to (H,ζ_H(ξ));
and the second map is the coordinate projection (H,ζ_H(ξ))↦ H.
Let H_0 be as in
Lemma <ref> and ρ an ergodic F-invariant
probability measure on Tree_F^H_0. The Bowen-Poisson
bundle over ( Tree_F^H_0,ρ) is F-measurably
isomorphic to the end compactification bundle (M,λ̅)
defined above.
We apply the strip criterion in Theorem <ref>. Consider the
bilateral path space (F^ℤ,ℙ̃_μ).
Denote by ζ_+(H,ω) the composition ζ_H∘ bnd_+,
which is the end of H\ F that the random walk Hω_n
converges to in the positive time direction n→∞. Similarly,
denote by ζ_-(H,ω) the composition ζ_H∘ bnd_-
in the negative time direction.
Take the strip S(H,ω) to be the (unique) geodesic on the tree-like
Schreier graph H\ F connecting ζ_+(H,ω)
to ζ_-(H,ω). Since the geodesic does not depend on the
location of the root, see Figure <ref>, the choice of
strips satisfies the compatibility condition (i) in Theorem <ref>.
We now verify the positivity condition (ii). Since the graph H\ F
is tree-like, we have that for b_+∈℧(Hs) and b_-∈℧(Hs')
where s,s' are two elements of F such that Hs,Hs' are two
distinct vertices at distance 1 from H, the geodesic connecting
b_+ and b_- passes through the identity coset H. Therefore
for Hs≠ Hs', we have
(ρ×ℙ_μ)(H∈ S(H,ω)) ≥ℙ_μ(ζ_+(H,ω)∈℧(Hs),ζ_-(H,ω)∈℧(Hs'))
≥ c(H_0,1,μ)c(H_0,1,μ̌)>0,
where the positive constants c(H_0,1,μ),c(H_0,1,μ̌)
are provided by Lemma <ref>. We have verified condition
(ii).
Since μ is assumed to have finite log-moment, we have that log|ω_n|/n→0
when n→∞ for ℙ_μ-a.e. ω. Since the
strips are chosen to be geodesics, the intersection of a strip with
any ball of radius r is bounded by 2r. Condition (iii) is verified.
The statement then follows from Theorem <ref>.
§.§.§ Another interpretation of the hitting distribution
In the ends compactification bundle M, the fiberwise measure λ̅^H=(ζ_H)_∗ν
is the hitting distribution of the random walk (Hω_n)
on ∂(H\ F). We have the diagram
(F^ℕ,ℙ_μ)  ---bnd--->  (∂ F,ν)
        |                          | ζ_H
        v                          v
((H\ F)^ℕ,ℙ_μ)  ------->  (∂(H\ F),λ̅^H).
In the diagram above, a point ξ∈∂ F is viewed as an
end where the random walk (ω_n) converges to.
For later use in constructions for SL(d,ℝ), here we consider
another way of interpreting the map ζ_H:(∂ F,ν)→(∂(H\ F),λ̅^H).
A point in ∂ F is represented uniquely as an infinite reduced
word in the alphabet { a^±1,b^±1}. Denote
by ξ_n the length n prefix of the word ξ. We view ξ_n
as an element in F. Then the point ξ∈∂ F induces
a sequence of points (Hξ_n), which form a nearest
neighbor path on the Schreier graph H\ F.
In the setting of Lemma <ref>, for ν-a.e. ξ∈∂ F,
the nearest neighbor sequence (Hξ_n) converges to
the end ζ_H(ξ)∈∂(H\ F).
For a μ-random walk trajectory ω=(ω_n)
on F that converges to a point ξ∈∂ F, we claim that
for each k∈ℕ, there is a time n_k such that the
prefix ξ_k is on the geodesic connecting ω_n_k
to ω_n_k+1. Indeed, since ω converges to ξ,
n_k=max{ n: ξ_k is not a prefix of ω_n}
is finite. Then ξ_k is a prefix of ω_n_k+1 and
the common prefix of ω_n_k and ω_n_k+1 has
length <k. It follows then ξ_k is on the geodesic path connecting
ω_n_k and ω_n_k+1.
By Lemma <ref>, we have for any finite set K in H\ F,
almost surely K∩[Hω_n;ω_n^-1ω_n+1]≠∅
for only finitely many n. Since ξ_k is on the geodesic connecting
ω_n_k to ω_n_k+1, Hξ_k∈[Hω_n_k;ω_n_k^-1ω_n_k+1].
It follows then for ν-a.e. ξ, the set {k∈ℕ:Hξ_k∈ K}
is finite. Therefore (Hξ_k) converges to the end
in ∂(H\ F). Moreover, since ξ_k
is the length k prefix of ω_n_k+1, we have that the
Gromov product of Hξ_k and Hω_n_k+1 goes to infinity
as k→∞. Therefore the two sequences (Hξ_k)
and (Hω_n_k+1) converge to the same end, which
is ζ_H(ξ).
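The convergence just proved is mechanical to trace once the Schreier graph is stored as a transition table. The following Python sketch is purely illustrative (the graph, root and word below are hypothetical toy data, not objects from the paper): it returns the nearest neighbor path (Hξ_t) determined by the prefixes of a reduced word.

# Illustrative sketch: the path (H xi_t) induced by prefixes of a reduced word.
# Generators of F_2 are encoded as 'a', 'A' (= a^-1), 'b', 'B' (= b^-1), and a
# Schreier graph as a dict: coset label -> {generator: neighboring coset}.
def coset_path(schreier, root, word):
    """Cosets H*xi_0, H*xi_1, ..., H*xi_len(word) visited by the prefixes."""
    path, state = [root], root
    for letter in word:
        state = schreier[state][letter]
        path.append(state)
    return path

# Hypothetical toy graph on three cosets {0, 1, 2}: 'a' acts as the cycle
# (0 1 2), 'b' acts trivially; 'A' and 'B' are the inverse permutations.
toy_schreier = {
    0: {'a': 1, 'A': 2, 'b': 0, 'B': 0},
    1: {'a': 2, 'A': 0, 'b': 1, 'B': 1},
    2: {'a': 0, 'A': 1, 'b': 2, 'B': 2},
}
prefix = list("abaB" * 3)                    # prefix of a reduced word xi in F_2
print(coset_path(toy_schreier, 0, prefix))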
§.§ Approximations on end-compactification bundles of F
Let H_0∈ Tree_F be as in Lemma <ref>
and μ be a non-degenerate step distribution on F with finite
entropy and finite log-moment. By Proposition <ref>,
the bundle M with fiber ∂(L_x\ F)
over x∈ Tree_F^H_0 equipped with hitting distribution
of the coset random walk, is the Poisson bundle over the same base
( Tree_F^H_0,ρ) with x↦ L_x
identity map. Next we show how the bundle M fits into the setting
of Proposition <ref>, and obtain lower semi-continuity of
entropy.
Let β be a probability measure in the measure class of the
μ-harmonic measure ν on ∂ F. Moreover suppose
‖ dβ/dν‖ _∞,‖ dν/dβ‖ _∞<∞.
Then β_x=(ζ_x)_∗β admits locally constant
uniform approximations on Tree_F^H_0.
Since the Schreier graph of L_x\ F is tree-like, we
have a natural sequence of partitions of ∂(L_x\ F)
given by cylinder sets that are shadows of vertices. Consider the
sphere S_x(n)={ v∈ L_x\ F:d_L_x\ F(o,v)=n}
of the Schreier graph L_x\ F. Since the graph is tree-like,
we have that
𝒫_x,n={℧(v)} _v∈ S_x(n),
where the shadow ℧(v) is defined in Notation <ref>,
forms a partition of ∂(L_x\ F) by clopen subsets.
This sequence of partitions satisfies:
* 𝒫_x,n+1 is a refinement of 𝒫_x,n,
* the Borel σ-field of ∂(H\ F)
is generated by the partitions ∨_n=0^∞𝒫_x,n.
As shown in Lemma <ref>, for x∈ Tree_F^H_0,
we have a map ζ_x:∂ F→∂(L_x\ F)
such that (ζ_x)_∗ν is the hitting distribution of
the random walk L_xω_n on the Schreier graph L_x\ F.
For locally constant approximations to (ζ_x)_∗(gν)
on such partitions, one option is to take the measure of ℧(v)
to be the probability that the coset random walk L_xgω_t
is in Shd(v) and up to time t, the random walk never exited
the ball of radius r_t around L_x, for a suitable choice
of the radius r_t. One can indeed verify the conditions needed to apply
Proposition <ref> for such approximations. Instead of this
natural choice, for the convenience of inducing to SL(d,ℝ)
in the next subsections, we use the interpretation of the hitting
distributions in Subsection <ref>, which leads
naturally to Proposition <ref>.
Denote by ξ_t the length t prefix of an infinite word ξ∈∂ F.
Then a point ξ∈∂ F induces a sequence of points (Hξ_t)_t∈ℕ
on the Schreier graph H\ F. Apply Proposition <ref>
to x∈ Tree_F^H_0, we have that for ν-a.e. ξ,
the sequence L_xξ_t converges to an end, denoted by ζ_x(ξ)
in ∂(L_x\ F), when t→∞.
Suppose β is a probability measure on ∂ F in the
measure class of the μ-harmonic measure ν. Write β_x=(ζ_x)_∗β.
Define a measure ϕ_x,β^t on the partition 𝒫_x,n
by setting
ϕ_x,β^t(℧(v)):=β({ξ∈∂ F:L_xξ_t∈ Shd(v)}).
For t>n, ϕ_x,β^t is a measure on 𝒫_x,n.
Moreover, ϕ_x,β^t depends only on the ball of radius
t around L_x in the Schreier graph L_x\ F,
which by definition of the Chabauty topology implies that x↦(𝒫_x,n,ϕ_x,β^t)
is locally constant on Tree_F^H_0.
Suppose there is a constant C_β>0 such that 1/C_β≤ dβ/dν≤ C_β.
Then there is a function ϵ(t)t→∞⟶0,
which only depends on n,H_0,μ, such that for all x∈ Tree_F^H_0,
max_A∈𝒫_x,n|1-β_x(A)/ϕ_x,β^t(A)|≤ C_β^2ϵ(t).
Let v∈ S_x(n). To ease notations, in this proof we write ϕ_x,β^t(v)
in place of ϕ_x,β^t(℧(v)), similarly for β_x.
The tree-like structure of the Schreier graphs guarantees that the
total variation distance between β and ϕ_x,β^t,
both restricted to 𝒫_x,n, is no more than the measure,
under β, that there is some s≥ t such that the path L_xξ_s
returns to the ball of radius n in L_x\ F. For x∈ Tree_F^H_0,
we have that L_x<H_0^γ for some γ∈ F, thus
for graph distances, d(H_0^γ,H_0^γg)≤ d(L_x,L_xg)
for any g∈ F. Recall also H_0 has only finitely many conjugates
in F. Thus by this covering property, we have
1/2∑_v∈ S_x(n)|β_x(v)-ϕ_x,β^t(v)| ≤β({ξ∈∂ F:∃ s≥ t,L_xξ_s∈ B_L_x\ F(n)})
≤‖dβ/dν‖ _∞ν({ξ∈∂ F:∃ s≥ t,∃γ∈ F,H_0^γξ_s∈ B_H_0^γ\ F(n)})
=:‖dβ/dν‖ _∞ϵ_0(t)=:ϵ(t).
The term ϵ_0(t) does not depend on x, and ϵ_0(t)→0
as t→∞ by Proposition <ref>.
The hitting measure β_x(v)=(ζ_x)_∗ν(℧(v))
is equal to the probability that the trajectory (L_xξ_s)_s=1^∞,
eventually remains in Shd(v). By Lemma <ref>, we
have a lower bound β_x(v)≥ c(n,H_0,μ). It follows that
max_A∈𝒫_x,n|1-ϕ_x,β^t(A)/β_x(A)| ≤1/min_v∈𝒮_x(n)β_x(v)∑_v∈ S_x(n)|β_x(v)-ϕ_x,β^t(v)|≤2/c(n,H_0,μ)‖dβ/dν‖ _∞ϵ_0(t).
The approximations ϕ_x,β^t defined in (<ref>)
are locally constant and uniform by Lemma <ref>.
§.§ Entropy realization for the free group
In this subsection we conclude the proof of Theorem <ref>.
Let μ be an admissible probability measure on F with finite
entropy and finite log-moment. Recall that we are in the standard setting
of Section <ref>, with X= Tree_F⊂ Sub(F)
an F-space. Take a path of ergodic IRS (ρ_ℓ,p)_p∈[0,1]
supported on X as in Bowen <cit.>, which is briefly described in the next paragraph.
For an integer ℓ≥2, let H_0=K_ℓ be the quasi-transitive
subgroup of Example <ref>. Its Schreier graph is tree-like with infinitely many ends and H_0 has a finite index normalizer.
Informally, a ρ_ℓ,p sample is obtained by taking a random covering of the Schreier graph
K_ℓ\ F where each loop is “opened” independently
according to a p Bernoulli distribution (or equivalently, the generating
pair (g,s) of a loop is removed from the set of generators of K_ℓ).
One can check directly from this description that for p∈(0,1), F↷( Tree_F,ρ_ℓ,p)
is a weakly mixing extension of a finite transitive system. The properties we need in what follows are: the
map p↦ρ_ℓ,p is weak^∗ continuous, ρ_ℓ,0
is the uniform measure on conjugates of K_ℓ, and ρ_ℓ,1
is the δ-mass on the trivial group {e}. These are shown
in <cit.>.
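The sampling step described above is easy to mimic in code. The sketch below is a much simplified illustration (the list of loop generators is hypothetical and finite, whereas the actual generating data of K_ℓ are infinite): each loop generator is opened, i.e. dropped, independently with probability p, and the kept generators generate the sampled subgroup.

import random

def sample_kept_generators(loop_generators, p, rng=random):
    """Keep each loop generator independently with probability 1 - p.

    Opening a loop with probability p corresponds to removing the generator
    attached to it; the returned list generates the sampled subgroup.
    """
    return [g for g in loop_generators if rng.random() >= p]

# Hypothetical truncated list of loop generators, written as words in F_2.
loop_gens = ["aa", "bb", "abAB", "abab"]
for p in (0.0, 0.5, 1.0):
    print(p, sample_kept_generators(loop_gens, p))
# p = 0 keeps every generator and p = 1 keeps none (the trivial subgroup),
# mirroring the endpoints rho_{l,0} and rho_{l,1} described above.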
The situation fits into the setting of Section <ref>,
we obtain a family of measured F-bundles (Z_p,λ_ℓ,p)
over IRS's (X,ρ_ℓ,p) standard over the same trivial μ-boundary
(Y is a point here).
In the setting above, the
map p↦ h_μ(Z_p,λ_ℓ,p) is continuous.
By <cit.>, Assumption AssumpC(C) is satisfied. The assumption
AssumpL(L) is automatically satisfied in the discrete setting. Lemma <ref>
shows that p↦ h_μ(Z_p,λ_ℓ,p) is upper semi-continuous.
Now by Proposition <ref>, the Poisson bundle (Z_p,λ_ℓ,p)
is F-isomorphic to the end compactification bundle (M,λ̅_ℓ,p)
whose fiber over x is the topological space M_x=∂(L_x\ F)
with hitting distribution λ̅_p^x=(ζ_x)_∗(ν_B).
Then
h_μ(Z_p,λ_ℓ,p)=h_μ(M,λ̅_ℓ,p).
Write α_x,g=(ζ_x)_∗(gν_B) for the hitting
distribution of the coset random walk (L_xgω_n)
starting at L_xg. By stationarity of ν_B, we have that
the Radon-Nikodym derivative dgν_B/dν_B is bounded from
above and below. It follows that Assumption AssumpM(M) is satisfied and this
fits into the setting of Section <ref>. By Lemma <ref>,
we have
h_μ(M,λ̅_ℓ,p)=h_μ(M,α_p).
Proposition <ref> provides locally constant uniform
approximations. Corollary <ref> gives lower semi-continuity
of p↦ h_μ(M,α_p). Combined with the upper
semi-continuity, the statement follows.
With Proposition <ref>, the proof concludes in the
same way as in <cit.>: by the intermediate value theorem, each
Furstenberg entropy value between h_μ(Z_0,λ_ℓ,0)
and h_μ(Z_1,λ_ℓ,1) is attained. Now by <cit.>,
ρ_ℓ,1 is the trivial subgroup, so h_μ(Z_1,λ_ℓ,1)=h_μ(B,ν_B)
is maximal.
Finally by <cit.>, the sequence
of measures (ρ_ℓ,0)_ℓ≥2 converges in weak^∗
topology towards a measure κ=1/ rank(F)∑_i=1^ rank(F)δ_A_i,
where A_i is the normal closure of the cyclic group ⟨ a_i⟩.
Since A_i\ F is isomorphic to ℤ, any coset
random walk has trivial Poisson boundary. It follows that the Poisson
bundle over (X,κ) has zero Furstenberg entropy. We conclude
that
lim sup_ℓ→∞h_μ(Z_0,λ_ℓ,0)=0
by the upper semi-continuity Corollary <ref>.
§ POISSON BUNDLES FOR SL(D,ℝ)
In this section we complete the proof of Theorem <ref>.
Throughout this section, let G=SL(d,ℝ) and μ be either
an admissible measure or a Furstenberg discretization measure supported
on a lattice. In both cases, the Poisson boundary of the μ-random
walk can be identified as (G/P,ν_P), where ν_P
is in the measure class of the unique K-invariant measure.
§.§ Stationary system induced from IRS of F
The set of simple roots of G=SL(d,ℝ) is Δ={e_1-e_2,…,e_d-1-e_d},
which is naturally identified with the set {1,…,d-1}. Let
I⊆Δ and list Δ-I={ i_1,…,i_ℓ-1}
in increasing order. Then associated with I is the partition d=d_1+…+d_ℓ,
where d_j=i_j-i_j-1, i_0=0 and i_ℓ=d. The minimal
parabolic subgroup P=P(d,ℝ) is the subgroup of upper triangular
matrices. The parabolic subgroup Q=P_I is the stabilizer of the standard
flag V_1⊂ V_2⊂…⊂ V_ℓ, where V_j
is spanned by the first i_j standard basis vectors. The Levi
subgroup of P_I consists of block diagonal matrices
L_I={ diag(M_1,M_2,…,M_ℓ) : M_j∈ GL(d_j,ℝ), det(M_1)⋯det(M_ℓ)=1} .
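A small helper makes the combinatorics I ↦ (d_1,…,d_ℓ) explicit; the function below is illustrative only (it is not used anywhere in the argument) and simply computes the block sizes of L_I from d and a subset I of {1,…,d-1}.

def block_sizes(d, I):
    """Block sizes (d_1, ..., d_ell) of the Levi subgroup L_I in SL(d, R).

    I is a subset of the simple roots {1, ..., d-1}; the complement
    Delta - I = {i_1 < ... < i_{ell-1}} cuts {1, ..., d} into blocks,
    with d_j = i_j - i_{j-1}, i_0 = 0 and i_ell = d.
    """
    cuts = sorted(set(range(1, d)) - set(I)) + [d]
    sizes, prev = [], 0
    for i in cuts:
        sizes.append(i - prev)
        prev = i
    return sizes

# Example in SL(5, R): I = {1, 3} gives Delta - I = {2, 4}, hence blocks (2, 2, 1).
print(block_sizes(5, {1, 3}))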
Throughout this subsection we consider the situation where I⊆Δ
is such that SL(2,ℝ) is a factor of L_I, i.e., one
of blocks is 2×2. Take such a block, that is, k∈{1,…,ℓ}
with d_k=2. Regard SL(2,ℝ) as a subgroup of L_I,
embedded in M_k and all the other blocks are identities. Denote
by
p_k:Q→ G_2:={ M ∈ GL(2,ℝ) : |det(M)|=1 }
the quotient map which is the composition of Q=L_I⋉ V_I→ L_I
and L_I→ G_2 which sends the 2×2-block M_k to 1/√(|det M_k|)M_k. Note that G_2 is isomorphic to SL(2,ℝ)⋊ℤ/2ℤ.
By the structure
of parabolic subgroups, we have Q/P=∏_j=1^ℓSL(d_j,ℝ)/P(d_j,ℝ), where P(n,ℝ)
is the minimal parabolic subgroup of SL(n,ℝ).
The unique K∩ Q invariant measure on Q/P is m̅_K∩ Q=∏_j=1^ℓm̅_SO(d_j),
where m̅_SO(n) denotes the unique SO(n)-invariant probability
measure on SL(n,ℝ)/P(n,ℝ). Denote by
p̅_k:(Q/P,m̅_K∩ Q)→(SL(2,ℝ)/P(2,ℝ),m̅_SO(2))
the projection to the k-th
component in the product, induced by the projection p_k.
For clarity of later arguments, it is convenient to fix an embedding of
F=F_2 as a lattice in SL(2,ℝ). Take
A=([ 1 2; 0 1 ]), B=([ 1 0; 2 1 ]).
The group ⟨ A,B⟩ is called the Sanov subgroup,
it is a free group of rank 2. Denote by ℍ the upper
half plane model of the 2-dimensional hyperbolic space, where SL(2,ℝ)
acts by Mobius transforms. Take the ideal rectangle R_0 with
vertices -1,0,1,∞ on ℍ. It is the union of two
adjacent ideal triangles with vertices -1,0,∞ and 0,1,∞
in the Farey tessellation by ideal triangles. The orbit of R_0
under ⟨ A,B⟩ forms a tessellation of the
hyperbolic plane. The dual graph of the tessellation is a tree; it
can be identified as the standard Cayley graph of the free group F=⟨ A,B⟩.
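As a quick sanity check, one can multiply out a reduced word in the Sanov generators and observe that it is not the identity matrix; the snippet below is only an illustration of this point (freeness itself is the classical ping-pong argument, not this computation).

def mat_mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

A  = [[1, 2], [0, 1]]
B  = [[1, 0], [2, 1]]
Ai = [[1, -2], [0, 1]]      # A^-1
Bi = [[1, 0], [-2, 1]]      # B^-1

def evaluate(word):
    """Multiply out a word given as a list of 2x2 integer matrices."""
    M = [[1, 0], [0, 1]]
    for g in word:
        M = mat_mul(M, g)
    return M

# The commutator A B A^-1 B^-1 is a nontrivial reduced word; its value is
# [[21, -8], [8, -3]], visibly different from the identity.
print(evaluate([A, B, Ai, Bi]))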
We follow the classical method to code hyperbolic geodesics with the
tessellation. Take the map ψ:ℍ→ F by sending a point
x in the tile γ R_0 to γ. Given a base point
x_0∈ℍ and an irrational point z∈∂ℍ,
for the geodesic from x_0 to z, record the sequence of tiles
that it passes through: (γ_0R_0,γ_1R_0,γ_2R_0,…),
where each γ_n∈ F. Since the sequence (γ_n)
comes from a geodesic, which in particular can not backtrack, we have
that γ_n converges to an infinite reduced word γ_∞∈∂ F
as n→∞. By basic properties of the Farey tessellation, see
e.g., <cit.>, we have:
The map ψ extends continuously to
ψ:∂ℍ-ℚ →∂ F
z ↦γ_∞,
which is F-equivariant, injective on ∂ℍ-ℚ.
We continue to use the same notations for sections and cocycles as
in Subsection <ref>.
Take a fundamental domain of F in SL(2,ℝ) so that its
image on ℍ is the ideal rectangle R_0. Lift it to
a fundamental domain Ω_0 of F in G_2, which is a 2-cover of
SL(2,ℝ). Let σ:G_2×Ω_0→ F be the
associated cocycle. Denote by β:G× G/Q→ Q the cocycle
associated with a chosen measurable section τ:G/Q→ G. Let
F↷(X_0,ρ) be an ergodic measure
preserving action of F. Consider the G-space
X=G/Q×_β(Ω_0×_σX_0), equipped with the measure η_ρ^μ=ν_Q× m_Ω_0×ρ,
where the Q action on Ω_0×_σ Sub(F)
is through the quotient map p_k:Q→ G_2, ν_Q is the
μ-stationary probability measure on G/Q, and m_Ω_0
is the restriction of the Haar measure on G_2 on Ω_0,
normalized to be a probability measure. Note that (X,η_ρ^μ)
is a relative-measure preserving extension of (G/Q,ν_Q).
In the terminology of Subsection <ref>, (X,η_ρ^μ)
is a standard system over the μ-boundary (G/Q,ν_Q).
Take the stabilizer map X→ Sub(G), x↦ Stab_G(x),
then the pushforward of η_ρ^μ is a μ-stationary
random subgroup (SRS). This SRS may be viewed as co-induced from ( Sub(F),ρ)
in the specific way described above. Here as customary, we identify ρ and the IRS Stab_∗ρ, where Stab:X_0→ Sub(F) maps a point to its stabilizer. Note that the operation is different
from the canonical co-induction of IRSs in the setting of countable
groups <cit.>.
Given an IRS ρ of F and a step distribution μ on G,
denote by (Z,λ_ρ^μ) the Poisson bundle
associated with the μ-random walk, over the standard system (X,η_ρ^μ)π→(G/Q,ν_Q)
as in (<ref>), where L_x= Stab_G(x).
The bundle depends on Q and the choice of rank one factor M_k,
but the dependence is suppressed in the notation.
Since Y=G/Q is a factor of X, the stabilizer assumption AssumpS(S)
of Subsection <ref> is satisfied. By Fact <ref>,
S=⟨ suppμ⟩ acts ergodically on
(Z,λ_ρ^μ) if S acts ergodically on
(X,η_ρ^μ).
§.§ Identification of Poisson bundles for SL(d,ℝ)
Recall that we assume μ is a step distribution on G=SL(d,ℝ)
such that (G/P,ν_P) is the Poisson boundary of the
μ-random walk, and ν_P is in the quasi-invariant measure
class of m̅_K.
Denote by ν_P=∫_Yν_P^ydν_Q(y) the disintegration
of the harmonic measure ν_P over the quotient map G/P→ Y=G/Q.
Note that the support of ν_P^y is τ(y)Q/P and L_x
acts on τ(y)Q/P where y=π(x).
As in the setting of Lemma <ref>, let
H_0∈ Tree_F be a subgroup whose associated Schreier
graph is a quasi-transitive tree-like graph with infinitely many ends.
Let the subspace Tree_F^H_0⊂ Sub(F) be described
as in (<ref>). Take ρ an F-invariant measure
supported on Tree_F^H_0.
Associated with the μ-random walk on G, take the Poisson bundle
(Z,λ_ρ^μ) over the standard system (X,η_ρ^μ)
defined in Notation <ref>. By Proposition <ref>,
in the Poisson bundle (Z,λ_ρ^μ) over
(X,η_ρ^μ), the fiber over x∈ X is the
space of ergodic components L_x(τ(y)Q/P,ν_P^y),
where y=π(x).
Our task in this subsection is to identify (Z,λ_ρ^μ)
with a concrete model. Recall that ρ is supported on Tree_F^H_0.
Denote by M_F→ Tree_F^H_0 the end-compactification
F-bundle where the fiber over a subgroup H∈ Tree_F^H_0
is the space of ends ∂(H\ F), see Subsection <ref>.
Retain the same notation as in (<ref>), induce F↷ M_F
to a G-space
M=G/Q×_β(Ω_0×_σM_F).
We will show that the space M, equipped with a suitable measure,
is G-measurably isomorphic to the Poisson bundle (Z,λ_ρ^μ),
see Proposition <ref>. The identification will
play a key role in the lower semi-continuity argument.
§.§.§ The case of K-invariant harmonic measure
We first consider the case where the step distribution μ is such
that its Poisson boundary is (G/P,m̅_K), where
m̅_K is the K-invariant probability measure on G/P. To emphasize K-invariance,
in this case we write (Z,λ_ρ^K) for the
associated Poisson bundle over (X,η_ρ^K),
where (X,η_ρ^K) is defined in (<ref>)
with ν_P=m̅_K.
Regard F as the Sanov subgroup in SL(2,ℝ). It acts
on the boundary SL(2,ℝ)/P(2,ℝ). Given the IRS
ρ of F, we take an F-bundle (E_F,m_ρ^K)→( Sub(F),ρ)
where the fiber over H∈ Sub(F) is the ergodic decomposition
H(SL(2,ℝ)/P(2,ℝ),m̅_SO(2))
. We now show that the Poisson bundle (Z,λ_ρ^K)
can be seen as induced from the F-system (E_F,m_ρ^K).
Let (Z,λ_ρ^K) be the Poisson bundle over
(X,η_ρ). There is a G-measurable isomorphism
Ψ_0:(Z,λ_ρ^K)→(G/Q×_β(Ω_0×_σE_F),m̅_G/Q× m_Ω_0× m_ρ^K).
By (<ref>), we have that in the disintegration
of m̅_K over G/P→ G/Q, the fiber measure over a point
y is τ(y).m̅_K∩ Q. Let p_k:Q→ G_2 and p̅_k:Q/P→ SL(2,ℝ)/P(2,ℝ) be the
projection maps as in Subsection <ref>, where
k is the index of the chosen 2×2 block.
Then in the Poisson bundle (Z,λ_ρ^K)→(X,η_ρ),
the fiber over x=(y,r,H) is the ergodic decomposition
L_x(τ(y)Q/P,τ(y).m̅_K∩ Q).
The stabilizer map x↦ L_x is given explicitly by
Stab_G :G/Q×_β(Ω_0×_σ Sub(F))→ Sub(G)
(y,r,H)↦τ(y)p_k^-1(rHr^-1)τ(y)^-1.
A point in Z can be recorded as (x,A), where x=(y,r,H)∈ X
and A is an L_x-invariant measurable subset in the coset τ(y)Q/P.
By the description of the subgroup L_x in (<ref>),
we have that τ(y)^-1A is a subset of Q/P which is invariant
under L_Q(r,H):=p_k^-1(rHr^-1). Since the subgroup
L_Q(r,H) of Q acts transitively on the components SL(d_j,ℝ)/P(d_j,ℝ)
for j≠ k, we have that τ(y)^-1A is of the form A_k×∏_j≠ kSL(d_j,ℝ)/P(d_j,ℝ),
where A_k is a rHr^-1-invariant subset in SL(2,ℝ)/P(2,ℝ).
Then r^-1A_k is an H-invariant event in SL(2,ℝ)/P(2,ℝ),
thus in the fiber of E_F over H. To summarize, we have seen
an isomorphism Ψ_0:Z→ G/Q×_β(Ω_0×_σE_F),
((y,r,H),A)↦(y,r,(H,r^-1p̅_k(τ(y)^-1A))).
It follows from K-invariance that (Ψ_0)_∗λ_ρ^K=m̅_G/Q× m_Ω_0× m_ρ^K.
Lemma <ref> shows that the identification problem for
(Z,λ_ρ^K) is reduced to (E_F,m_ρ^K).
Relying on Furstenberg discretization, (E_F,m_ρ^K)
can be identified as a F-Poisson bundle, and further an end compactification
bundle as follows.
Recall that we have an F-equivariant map ψ:ℍ∪(∂ℍ-ℚ)→ F∪∂ F
from the Farey tessellation as in Lemma <ref>. View SL(2,ℝ)/P(2,ℝ)
as the ideal boundary of the hyperbolic plane ℍ, the SO(2)-invariant
measure m_0=m̅_SO(2) corresponds to the Lebesgue measure.
By <cit.>, see also <cit.>,
for the lattice F<SL(2,ℝ), there is a nondegenerate step
distribution κ_F on F with finite Shannon entropy and
finite log-moment (with respect to word distance on F) such that
(∂ℍ,m_0) is F-measurable isomorphic
to the Poisson boundary of (F,κ_F). We mention
that one can also apply <cit.> to the free group
F acting on ℍ to see such a measure κ_F exists.
Then by the description of Proposition <ref>, we have
that (E_F,m_ρ^K) is F-measurably isomorphic
to the Poisson bundle associated with κ_F-random walk over
the same base ( Sub(F),ρ).
Associated with the κ_F-random walk (ω_n)_n=0^∞,
as in Subsection <ref>, we have the end-compactification
bundle (M_F,λ̅_ρ)→( Tree_F^H_0,ρ),
where in the disintegration of λ̅_ρ, the fiber
∂(H\ F) over H∈ suppρ is endowed with
the hitting distribution of the coset trajectory (Hω_n)_n=0^∞.
The bundle (E_F,m_ρ) is F-measurably isomorphic
to the Poisson bundle, and also to the end-compactification bundle
(M_F,λ̅_ρ) associated with the κ_F-random
walk over the same base ( Sub(F),ρ).
Recall that by <cit.> we have that almost surely
a κ_F-random walk trajectory converges to an end in ∂ F,
and the Poisson boundary can be identified as ∂ F equipped
with the hitting measure. Given the map ψ associated with the
Farey tessellation, we have that the hitting measure on ∂ F
is the same as the pushforward ψ_∗m̅_SO(2).
Now for H∈ suppη_ρ, consider H-ergodic components:
(∂ℍ,m̅_SO(2))  ---ψ--->  (∂ F,ψ_∗m̅_SO(2))
        |                              |
        v                              v
H(∂ℍ,m̅_SO(2))  --ψ̅_H-->  H(∂ F,ψ_∗m̅_SO(2)),
since ψ is an F-measurable isomorphism, it follows that ψ̅_H
is a measurable isomorphism as well. Since (∂ F,ψ_∗m̅_SO(2))
is the Poisson boundary of (F,κ_F), we have that
the bundle over ( Tree_F^H_0,ρ) with fiber
H(∂ F,ψ_∗m̅_SO(2)) over
H is the Poisson bundle over the same base associated with the
κ_F-random walk. Applying Proposition <ref>
to the random walk κ_F, we have that the κ_F-Poisson
bundle over ( Tree_F^H_0,ρ) is F-measurably
isomorphic to the end-compactification bundle (M_F,λ̅_ρ).
The isomorphism in the statement is implemented fiberwise by ψ̅_H:H(∂ℍ,m_0)→∂(H\ F),
where for m_0-a.e. ξ∈∂ℍ, denote by (γ_n)_n=0^∞
the sequence of tiles in the Farey tessellation that it passes through,
in other words (γ_n)_n=0^∞=ψ(ξ);
then by Proposition <ref>, ψ_H(ξ) is the end that
(Hγ_n)_n=0^∞ converges to.
Combining Lemmas <ref> and <ref>, we have that
the Poisson bundle (Z,λ_ρ^K) is G-measurably
isomorphic to the bundle M over the same base (X,η_ρ^K),
where the fiber over x=(y,r,H) is the space of ends ∂(H∖ F),
endowed with the hitting distribution of the κ_F-coset random
walk (Hω_n)_n=0^∞.
§.§.§ The more general case
Next we consider the case where μ is such that (G/P,ν_P)
is the Poisson boundary of (G,μ) and ν_P is in the measure
class of m̅_K. Let (X,η_ρ) be defined
as in (<ref>). Since ν_P is in the measure class
of m̅_K on G/P, it follows that the μ-Poisson bundle
(Z,λ_ρ^μ) over (X,η_ρ) can
be realized on the same space Z as in the K-invariant case and
the measure λ_ρ^μ is in the measure class of λ_ρ^K.
It remains to describe the corresponding measure on the end-compactification
bundle M.
By Lemma <ref> and <ref>, we have a G-measurable
isomorphism, which is a composition of Ψ_0 and fiberwise maps
ψ̅_H:
Ψ :Z→ M:=G/Q×_β(Ω_0×_σM_F)
((y,r,H),A)↦(y,r,(H,ψ̅_H(r^-1p̅_k(τ(y)^-1A)))),
where p̅_k is the projection Q/P=∏_j=1^ℓSL(d_j,ℝ)/P(d_j,ℝ)→ SL(2,ℝ)/P(2,ℝ)
to the k-th component, and ψ̅_H:H(∂ℍ,m_0)→∂(H\ F)
is the map specified in the proof of Lemma <ref>.
Denote by ν_P=∫_G/Qν_P^ydν_Q(y) the disintegration
of ν_P over G/Q. Note that ν_P^y is in the measure
class of τ(y).m̅_K∩ Q.
Suppose μ is such that (G/P,ν_P) is the Poisson
boundary of the μ-random walk and ν_P is in the quasi-invariant
measure class of m̅_K. The map Ψ defined as (<ref>)
is a G-measurable isomorphism between the μ-Poisson bundle
(Z,λ_ρ^μ) over (X,η_ρ^μ)
and the bundle (M,α_ρ). In the disintegration
α_ρ=∫_Xα_ρ^xdη_ρ(x) over (M,α_ρ)→(X,η_ρ),
we have
α_ρ^x=(ψ̅_H)_∗(r^-1(p̅_k)_∗(τ(y)^-1.ν_P^y)), where x=(y,r,H).
Lemma <ref> and <ref> imply that Ψ:Z→ M
is a G-measurable isomorphism. The expression for α_ρ^x
is obtained from a change of variable given the explicit formula (<ref>).
§.§ Lower semi-continuity for Poisson bundles of SL(d,ℝ)
In this subsection we prove lower semi-continuity statement via the
identification of the Poisson bundle (Z,λ_ρ^μ)
and the bundle (M,α_ρ). Assume μ is a
step distribution on G=SL(d,ℝ) satisfying:
AssumpB(B) Bounded Radon-Nikodym derivatives. The Poisson boundary of
the μ-random walk can be identified as (G/P,ν_P),
where ν_P is in the quasi-invariant measure class on G/P
and moreover, the Radon-Nikodym derivatives dν_P/dm̅_K
and dm̅_K/dν_P are in L^∞(G/P,m̅_K).
The main sources of examples of step distributions that satisfy AssumpB(B)
are from the works of Furstenberg <cit.>:
(i) μ is a left K-invariant admissible measure on G.
(ii) μ is an admissible B_∞ measure on G.
(iii) μ is a Furstenberg measure on a lattice Γ<G.
Under assumption AssumpB(B), we explain how the system (M,α_ρ)
in Proposition <ref> fits into the setting of Section
<ref>.
Write Y'=G/Q×_βΩ_0 and equip it with the measure
ν=ν_Q× m_Ω_0. Note that (Y',ν)
is a measure-preserving extension of (G/Q,ν_Q);
and the purpose of taking this is to apply Proposition <ref>
in the fibers of M→ Y'.
Recall that M=G/Q×_β(Ω_0×_σM_F)
is induced from the end-compactification bundle M_F of F over
Tree_F. View M as a bundle over X=G/Q×_β(Ω_0×_σ Sub(F)),
the fiber over x=(y,r,H), is the space M_x=∂(H\ F).
By Proposition <ref>, disintegrate the measure
g.α_ρ=Ψ_∗(g.λ_ρ) over M→ X,
we have that the fiber measures in g.α_ρ=∫_Xα_x,gd(g.η_ρ)(x)
are given by
α_x,g=(ζ_H)_∗(r^-1(p̅_k)_∗(τ(y)^-1(gν_P)^y)), x=(y,r,H).
Suppose H_0∈ Tree_F is such that its normalizer N_F(H_0)
is of finite index in F and that the Schreier graph H_0\ F
has infinitely many ends. Let Tree_F,H_0 be as defined
in (<ref>). In the same way as in the free group case,
when the Schreier graph H\ F is tree-like, we take the
sequence of finite partitions of M_x=∂(H\ F)
to be 𝒫_x,n={℧(v)} _v∈ S_H(n),
where the shadow ℧(v) in ∂(H\ F)
is defined in Notation <ref>. We have that the conditions
AssumpM(M), AssumpC'(C') and AssumpP(P)
are satisfied by construction.
Proposition <ref>
on the free group implies the following.
In the setting above,
under AssumpB(B), for y'=(y,r)∈ Y'=G/Q×_βΩ_0, the
map
S_y':={(y,r)}× Tree_F,H_0 →ℝ_≥0
x ↦ H_α_x,g∥α_x,e(𝒫_x,n)
is continuous with respect to the Chabauty topology on Tree_F,H_0.
It follows that x↦ D(α_x,g∥α_x,e) is
lower semi-continuous on S_y'.
Given y'=(y,r)∈ Y', denote by β_g the measure r^-1(p_k)_∗(τ(y)^-1.(gν_P)^y)
on ∂ℍ=SL(2,ℝ)/P(2,ℝ). The Radon-Nikodym derivative dβ_g/dm_0 is bounded
by
‖dβ_g/dm_0‖ _∞≤‖dβ_g/dr^-1.m_0‖ _∞‖dr^-1.m_0/dm_0‖ _∞≤‖dτ(y)^-1(gν_P)^y/dm̅_K∩ Q‖ _∞‖dr^-1.m_0/dm_0‖ _∞,
where τ(y)^-1(gν_P)^y and m̅_K∩ Q are measures
on Q/P. Furthermore,
‖dτ(y)^-1(gν_P)^y/dm̅_K∩ Q‖ _∞=‖d(gν_P)^y/d(τ(y).m̅_K)^y‖ _∞≤‖d(gν_P)/d(τ(y).m̅_K)‖ _∞‖d(τ(y).m̅_K)/d(gν_P)‖ _∞.
Similar calculation applies to ‖ dm_0/dβ_g‖ _∞.
Therefore Assumption AssumpB(B) implies that ‖ dβ_g/dm_0‖ _∞
and ‖ dm_0/dβ_g‖ _∞ are bounded
by a finite constant depending on y,r,g,‖ dν_P/dm̅_K‖ _∞
and ‖ dm̅_K/dν_P‖ _∞.
Take a Furstenberg discretization random walk κ_F on F
such that (∂ℍ,m_0) is F-isomorphic
to the Poisson boundary of (F,κ_F). Denote by
ν the κ_F-harmonic measure on ∂ F. We apply
Proposition <ref> to β_e and β_g,
viewed as measures on ∂ F through the F-measurable isomorphism
of Proposition <ref>. This gives locally constant uniform
approximations for α_x,e and α_x,g on S_y',
where α_x,g=(ζ_H)_∗β_g. Since the corresponding
Radon-Nikodym derivatives are bounded, Proposition <ref>
applies to show that the map
(y,r)× Tree_F,H_0 →ℝ
x=(y,r,H) ↦ H_α_x,g∥α_x,e(𝒫_x,n)
is continuous with respect to the Chabauty topology.
We deduce lower semi-continuity of the entropy of the Poisson bundle
(Z,λ_ρ^μ).
Let μ be a step distribution
on G satisfying AssumpB(B). The map ρ↦ h_μ(Z,λ_ρ^μ)
is lower semi-continuous, where ρ is in the space of ergodic
F-invariant measures on Tree_F,H_0, equipped with
the weak^∗-topology.
The KL-divergence D(α_x,g∥α_x,e)
is the increasing limit of H_α_x,g∥α_x,e(𝒫_x,n),
where the latter is continuous on S_y' by Proposition <ref>.
Then by Lemma <ref>, we have that the map ρ↦ h_μ(M,α_ρ)
is lower semi-continuous. By the identification in Proposition <ref>,
the Poisson bundle (Z,λ_ρ^μ) is G-measurably
isomorphic to the induced end-compactification bundle (M,α_ρ).
Thus h_μ(Z,λ_ρ)=h_μ(M,α_ρ),
the statement follows.
§.§ Entropy realization for SL(d,ℝ) and its lattices
Denote by Δ simple roots of G=SL(d,ℝ) and let
I⊆Δ. Recall that we list Δ-I={ i_1,…,i_ℓ-1}
in increasing order, and associated with I is the partition d=d_1+…+d_ℓ,
where d_j=i_j-i_j-1, i_0=0 and i_ℓ=d. Suppose
there is k such that d_k=2. Then i_k-1∈ I and
it corresponds to a rank 1 factor of L_I. Write I'=I-{ i_k-1}.
In the setting of Theorem <ref>,
the interval [h_μ(G/P_I),h_μ(G/P_I')]
is contained in the Furstenberg entropy spectrum EntSp(S,μ).
Let Q=P_I. Let ρ be an ergodic IRS of the free group F
and (X,η_ρ) be the induced G-system in (<ref>),
then (X,η_ρ) is a relative-measure preserving
extension of (G/Q,ν_Q). The G-action on (X,η_ρ)
is ergodic by general properties of inducing (see e.g. <cit.> or <cit.>).
For ℓ∈ℕ, p∈[0,1], let H_0=K_ℓ and
ρ=ρ_ℓ,p be the F-ergodic IRS supported on Tree_F,K_ℓ
constructed by Bowen as reviewed in Subsection <ref>.
We now verify that a lattice Γ in G also acts ergodically
on (X,η_ρ_ℓ,p). First note that when p=0
or 1, G acts transitively on suppη_ρ_ℓ,p,
the system is of the form G/H for some noncompact closed subgroup
H and the corresponding measure is in the quasi-invariant measure
class. By Moore ergodicity (see e.g., <cit.>), H acts ergodically on (G/Γ,m̅),
it follows that Γ acts ergodically on (X,η_ρ_ℓ,p)
for p∈{0,1}. Next for p∈(0,1), we have by construction that
F↷( Tree_F,ρ_ℓ,p) is a weakly
mixing extension of a finite transitive system. Then as a G-system, (X,η_ρ_ℓ,p) is a relative measure-preserving,
relative weakly mixing
extension of a homogeneous system of the form G/H_1 equipped with a quasi-invariant measure, where H_1 is not compact.
It remains as a weakly mixing extension when viewed as Γ-systems
(see e.g., <cit.>), it follows that Γ↷(X,η_ρ_ℓ,p)
is ergodic.
The assumption on μ guarantees that AssumpB(B) is satisfied.
Let (Z,λ_ℓ,p) be the μ-Poisson bundle
over the standard system (X,η_ρ_ℓ,p) with
x↦ L_x= Stab_G(x). Apply Corollary <ref> in the case where μ is in B_∞ class, and Corollary <ref> when μ is supported on a lattice Γ, we have that p↦ h_μ(Z,λ_ℓ,p) is upper semi-continuous.
Combined with the lower semi-continuity statement in Corollary
<ref>, it follows that p↦ h_μ(Z,λ_ℓ,p) is continuous on [0,1].
In the proof of Proposition <ref>, we
have identified (Z,λ_ℓ,p) with an induced
end-compactification bundle (M,α_ℓ,p).
For p=1, the IRS ρ_ℓ,1=δ_{e} is supported on
the trivial subgroup. The bundle M is simply G/Q×_β(Ω_0×_σ∂ F).
By definition of p_k, the Q-space Ω_0×_σ∂ F
is isomorphic to SL(2,ℝ)/P(2,ℝ) and so to Q/P_I'.
It follows that the bundle M is G-isomorphic to G/Q×_βQ/P_I'=G/P_I'.
This shows h_μ(Z,λ_ℓ,1)=h_μ(G/P_I').
Next for p=0, as in the proof of Theorem <ref> at the
end of Section <ref>, we have the weak^∗
convergence ρ_ℓ,0→κ=1/2∑_i=1^2δ_A_i,
where A_i is the normal closure of the cyclic subgroup ⟨ a_i⟩
of F. Corollary <ref> on upper semi-continuity gives
lim sup_ℓ→∞h_μ(Z,λ_ℓ,0)≤ h_μ(Z,λ_κ)=1/2∑_i=1^2h_μ(Z,λ_δ_A_i).
We have that A_i is a normal subgroup of F with quotient A_i∖ F
isomorphic to ℤ. Since any random walk on ℤ
has trivial Poisson boundary, A_i acts ergodically on (∂ℍ,m̅_SO(2)),
which is identified with Poisson boundary of κ_F-random
walk on F. By the description in Proposition <ref>,
we have that the Poisson bundle (Z,λ_δ_A_i)
has trivial fibers over the corresponding base system (X,η_δ_A_i).
In particular, (Z,λ_δ_A_i) is a measure-preserving
extension of (G/P_I,ν_P_I). We conclude lim sup_ℓ→∞h_μ(Z,λ_ℓ,0)≤ h_μ(G/P_I)
and
⋃_ℓ∈ℕ{ h_μ(Z,λ_ℓ,p):p∈[0,1]} =(h_μ(G/P_I),h_μ(G/P_I')].
Combine Theorem <ref> and Proposition <ref>.
§.§ Interpretation in terms of Lyapunov exponents
Equip ℝ^n with the standard Euclidean norm ‖·‖
and SL(d,ℝ) with the operator norm ‖·‖ _ op.
For a step distribution μ on G=SL(d,ℝ) satisfying
the first moment condition ∫log^+‖ g‖ _ opdμ(g)<∞,
by the Oseledets multiplicative ergodic theorem, there exist exponents
λ_1>λ_2>…>λ_k and for a.e. ω
a flag V^≤λ_k⊂ V^≤λ_k-1⊂…⊂ V^≤λ_1=ℝ^d.
When the harmonic measure ν_P is in the quasi-invariant measure
class, it is well-known that the Furstenberg entropy h_μ(G/Q,ν_Q) can be expressed
in terms of the Lyapunov exponents, see <cit.> and references therein. Then in the setting of Theorem <ref>,
the Furstenberg entropy spectrum is determined by the Lyapunov spectrum of the μ-random walk.
We include the explicit formulae below for the convenience of the reader.
For the minimal parabolic subgroup P,
h_μ(G/P,ν_P)=∑_1≤ i<j≤ dλ_i-λ_j.
For a standard parabolic subgroup Q such that the corresponding
flag in G/Q is of type (r_1,r_2,…,r_k),
h_μ(G/Q,ν_Q)=∑_ℓ=1^k∑_r_ℓ-1<i≤ r_ℓ,j>r_ℓ(λ_i-λ_j)=h_μ(G/P,ν_P)-∑_ℓ=1^k∑_r_ℓ-1<i<j≤ r_ℓ(λ_i-λ_j).
Given a parabolic subgroup Q=P_I, let V̅=V̅_I
be the corresponding opposite unipotent subgroup. Denote by η
the Haar measure on V̅. Recall that the projection G→ G/Q
maps V̅ diffeomorphically onto a set of full m̅_K-measure.
Still denote by η the pushforward of η to G/Q. Write g=n_gσ_gk_g∈N̅AK
for the Iwasawa decomposition. Note that N̅ preserves the
measure η on G/Q. By <cit.> we may
change m̅_K to the measure η in the Radon-Nikodym
derivative and
t· h_μ(G/Q,m̅_K) =-∫_G∫_G/Qlogdg^-1.m̅_K/dm̅_K(x)dm̅_K(x)dμ^(t)(g)
=-∫_G∫_G/Qlogdg^-1.η/dη(x)dm̅_K(x)dμ^(t)(g)
=-∫_G∫_G/Qlogdσ_g^-1n_g^-1.η/dη(x)dm̅_K(x)dμ^(t)(g)
=-∫_G∫_G/Qlogdσ_g^-1η/dη(x)dm̅_K(x)dμ^(t)(g).
Denote a μ-random walk by (ω_t)_t=0^∞. By
the almost sure convergence <cit.> and
the equivalence between Iwasawa and polar decompositions <cit.>,
it is known that 1/tlogσ_ω_t converges almost
surely to the deterministic diagonal matrix Λ with Lyapunov
exponents (λ_1,λ_2,…,λ_d),
repeated with multiplicity when the spectrum is not simple. So an
appropriate i,j entry of the corresponding matrix in V̅
is essentially multiplied by e^t(λ_i-λ_j). The
formula follows.
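Both formulae are immediate to evaluate once the Lyapunov spectrum is known. The sketch below is illustrative (the spectrum is made up); it computes h_μ(G/P) and h_μ(G/Q) for a flag whose type is given by the cumulative dimensions (r_1,…,r_k).

def entropy_full_flag(lam):
    """h_mu(G/P) = sum over i < j of (lambda_i - lambda_j), with multiplicity."""
    d = len(lam)
    return sum(lam[i] - lam[j] for i in range(d) for j in range(i + 1, d))

def entropy_partial_flag(lam, r):
    """h_mu(G/Q) for a flag of type r = (r_1, ..., r_k), with r_k = len(lam).

    Only pairs i < j lying in different blocks contribute.
    """
    cuts = [0] + list(r)
    block = {}
    for b in range(len(r)):
        for i in range(cuts[b], cuts[b + 1]):
            block[i] = b
    d = len(lam)
    return sum(lam[i] - lam[j]
               for i in range(d) for j in range(i + 1, d)
               if block[i] != block[j])

lam = [2.0, 0.5, -0.5, -2.0]               # hypothetical Lyapunov spectrum, d = 4
print(entropy_full_flag(lam))               # full flag variety G/P
print(entropy_partial_flag(lam, (2, 4)))    # type (2, 4): 2-planes in R^4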
§ THE NEVO-ZIMMER OPERATION ON FUNCTIONS
In this section we review some arguments from <cit.> for the
use in Section <ref>. Let G be a lcsc group and
H be a closed subgroup of G. Throughout this section we assume
that an ergodic G-system (X,ν) has the following structure:
there is an H-invariant measure λ on X and a measure
ν_0 on G/H such that the map
ξ_0 :(G×_HX_0,ν_0×λ)→(X,ν)
[g,x_0]↦ g.x_0
is a G-factor map, where X_0= suppλ⊆ X.
We identify X_0 as a subset of G×_HX_0 via x_0↦[e,x_0].
§.§ Conditional expectation
Let s be an element in H. Denote by ℱ^s the sub-σ-field
of (X_0,λ) which consists of s-invariant
measurable sets, that is, ℱ^s={A∈ℬ(X_0):s.A=A}.
Denote by 𝔼_λ[·|ℱ^s]:L^2(X_0,λ)→ L^2(X_0,λ)
the conditional expectation on the space of s-invariant functions.
Given a function f∈ C(X), lift it to a function f̃
on G× X_0 by f̃(g,x_0)=f(g.x_0).
With a fixed element g∈ G, we view f̃(g,·) as a
function in L^2(X_0,λ) and take its conditional expectation
given ℱ^s.
Suppose for an element g∈ G, for any f∈ C(X),
𝔼_λ[f̃(g,·)|ℱ^s]=𝔼_λ[f̃(e,·)|ℱ^s]  λ-a.e.,
then the measure λ (viewed as a measure on X) is invariant
under g.
By the definition of conditional expectation, we have that
∫_X_0𝔼_λ[f̃(g,·)|ℱ^s]dλ=∫_X_0f̃(g,x_0)dλ(x_0)=∫_Xf(g.x_0)dλ(x_0)=∫_Xf(x)dg.λ(x).
In the second equality we used the fact that X_0= supp(λ).
Then (<ref>) implies ∫_Xf(x)dg.λ(x)=∫_Xf(x)dλ(x);
since this holds for all f∈ C(X), we conclude g.λ=λ.
§.§ Dynamics of the ⟨ s⟩-action
When the ⟨ s⟩-action on G/H has certain
contracting properties, in <cit.> it is shown that the resulting
conditional expectation given ℱ^s factors through ξ_0.
We now describe the conditions.
Recall that Int(s).g=sgs^-1.
Suppose we have an element s∈ H, subgroups U,V of G and
a normal subgroup W of H satisfying
(i) the map
p:U× V → G/H
(u,v) ↦ uvH
takes U× V homeomorphically to a ν_0-conull set in
G/H, and moreover p_∗(m_U× m_V) is
in the same measure class as ν_0.
(ii) Int(s) acts trivially on U, that is, s commutes
with elements in U,
(iii) Int(s^-1) acts as a contracting automorphism
on V; Int(s) acts as a contracting automorphism on W.
Note that (iii) implies that U∩ V={e} and (i) implies that
U∩ H=V∩ H={e}. Also note that since s is an element
in H, the measure λ is invariant under s.
For G=SL(3,ℝ), these assumptions are satisfied for example for
H=P=([ ∗ ∗ ∗; 0 ∗ ∗; 0 0 ∗; ]),
U=([ 1 0 0; ∗ 1 0; 0 0 1; ]),
V=([ 1 0 0; 0 1 0; ∗ ∗ 1; ]),
W=([ 1 0 ∗; 0 1 ∗; 0 0 1; ]),
and
s=([ e^-t_1 0 0; 0 e^-t_1 0; 0 0 e^-t_2; ]), for t_1<t_2.
When (i) - (iii) are satisfied, consider the following continuous
map
ξ :U× V× X_0→ X
(u,v,x_0)↦ uv.x_0,
which is related to the map ξ_0 by ξ(u,v,x_0)=ξ_0([uv,x_0]).
Since ν=ν_0∗λ, by condition (i) we have that the
image ξ(U× V× X_0) is a ν-conull
set in X. Denote by ℒ̃(X) the sub-σ-field
of the Borel σ-field of U× V× X_0 that consists
of (classes modulo null sets of ) lifts by ξ of measurable subsets
of X. Denote by L̃^∞(X)=L̃^∞(X,ν)
the subspace of L^∞(U× V× X_0,m_U× m_V×λ)
that consists of functions that are measurable with respect to ℒ̃(X)
. In other words, L̃^∞(X) consists of lift functions
in L^∞(X).
The following actions of the groups U and ⟨ s⟩
on U× V× X_0 preserve the product structure:
u_1.(u,v,x_0) =(u_1u,v,x_0), u_1∈ U,
s.(u,v,x_0) =(sus^-1,svs^-1,s.x_0)=(u, Int(s).v,s.x_0).
The map ξ is equivariant under U and ⟨ s⟩.
Note also that since U commutes with ⟨ s⟩,
U preserves the s-invariant σ-field ℱ^s.
For a continuous function f∈ C(X), consider the composition f∘ξ,
which is a continuous function on U× V× X_0. The proof
of <cit.> applies verbatim to the current setting
and implies that the conditional expectation 𝔼_λ[f̃(u,·)|ℱ^s]
is the limit of Cesaro averages under the ⟨ s⟩-action.
Under conditions (i), (ii) and (iii), we have
lim_N→∞1/N+1∑_n=0^Ns^n.(f∘ξ)(u,v,x_0)=𝔼_λ(f̃(u,·)|ℱ^s)(x_0),
where the converge is a.s. and in L^2. Moreover, the function
(u,v,x_0)↦𝔼_λ(f̃(u,·)|ℱ^s)(x_0)
is in L̃^∞(X).
This property allows to define a map
ℰ_s :C(X)→L̃^∞(X)
ℰ_sf(u,v,x_0) =lim_N→∞1/N+1∑_n=0^Ns^n.(f∘ξ)(u,v,x_0)=𝔼_λ(f̃(u,·)|ℱ^s)(x_0).
We will refer to the map ℰ_s as the Nevo-Zimmer operation
with (s,U,V,W) on C(X). The resulting function does
not depend on the v-coordinate, that is, ℰ_sf(u,v,x_0)=ℰ_sf(u,e,x_0).
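Since ℰ_s is defined by a Cesàro average along the ⟨s⟩-orbit, its behavior is already visible on a toy measure-preserving map. The snippet below is only an illustration, unrelated to the parabolic setting: for a rotation of a finite cycle the averages converge to the mean of f over each ⟨s⟩-orbit, which is exactly the conditional expectation onto the s-invariant σ-field.

def cesaro_average(f, s, x0, N):
    """(1/(N+1)) * sum_{n=0}^{N} f(s^n(x0)) for a map s acting on points."""
    total, x = 0.0, x0
    for _ in range(N + 1):
        total += f(x)
        x = s(x)
    return total / (N + 1)

# Toy example: X_0 = {0, ..., 5} with the uniform measure, s = rotation by 2.
# The s-invariant sigma-field is generated by the orbits {0, 2, 4} and {1, 3, 5}.
s = lambda x: (x + 2) % 6
f = lambda x: float(x)
for x0 in range(6):
    print(x0, cesaro_average(f, s, x0, 2999))
# Points of {0, 2, 4} give 2.0 and points of {1, 3, 5} give 3.0: the orbit
# averages, i.e. the value of E_lambda[f | F^s] at x0.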
Note the following immediate properties:
In the setting above, we have that for f∈ C(X),
(i) u.ℰ_s(f)=ℰ_s(u.f)
for u∈ U. It follows that ∫_Xfdu.λ=∫_X_0ℰ_sf(u,e,x_0)dλ(x_0).
(ii) If u∈ U is such that u.X_0=X_0, then ℰ_sf(u,e,x_0)=ℰ_sf(e,e,u.x_0).
(iii) Suppose h∈ H is an element such that either Int(s)
or Int(s^-1) contracts h to e, then ℰ_sf(u,v,h.x_0)=ℰ_sf(u,e,x_0).
(i). Since s is assumed to commute with U, we have that for
u∈ U, us^n.(f∘ξ)=s^n.((u.f)∘ξ).
The statement then follows from Proposition <ref>.
(ii). This follows from (f∘ξ)(u,e,x_0)=(f∘ξ)(e,e,u.x_0).
(iii). For a given u∈ U, the function ℰ_sf(u,e,·)
viewed as an element in L^2(X_0,λ), is invariant under
⟨ s⟩ in the sense that ℰ_sf(u,e,x_0)=ℰ_sf(u,e,s.x_0).
Apply the generalized Mautner lemma (<cit.>)
to the unitary representation of H on L^2(X_0,λ),
we have that ℰ_sf(u,e,·) is invariant under h
as well.
§.§ Three cases
Suppose (s,U,V,W) are such that conditions (i) - (iii)
are satisfied. When we apply the Nevo-Zimmer operation ℰ_s,
one of the following scenarios occurs. The first case is:
(I) The subgroup U preserves the measure λ.
As in the <cit.>, in the negation of (I), there are
two situations to consider, depending on whether ⟨ s⟩↷(X_0,λ)
is ergodic.
(II1) There exists a function f∈ C(X) and u∈ U such
that ∫_Xfdλ≠∫_Xfd(u.λ); and
for m_U-a.e. u'∈ U, the function x_0↦ℰ_sf(u',e,x_0)
is λ-constant.
The remaining case is
(II2) = ¬( I∨ II1). There exists a function
f∈ C(X) such that ∫_Xfdλ≠∫_Xfd(u.λ)
for some u∈ U. Moreover, for every such f, there is a m_U-positive
set of u∈ U where the function x_0↦ℰ_sf(u,e,x_0)
is not λ-constant.
Case (II1) is treated in <cit.>.
We briefly describe the argument to show existence of a nontrivial
homogeneous factor in this case. Take a function f∈ C(X) as
in the description of (II1). Since x_0↦ℰ_sf(u',e,x_0)
is λ-constant, we have ℰ_sf(u',e,x_0)=∫_X_0ℰ_sf(u',e,x_0)dλ(x_0)=∫ fdu'.λ.
Recall that we have the map ξ:U× V× X_0→ X where
ξ(u,v,x_0)=uv.x_0; and the projection map p:U× V× X_0→ G/H
given by (u,v,x_0)↦ uvH. Similar to the sub-σ-field
ℒ̃(X), denote by ℒ̃(G/H) the
sub-σ-field of the Borel σ-field of U× V× X_0
that consists of lifts by p of measurable subsets of G/H.
Then the function (u,v,x_0)↦ℰ_sf(u,e,x_0)=∫ fdu.λ
can be viewed as a non-constant function measurable with respect to
the intersection of σ-fields ℒ̃(X)∩ℒ̃(G/H).
Take the Mackey realization of ℒ̃(X)∩ℒ̃(G/H),
we obtain a nontrivial common factor of (X,ν) and (G/H,m_G/H).
Thus in this case we conclude that (X,ν) has a nontrivial homogeneous
factor.
Case (II2) is treated by considerations of the Gauss map in <cit.>.
By the second part of condition (iii), Int(s) acts as a contracting
automorphism on W⊲ H. As in the proof of <cit.>,
apply the generalized Mautner lemma to the unitary representation
of H on L^2(X_0,λ), we have that s-fixed functions
in L^2(X_0,λ) are fixed by W as well. In ℬ(X_0)
take the W-invariant sub-σ-algebra:
ℬ_W(X_0):={A∈ℬ(X_0):g.A=A for all g∈ W}.
In particular, ℱ^s⊆ℬ_W(X_0) by
the Mautner lemma. Since W is assumed to be normal in H, we
have that if A∈ℬ_W(X_0), then for h∈ H, h.A∈ℬ_W(X_0)
as well. Denote by X_0' the Mackey realization of the σ-algebra
ℬ_W(X_0), equipped with the measure λ' which
corresponds to the restriction of λ to ℬ_W(X_0).
Then X_0' is an H-space and W acts trivially on X_0'.
Note that by construction X_0'= suppλ'.
As in <cit.>, take the space (X',ν'),
which is the largest common G-factor of (X,ν) and (G×_HX_0',ν_0×λ')
G×_HX_0  ------->  X
    |                  |
    v                  v
G×_HX_0'  ------->  X'.
The existence of a function f as described in (II2) implies that
the sub-σ-field ℱ^s is nontrivial, it follows
that (X_0',λ') and (X',ν') are
nontrivial, see <cit.>. Note the following
property.
In Case (II2), the subgroup U does not preserve the measure λ',
which is viewed as a measure on X'.
Take a function f∈ C(X) such that ∫_Xfdλ≠∫_Xfd(u.λ)
for some u∈ U. The function x_0↦ℰ_sf(e,e,x_0),
which is measurable with respect to ℱ^s, thus also
ℬ_W(X_0), can be viewed as the lift of a function
ϕ∈ L^2(X_0',λ') to L^2(X_0,λ). Suppose
on the contrary u.λ'=λ'. Then in particular, u. suppλ'= suppλ'
which implies u.X_0=X_0. Apply Lemma <ref>, we have
that ℰ_sf(u,e,·) is the lift of u^-1.ϕ
to L^2(X_0,λ). Then
∫_Xf(x)dλ(x) =∫_X_0ℰ_sf(e,e,x_0)dλ(x_0)=∫_X'ϕ dλ'
=∫_X'ϕ(x')du.λ'(x')=∫_X_0ℰ_sf(u,e,x_0)dλ(x_0)=∫_Xf(x)du.λ(x),
which is a contradiction.
Since W acts trivially on X_0', we have that in the induced
system G×_HX_0', a point stabilizer contains a conjugate
of W. Passing to the factor X', it follows that a stabilizer
Stab_G(x') contains a conjugate of W for x'∈ X'.
§ MUTUAL INFORMATION AND ENTROPY FORMULAE
Let μ be an admissible measure on an lcsc group G. Consider
(B,ν_B) the Poisson boundary, and (X,η) a standard (G,μ)-system.
Throughout this appendix, we consider an intermediate factor of their
joining:
(X× B,ην_B)ψ→(Z,λ)ϱ→(X,η),
where the composition ϱ∘ψ is the natural coordinate
projection X× B→ X. The definition of the joining of two
stationary systems was given in Section <ref>.
See also <cit.> for a detailed treatment.
§.§ A consequence of Birkhoff's ergodic theorem
Let us write ψ̃ for the map X× G^ℕ→(Z,λ)
given by ψ̃(x,ω)=ψ(x, bnd(ω)).
In the setting above, we have
h_μ(Z,λ)=∫_X× G^ℕlogdω_1λ/dλ(ψ̃(x,ω))d(ηℙ_μ)
and
h_μ(X,η)=∫_X× G^ℕlogdω_1η/dη(x)d(ηℙ_μ).
The Furstenberg entropy of (Z,λ) is defined as
h_μ(Z,λ) =∫_G∫_Zlogdλ/dg^-1.λ(z)dλ(z)dμ(g)=∫_G∫_Zlogdgλ/dλ(g.z)dλ(z)dμ(g)
=∫_G∫_X× G^ℕlogdgλ/dλ(g.ψ̃(x,ω̃))d(ηℙ_μ)(x,ω̃)dμ(g)
=∫_G∫_G^ℕ∫_Xlogdgλ/dλ(g.ψ̃(x,ω̃))dη_ω̃(x)dℙ_μ(ω̃)dμ(g).
Now for ω=(ω_1,ω_2,…) in G^ℕ,
set T'ω=(ω_1^-1ω_2,ω_1^-1ω_3,…).
When ω has law ℙ_μ, then (ω_1,T'ω)
has the same law μ×ℙ_μ as (g,ω̃).
It follows that
h_μ(Z,λ)= ∫_G^ℕ∫_Xlogdω_1λ/dλ(ω_1.ψ̃(x,T'ω))dη_T'ω(x)dℙ_μ(ω)
= ∫_G^ℕ∫_Xlogdω_1λ/dλ(ψ̃(ω_1.x,ω'))dη_ω'(ω_1.x)dℙ_μ(ω) (as η_ω_1^-1ω'=ω_1^-1η_ω')
= ∫_G^ℕ∫_Xlogdω_1λ/dλ(ψ̃(x,ω))dη_ω(x)dℙ_μ(ω).
The second formula is proved similarly with ϱ∘ψ(x,ω)=x
in place of ψ.
Let (Z,λ) be an intermediate factor in (<ref>).
If (X,η) is an ergodic stationary system, then for ηℙ_μ-a.e.
(x,ω)∈ X× G^ℕ, we have
lim_n→∞1/nlogdω_nλ/dλ(ψ(x, bnd(ω)))=h_μ(Z,λ).
The telescoping argument is adapted from the proof of <cit.>.
Given ω, let g_k=ω_k-1^-1ω_k be its k-th
increment, with ω_0=id. The Radon-Nikodym derivative can
be rewritten as a product:
dω_nλ/dλ(ψ(x, bnd(ω))) =∏_k=1^ndω_kλ/dω_k-1λ(ψ(x, bnd(ω)))
=∏_k=1^ndg_kλ/dλ(ψ(ω_k-1^-1.x,ω_k-1^-1. bnd(ω)))
=∏_k=1^nd(T^k-1(x,ω))_1λ/dλ(ψ̃(T^k-1(x,ω)))
where T(x,ω)=(ω_1^-1x,(ω_1^-1ω_2,ω_1^-1ω_3,…))
is the skew transformation defined in Section <ref>.
Let f(x,ω):=logdω_1λ/dλ(ψ̃(x,ω)).
By Fact <ref>, we can apply Birkhoff's pointwise ergodic theorem
to T and deduce almost surely,
lim_n→∞1/nlogdω_nλ/dλ(ψ(x, bnd(ω)))=lim_n→∞1/n∑_k=1^nf(T^k-1(x,ω))=∫_X× G^ℕf(x,ω)dηℙ_μ=h_μ(Z,λ).
§.§ KL-divergence and mutual information
We review the definition of KL-divergence, record a useful inequality,
and recall several facts about mutual information due to Derriennic
<cit.>.
§.§.§ KL-divergence
Suppose α,β are probability measures on a measurable space
(Ω,ℬ) and 𝒫_n is a refining sequence
of finite partitions whose union generates ℬ. Given a
finite measurable partition 𝒫 of X, denote the relative
entropy of 𝒫 with measure α with respect to β
as
H_α∥β(𝒫):=∑_A∈𝒫α(A)logα(A)/β(A).
The Kullback-Leibler divergence of probability measures α
and β can be defined by (see e.g., <cit.>)
D(α∥β)=sup_nH_α∥β(𝒫_n).
When α is absolutely continuous with respect to β,
we have D(α∥β)=∫_Ωlog(dα/dβ)dα
by <cit.>. We refer to <cit.> for
a detailed account.
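On a fixed finite partition these quantities reduce to finite sums over the cells. The short sketch below is illustrative only: it evaluates H_{α∥β}(𝒫) from the cell weights and shows, on made-up numbers, that refining the partition can only increase the value, consistent with the supremum defining D(α∥β).

from math import log

def relative_entropy(alpha, beta):
    """H_{alpha||beta}(P) for probability vectors indexed by the cells of P."""
    return sum(a * log(a / b) for a, b in zip(alpha, beta) if a > 0)

# A two-cell partition ...
coarse_alpha, coarse_beta = [0.7, 0.3], [0.5, 0.5]
# ... and a refinement splitting each cell in two (weights chosen to be
# consistent with the coarse ones).
fine_alpha = [0.4, 0.3, 0.2, 0.1]
fine_beta  = [0.25, 0.25, 0.25, 0.25]

print(relative_entropy(coarse_alpha, coarse_beta))   # about 0.082
print(relative_entropy(fine_alpha, fine_beta))       # about 0.106, larger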
In the proof of Proposition <ref>, we will use the following
inequality, which is a consequence of the reverse Pinsker inequality
in <cit.>.
Let α,β,α',β' be probability measures on the
space X and let 𝒫 be a finite partition of X. Denote
by C=C_α,β(𝒫)=max_A∈𝒫{α(A)/β(A)}.
Then
H_α∥β(𝒫)-H_α'∥β'(𝒫)≤log(max_A∈𝒫β'(A)/β(A))+2C^1/2max_A∈𝒫|1-α'(A)/α(A)|.
Write M=C_α,β(𝒫). The difference in relative entropies
is
H_α∥β(𝒫)-H_α'∥β'(𝒫) =∑_A∈𝒫α(A)logα(A)/β(A)-α'(A)logα'(A)/β'(A)
=(∑_A∈𝒫α(A)logα(A)/β(A)-α'(A)logα'(A)/β(A))+∑_A∈𝒫α'(A)logβ'(A)/β(A)
=I+II.
Rewrite I as
I =(∑_A∈𝒫(1-α'(A)/α(A))α(A)logα(A)/β(A))-H_α'∥α(𝒫)
≤∑_A∈𝒫(1-α'(A)/α(A))α(A)logα(A)/β(A).
Split the sum into two parts: A is in 𝒫_+ (𝒫_-
resp.) if α(A)/β(A)≥1 (<1 resp.). By the reverse
Pinsker inequality we have
∑_A∈𝒫_+α(A)logα(A)/β(A)≤√(C)d_ TV(α,β).
Note that <cit.> is stated as with D(α∥β)
on the left-hand side in the inequality. For the convenience of the
reader, we briefly repeat its proof here to show (<ref>).
For 1≤ z≤ M, (1-M^-1)/log M· zlog z≤ z-1, summing
over 𝒫_+, we have
(1-M^-1)/log M∑_A∈𝒫_+β(A)α(A)/β(A)logα(A)/β(A)≤∑_A∈𝒫_+β(A)(α(A)/β(A)-1)=d_ TV(α,β).
For x∈(0,1), √(x)≤(x-1)/log x, then the inequality
(<ref>) follows.
Since H_α∥β(𝒫)≥0, it follows that
-∑_A∈𝒫_-α(A)logα(A)/β(A)≤∑_A∈𝒫_+α(A)logα(A)/β(A).
Plugging back in I, we have then
I ≤max_A∈𝒫|1-α'(A)/α(A)|(∑_A∈𝒫_+α(A)logα(A)/β(A)-∑_A∈𝒫_-α(A)logα(A)/β(A))
≤2√(C)max_A∈𝒫|1-α'(A)/α(A)|.
The second part is bounded by
II=∑_A∈𝒫α'(A)logβ'(A)/β(A)≤∑_A∈𝒫α'(A)log(max_A∈𝒫β'(A)/β(A))=log(max_A∈𝒫β'(A)/β(A)).
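The inequality can also be exercised numerically; the experiment below is illustrative only and plays no role in the proofs. It draws random weight vectors on a five-cell partition, evaluates both sides of the bound, and reports the largest observed difference, which is nonpositive in line with the lemma.

import random
from math import log, sqrt

def rel_entropy(alpha, beta):
    return sum(a * log(a / b) for a, b in zip(alpha, beta) if a > 0)

def random_prob_vector(k, rng):
    w = [rng.random() + 1e-3 for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

rng, k, worst = random.Random(0), 5, float("-inf")
for _ in range(10000):
    alpha,  beta  = random_prob_vector(k, rng), random_prob_vector(k, rng)
    alpha2, beta2 = random_prob_vector(k, rng), random_prob_vector(k, rng)
    C   = max(a / b for a, b in zip(alpha, beta))
    lhs = rel_entropy(alpha, beta) - rel_entropy(alpha2, beta2)
    rhs = log(max(b2 / b for b2, b in zip(beta2, beta))) \
          + 2 * sqrt(C) * max(abs(1 - a2 / a) for a2, a in zip(alpha2, alpha))
    worst = max(worst, lhs - rhs)
print(worst)   # largest value of lhs - rhs observed over the random trials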
§.§.§ Mutual information
The mutual information of two random variables X and Y of laws
P(X) and P(Y) is the KL-divergence of their joint law with respect
to their product law:
I(X,Y):=D(P(X,Y) P(X)⊗ P(Y)).
Given a probability space (Ω,ℬ,ℙ), a random
variable X:Ω→ S and a sub-σ-field ℱ
of ℬ, denote by P(X|ℱ) the conditional law
of X given ℱ. The mutual information of X and ℱ
is given by
I(X,ℱ)=∫_Ω∫_SlogdP(X|ℱ)/dP(X)dP(X|ℱ)dℙ.
These formulae are consistent: denote by σ(Y) the σ-field
generated by Y, then I(X,Y)=I(X,σ(Y)).
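For discrete random variables the definition unwinds into a finite sum, computed directly below; the joint law in the example is made up and only serves to illustrate that I(X,Y) is positive for a correlated pair and vanishes exactly under independence.

from math import log

def mutual_information(joint):
    """I(X, Y) for a joint law given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

# A correlated pair of bits (hypothetical numbers):
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(mutual_information(joint))   # about 0.193 > 0; it would be 0 for independence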
Recall from Subsection <ref> that (ξ_n^x)_n=0^∞
is the Markov chain obtained from a Doob transformed trajectory of
law ℙ_μ^π(x) by taking the quotient map G→ L_x\ G.
Following <cit.>, we consider the mutual information I(ξ_1^x,𝒯_x)
between the chain at time one and its tail σ-field 𝒯_x.
In the setting of Subsection <ref>, we collect some
known facts regarding entropy and mutual information.
Let (Z,λ) be a Poisson bundle over a standard system (X,η)π→(Y,ν)
satisfying assumption AssumpS(S).
(i) I(ξ_1^x,ξ_n^x)≤ I(ℙ_μ,1^π(x),ℙ_μ,n^π(x)).
(ii) Integrated over (Y,ν), we have
∫_Y I(ℙ_μ,1^y,ℙ_μ,n^y)dν(y)= I(ℙ_μ,1,ℙ_μ,n)-h_μ(Y,ν).
(iii) The sequence I(ξ_1^x,ξ_n^x) is
non-increasing and
I(ξ_1^x,𝒯_x)=inf_nI(ξ_1^x,ξ_n^x)=lim_nI(ξ_1^x,ξ_n^x).
(iv) The mutual information can be written as
I(ξ_1^x,𝒯_x) =∫_G^ℕlogd(ω_1λ)_x/dλ_x(ψ_x(ω))dℙ_μ^π(x)(ω)
=∫_G∫_Z_xlogd(gλ)_x/dλ_x(z)d(gλ)_x(z)φ_g(π(x))dμ(g).
(i). Recall that ℙ_μ^π(x) is the law of the Doob
transformed random walk (or rather Markov chain) on G^ℕ
conditioned by {β_Y(ω)=π(x)}. The law
of (ξ_1^x,ξ_n^x) can be viewed as the restriction
of the joint law (ℙ_μ,1^π(x),ℙ_μ,n^π(x))
on G× G to the sub-σ-field of L_x-invariant subsets,
that is, measurable sets A⊆ G× G such that L_xA=A.
It then follows from general properties of KL-divergence with respect
to restrictions of measures, see e.g., <cit.>.
(ii). Denote by P the joint law (ℙ_μ,1,ℙ_μ,n)
and by Q the product law ℙ_μ,1×ℙ_μ,n
of times 1 and n of a random walk trajectory of law ℙ_μ.
Similarly for y in Y, denote by P^y the joint law (ℙ_μ,1^y,ℙ_μ,n^y)
and by Q^y the product law ℙ_μ,1^y×ℙ_μ,n^y
for a Doob transformed trajectory of law ℙ_μ^y.
We have dP^y/dQ^y(g,h)=dP/dQ(g,h)1/φ_g(y).
Thus
∫_Y I(ℙ_μ,1^y,ℙ_μ,n^y)dν(y) =∫_Y∫_G× GlogdP/dQdP^ydν(y)-∫_Y∫_G× Glogφ_g(y)dP^y(g,h)dν(y)
=∫_G× GlogdP/dQdP-∫_Y∫_G× Glogφ_g(y)dμ(g)dμ^n-1(g^-1h)φ_h(y)dν(y)
= I(ℙ_μ,1,ℙ_μ,n)-∫_Y∫_Glogφ_g(y)dμ(g)dg.ν(y)= I(ℙ_μ,1,ℙ_μ,n)-∫_Ylogφ_g(y)dν(y).
In the last line we used the harmonicity of φ_h, which
implies ∫_Gφ_h(y)dμ^n-1(g^-1h)=φ_g(y).
We conclude by Lemma <ref>.
(iii) By <cit.>, this is a consequence of
the fact that (ξ_n^x) has the Markov property,
according to Proposition <ref>.
(iv). By <cit.> one has
I(ξ_1^x,𝒯_x)=∫_(L_x\ G)^ℕlogdℙ_μ,x,ξ_1^x|_𝒯_x/dℙ_μ,x,e|_𝒯_xdℙ_μ,x,e,
where the notation Q|_ℱ denote the restriction of the
measure Q to a sub-σ-field ℱ. By Corollary <ref>,
we may replace in (<ref>) the tail σ-field
𝒯_x by the invariant σ-field ℐ_x
of (ξ_n^x). Lemma <ref> gives that
for any g in G,
ℙ_μ,x,g|_ℐ_x=θ(x,·)_∗ℙ_μ,x,g=(gλ)_x.
Finally Lemma <ref> gives θ(x,·)_∗ℙ_μ,x,e=θ(x,·)_∗ϑ_∗ℙ_μ^π(x)=ψ(x,·)_∗ℙ_μ^π(x).
It follows that (<ref>) implies
I(ξ_1^x,𝒯_x)=∫_G^ℕlogd(ω_1λ)_x/dλ_x(ψ(x,ω))dℙ_μ^π(x)(ω).
The second equality follows by conditioning on the first step ω_1=g
of the Doob transformed random walk, recalling that (gλ)_x
is the hitting distribution in the fiberwise Poisson boundary Z_x.
§.§.§ Proof of Proposition <ref>
Recall that (Z,λ) fits into (X× G^ℕ,ηℙ_μ)ψ→(Z,λ)ϱ→(X,η).
By Lemma <ref>, the Furstenberg entropy of (Z,λ)
is
h(Z,λ)=∫_G^ℕ∫_Xlogdω_1.λ/dλ(ψ(x,ω))dη_ω(x)dℙ_μ(ω).
Next we disintegrate λ over (Z,λ)ϱ→(X,η)
and denote it as λ=∫_Xλ_xdη(x). Then the
Radon-Nikodym derivative can be written as
dg.λ/dλ(ψ(x,ω))=dg.η/dη(x)d(g.λ)_x/dλ_x(ψ(x,ω)).
Thus we have
h(Z,λ)=∫_G^ℕ∫_Xlogdω_1.η/dη(x)dη_ω(x)dℙ_μ(ω)+ ∫_G^ℕ∫_Xlogd(ω_1λ)_x/dλ_x(ψ(x,ω))dη_ω(x)dℙ_μ(ω)= I+ II.
By Lemma <ref> and as π:X→ Y is measure preserving,
we have that I=h_μ(X,η)=h_μ(Y,ν). By Lemma <ref>,
we have
II=∫_X∫_G^ℕlogd(ω_1λ)_x/dλ_x(ψ_x(ω))dℙ_μ^π(x)(ω)dη(x),
where ℙ_μ^y is the law of the Doob transformed random
walk on G conditioned on {β_Y(ω)=y}.
By Lemma <ref> (iv), we have II=∫_XI(ξ_1^x,𝒯_x)dη(x)=I(ξ_1,𝒯|X,η).
§.§ Entropy formulae and Shannon's theorem for countable groups
In this subsection we assume that G is a countable group. For a
convolution μ-random walk (ξ_n)_n=0^∞
on a discrete group, it is classical that
I(ξ_1,ξ_n)=H(μ^(n))-H(μ^(n-1))
where H(p)=-∑_g∈ Gp(g)log p(g) is the Shannon entropy of
the discrete probability measure p on G. In the case of a countable
group G endowed with a finite entropy probability measure μ,
we will obtain a bundle version of this formula, stated in Theorem
<ref>.
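The classical identity is easy to verify on a toy example before passing to bundles. The sketch below is illustrative only (the group ℤ/5ℤ and the step law are made up): it computes I(ξ_1,ξ_n) from the joint law of (ξ_1,ξ_n) and compares it with H(μ^(n))-H(μ^(n-1)); the two numbers agree up to floating-point rounding.

from math import log

def convolve(mu, nu, n):
    """Convolution of two step distributions on the cyclic group Z/nZ."""
    out = {}
    for g, p in mu.items():
        for h, q in nu.items():
            out[(g + h) % n] = out.get((g + h) % n, 0.0) + p * q
    return out

def shannon(p):
    return -sum(q * log(q) for q in p.values() if q > 0)

def check_identity(mu, n, steps):
    """Return (I(xi_1, xi_steps), H(mu^(steps)) - H(mu^(steps-1))) on Z/nZ."""
    mu_prev = mu
    for _ in range(steps - 2):
        mu_prev = convolve(mu_prev, mu, n)      # mu^(steps-1)
    mu_full = convolve(mu_prev, mu, n)          # mu^(steps)
    info = 0.0
    for g, p1 in mu.items():                    # xi_1 = g
        for x, p2 in mu_prev.items():           # xi_steps = g + x
            info += p1 * p2 * log(p2 / mu_full[(g + x) % n])
    return info, shannon(mu_full) - shannon(mu_prev)

mu = {0: 0.5, 1: 0.3, 4: 0.2}                   # hypothetical step law on Z/5Z
print(check_identity(mu, 5, 3))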
§.§.§ Proof of Theorem <ref>
Let us denote
I(ξ_1,ξ_n|X):=∫_XI(ξ_1^x,ξ_n^x)dη(x) and H(ξ_n|X):=∫_XH(ξ_n^x)dη(x).
Assume G is a countable group
endowed with a probability measure μ of finite Shannon entropy.
Let (X,η) be a standard system satisfying assumption AssumpS(S). Then
I(ξ_1,ξ_n|X)=H(ξ_n|X)-H(ξ_n-1|X).
The proof follows standard calculations as in <cit.>.
The sequence (ξ_n^x)_n=0^∞ is a Markov chain in
the coset space L_x\ G, started at the identity coset
L_x and with transition probabilities given by Proposition <ref>.
By definition, we have
I(ξ_1^x,ξ_n^x)=∑_ξ_1^x∑_ξ_n^xlog(P_μ,x^n-1(ξ_1^x,ξ_n^x)/P_μ,x^n(L_x,ξ_n^x))P_μ,x^1(L_x,ξ_1^x)P_μ,x^n-1(ξ_1^x,ξ_n^x)
We can split the logarithm to get I(ξ_1^x,ξ_n^x)= I_x+ II_x,
where I_x=H(ξ_n^x), so ∫_X I_xdη(x)=H(ξ_n|X).
There remains to show ∫_X II_xdη(x)=-H(ξ_n-1|X),
where
II_x=∑_s∈ Gμ(s)dsν/dν(π(x))∑_ξ_n^x∈ L_x\ Glog(P_μ,x^n-1(L_xs,ξ_n^x))P_μ,x^n-1(L_xs,ξ_n^x)
as the transition probabilities satisfy
P_μ,x^1(L_x,ξ_1^x)=∑_{s:ξ_1^x=L_xs}μ(s)dsν/dν(π(x)).
Now by equivariance, see (<ref>) in the proof of Proposition <ref>,
P_μ,x^n-1(L_xs,L_xh)=P_μ,s^-1x^n-1(L_s^-1.x,L_s^-1.xs^-1h).
This is the law of ξ_n-1^s^-1.x, in other terms of the
position at time n-1 of a coset trajectory in the fiber over s^-1.x.
We get
II_x=-∑_s∈ Gμ(s)dsν/dν(π(x))H(ξ_n-1^s^-1.x).
Use the disintegration η=∫_Yη^ydν(y) over the measure
preserving extension π:X→ Y to get
∫_X II_xdη(x)=-∑_s∈ Gμ(s)∫_Y∫_π^-1(y)H(ξ_n-1^s^-1.x)dη^y(x)dsν/dν(y)dν(y)=-∫_XH(ξ_n-1^x)dη(x).
Lemma <ref> implies the random walk entropy formula stated
in Theorem <ref>:
By Lemma <ref> (iii), we have
I(ξ_1,𝒯|X)=∫_X I(ξ_1^x,𝒯_x)dη(x)=∫_Xlim_n→∞ I(ξ_1^x,ξ_n^x)dη(x)=lim_n→∞∫_X I(ξ_1^x,ξ_n^x)dη(x)
as the convergence is dominated by the integrable function
H(ξ_1^x)= I(ξ_1^x,ξ_1^x)≥inf_n I(ξ_1^x,ξ_n^x)=lim_n→∞ I(ξ_1^x,ξ_n^x).
Then Proposition <ref> and Lemma <ref>
give the first formula. The second formula follows by Cesaro average
since H(ξ_n|X)=∑_k=1^nH(ξ_k|X)-H(ξ_k-1|X) and
H(ξ_k|X)-H(ξ_k-1|X)= I(ξ_1,ξ_k|X) is a non-increasing
sequence tending to I(ξ_1,𝒯|X).
§.§.§ Shannon's theorem
In the case of a countable group G, Kingman's subadditive ergodic
theorem implies:
Assume G
is a countable group endowed with a probability measure μ of
finite Shannon entropy. Let (Z,λ) be a standard system satisfying
assumption AssumpS(S). Then
lim_n→∞-1/nlog P_μ,x^n(L_x,L_xω_n)=lim_n→∞1/nH(ξ_n|X)=h_μ(Z,λ)-h_μ(X,η),
ηℙ_μ-a.s. and in L^1 limits.
By Fact <ref>, the skew transformation
T:(x,(ω_1,ω_2,…))↦(ω_1^-1.x,(ω_1^-1ω_2,ω_1^-1ω_3,…))
is ergodic p.m.p. on the space (X× G^ℕ,ηℙ_μ).
Consider the functions f_n:X× G^ℕ→ℝ
given by f_n(x,ω)=P_μ,x^n(L_x,L_xω_n).
Then
f_n+m(x,ω) ≥ P_μ,x^n(L_x,L_xω_n)P_μ,x^m(L_xω_n,L_xω_n+m)
=P_μ,x^n(L_x,L_xω_n)P_μ,ω_n^-1.x^m(L_ω_n^-1.x,L_ω_n^-1.xω_m) (<ref>)
=f_n(x,ω)f_m(T^n(x,ω)).
Kingman's subadditive ergodic theorem applied to -log f_n, gives
a constant h with lim_n→∞-1/nlog f_n=h,
both ηℙ_μ-a.s.
and in L^1. Moreover, Lemma <ref> gives
∫_X× G^ℕ-log f_ndηℙ_μ =∫_Y∫_π^-1(y)∫_G^ℕ-log(P_μ,x^n(L_x,L_xω_n))dℙ_μ^y(ω)dη^y(x)dν(y)
=∫_Y∫_π^-1(y)∑_ω_n∈ G-log(P_μ,x^n(L_x,L_xω_n))P_μ,x^n(L_x,L_xω_n)dη^y(x)dν(y)
=∫_XH(ξ_n^x)dη(x)=H(ξ_n|X).
Therefore
h=lim_n→∞1/n∫_X× G^ℕ-log f_ndηℙ_μ=lim_n→∞1/nH(ξ_n|X)=h_μ(Z,λ)-h_μ(X,η).
by Theorem <ref>.
Note that the proof above shows subadditivity of the sequence n↦ H(ξ_n|X).
§.§ A proof of the ray criterion
It suffices to show that h(M,λ̅)=h(Z,λ). Suppose
on the contrary h(Z,λ)-h(M,λ̅)=δ>0.
Take ϵ=1/3δ, we will omit the reference to
ϵ in the notation for A_n^ϵ in what follows.
By Corollary <ref>, for any p>0, there is a subset V
of W_Ω with measure ℙ_μ(V)>1-p
and a constant N=N_ϵ,p<∞ such that for all (x,L_xω)∈ V
and n≥ N, we have
P_μ,x^ζ,n(L_x,L_xω_n)≤ e^-(δ-ϵ)n=e^-2nϵ.
For short, let us denote (x,ζ)=θ_M(x,L_xω). Take
a union bound over time m in the expression of probability in (i),
we have
ℙ_μ(∃ m≥ n: L_xω_m∈ A_n(x,ζ)|(x,ζ)∈ U)
≤1/λ̅(U)ℙ_μ(∃ m≥ n: L_xω_m∈ A_n(x,ζ),(x,ζ)∈ U,(x,L_xω)∈ V)
+1/λ̅(U)ℙ_μ((x,ζ)∉ V)
≤p/λ̅(U)+∑_m=n^∞ℙ_μ(L_xω_m∈ A_n(x,ζ),(x,L_xω)∈ V|(x,ζ)∈ U).
To estimate the conditional probabilities that appear as summands,
we use the disintegration ℙ_μ=∫_Mℙ_μ,x^ζdλ̅(x,ζ).
We have for (x,ζ)∈ U and m≥ N,
ℙ_μ (L_xω_m∈ A_n(x,ζ),(x,L_xω)∈ V|(x,ζ)∈ U)
≤1/λ̅(U)∫_M 1_U(x,ζ)ℙ_μ,x^ζ(L_xω_m∈ A_n(x,ζ),(x,L_xω)∈ V)dλ̅(x,ζ)
≤|A_n(x,ζ)|e^-2mϵ by (<ref>).
Condition (ii) gives the upper bound |A_n(x,ζ)|≤ e^3/2nϵ
for large n. Then
∑_m=n^∞ℙ_μ(L_xω_m∈ A_n(x,ζ),(x,L_xω)∈ V|(x,ζ)∈ U)≤ ce^-1/2ϵ n.
Since p can be arbitrarily small and λ̅(U)>0, we
get the limit
lim_n→∞ℙ_μ(∃ m≥ n: L_xω_m∈ A_n(x,ζ)|(x,ζ)∈ U)=0.
This contradicts the strictly positive limit in (i).
Jérémie Brieussel — Université de Montpellier
— [email protected]
Tianyi Zheng — UC San Diego
— [email protected]
|
http://arxiv.org/abs/2307.00297v1
|
20230701104630
|
Fields with few small points
|
[
"Nuno Hultberg"
] |
math.NT
|
[
"math.NT",
"11G50, 14G40, 11R04"
] |
Fields with few small points
Nuno Hultberg
============================
Let X be a projective variety over a number field K endowed with a height function associated to an ample line bundle on X. Given an algebraic extension F of K with a sufficiently big Northcott number, we can show that there are finitely many cycles in X_ of bounded degree defined over F. Fields F with the required properties were explicitly constructed in <cit.> and <cit.>, motivating our investigation. We point out explicit specializations to canonical heights associated to abelian varieties and selfmaps of ℙ^n. As a crucial tool, we introduce a refinement of Northcott's theorem.
There have recently been advances on the study of height properties of algebraic extensions of ℚ in <cit.> and <cit.>. Let 𝒩 denote the Northcott number with respect to the logarithmic Weil height. The key result of their work is the following theorem.
[Theorem 1.3 <cit.>] For every t ∈ [0,∞] there exist sequences of prime numbers (p_i)_i ∈ℕ, (q_i)_i ∈ℕ, and (d_i)_i ∈ℕ such that the field F = ℚ((p_i/q_i)^1/d_i| i ∈ℕ) satisfies 𝒩(F) = t.
The full strength of this result is not necessary for our purposes. Instead we opt for the simpler construction of <cit.>.
[Theorem 1.3 <cit.>] For every t ∈ [0,∞) there exist sequences of prime numbers (p_i)_i ∈ℕ and (d_i)_i ∈ℕ such that p_i^1/d_i converges to exp(2t) and the p_i are strictly increasing.
Given such a sequence, the field F = ℚ(p_i^1/d_i| i ∈ℕ) satisfies t ≤𝒩(F) ≤ 2t.
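To see where the lower bound comes from (a heuristic remark only, not part of the cited proof), note that the generators themselves satisfy
h(p_i^1/d_i) = (log p_i)/d_i →2t,
so the evident elements of small height in F accumulate only near 2t; the content of the theorem is that the remaining elements of F cannot force the Northcott number below t.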
For an arbitrary number field K it is not known whether there exists a field extension F of K such that 𝒩(F) = t for a specified t. We can, however, show the abundance of extensions of K with large Northcott number as a consequence of the above theorem.
Let C >0 be a constant and K a number field. Then there exist uncountably many algebraic extensions F of K such that 𝒩(F) > C.
For fields satisfying the Northcott property the finiteness of cycles of bounded degree and height is known. It is natural to ask whether a similar result can be extended to fields with known Northcott number.
Let (X,L) be a pair consisting of a variety over a number field K and a line bundle on said variety. In order to state our theorems more elegantly, we write D(V) = (dim(V) + 1)deg(V) for homogeneous cycles V on X. The line bundle implicit in this notation will be clear from context. Going forward, all cycles will be assumed homogeneous and effective throughout the article.
Let X be a projective scheme over a number field K endowed with an admissible adelically metrized line bundle L̅ whose underlying line bundle L is ample. Let d ∈ℕ and C > 0 be constants. Then there exists a constant R > 0 such that, for every algebraic extension F of K whose Northcott number satisfies 𝒩(F) > d(C + R), we obtain the following.
There are only finitely many F-rational cycles V on X such that D(V) ≤ d and h_L̅(V) < CD(V).
Regardless of this theorem, we cannot expect to have only finitely many subvarieties defined over even a number field K, as the Northcott property holds only for subvarieties of bounded degree. An example of the failure of the Northcott property without a bound on the degree is given by the subvarieties {(z,z^n)}⊆ℙ^2: they are all distinct, defined over the base field, and have canonical height 0.
We will now give some specializations of interest with explicit constants.
Consider ℙ^n over a number field K endowed with the canonical toric height ĥ. Let d ∈ℕ and C > 0 be constants. Let F be an extension of K, such that its Northcott number satisfies
𝒩(F) > d(C + 7/2 nlog2 + ∑_i = 1^n 1/2i +log2).
Then there are only finitely many F-rational cycles V on ℙ^n_K such that D(V) ≤ d and ĥ(V) < CD(V).
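For orientation, here is a sample numerical instance of the hypothesis (with illustrative values only): for points in ℙ^2, i.e. n = 2 and d = 1, and C = 1, the condition reads
𝒩(F) > 1 + 7log2 + (1/2 + 1/4) + log2 = 7/4 + 8log2 ≈ 7.30,
and for such F the theorem leaves only finitely many points P ∈ℙ^2(F) with ĥ(P) < 1, where for points ĥ agrees with the usual logarithmic Weil height.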
Let A be an abelian variety of dimension g over a number field K endowed with an ample symmetric line bundle ℒ. Let L denote the extension of K generated by
( A A A^∨),
where p_ℒ denotes the polarization morphism associated to ℒ. Then there is an embedding Θ of A into ℙ^n defined over L with associated line bundle ℒ^⊗ 16. Denote by h_2 the l^2-logarithmic Weil height and by ĥ_ℒ the canonical height associated to the group structure of A.
Let d ∈ℕ and C > 0 be constants. If F is an extension of L, such that its Northcott number satisfies
𝒩(F) > d/16(C + 4^g+1h_2(Θ_ℒ^⊗16(0_A)) + 3g log2 + ∑_i = 1^n 1/2i+log2 ),
then there are only finitely many F-rational cycles V on A_L such that D(V) ≤ d and ĥ_ℒ(V) < CD(V). In particular, there are only finitely many torsion points and abelian subvarieties with D(V) ≤ d defined over F.
A similar result may be obtained for dynamical systems on projective space.
Let f: ℙ^n →ℙ^n be a selfmap of degree D ≥ 2, defined over a number field K. Denote by ĥ the canonical height associated to f and the tautological line bundle. Let d ∈ℕ and C > 0 be constants. Let F be an extension of K, such that its Northcott number satisfies 𝒩(F) > d(C + C_1(n,D)h(f) + C_2(n,D) + ∑_i = 1^n 1/2i),
where h(f) is the height of the coefficients of f as a projective tuple and
C_1(n,D)=5nD^n+1, C_2(n,D)=3^n n^n+1(2D)^n2^n+4D^n.
Then there are only finitely many F-rational effective divisors V on ℙ^n_K such that deg(V) ≤ d and ĥ(V) < CD(V). In particular, there are only finitely many preperiodic hypersurfaces of degree ≤ d defined over F.
Based on the ideas in <cit.>, a result that is linear in deg(V) should be possible in any codimension. At the present moment we may use <cit.>, which yields a bound exponential in deg(V).
If we restrict to geometrically irreducible closed subsets we can improve the bound on the Northcott number by dlog 2 in Theorems <ref>, <ref> and by dlog 2 /16 in Theorem <ref>. The statement of Theorem <ref> cannot be improved.
In the first section we introduce Northcott numbers and their behaviour under field extension. Lastly we deduce Lemma <ref>.
The second section will deal with various notions of height and the bounds on their differences. At the end we will see how Theorems <ref> and <ref> follow from these bounds.
The third section contains the applications to abelian varieties and dynamical systems on projective space.
At last, we discuss other approaches that may be used to improve constants in the results.
§ ACKNOWLEDGEMENTS
I thank Fabien Pazuki for his guidance and mathematical discussions. I am specially grateful for his suggestion to consider also positive dimensional subvarieties and pointing me to references. I also thank Desirée Gijón Gómez for helpful comments on drafts of this article.
§ NORTHCOTT NUMBERS
In this section, we introduce Northcott numbers of subsets of ℚ̅, which allows us to refine Northcott's theorem (see <cit.>) to a statement on Northcott numbers that we call the Northcott inequality. We conclude the section with a proof of Lemma <ref>.
[Northcott number] For a subset S ⊆ℚ̅ of the algebraic numbers we define the Northcott number of S with respect to a function f: ℚ̅→ [0,∞) as
𝒩_f(S) = inf{t ∈[0,∞) | # {α∈S; f(α)< t} = ∞}.
We follow the convention that inf∅ = ∞.
We call 𝒩_f(S)∈ [0,∞] the Northcott number of S.
Our main focus is on the case that f=h is the logarithmic Weil height. In this case, we omit the h from the notation.
Let K be a number field. Then by Northcott's theorem 𝒩(K) = ∞. On the other hand, 𝒩(ℚ̅) = 0.
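For instance (an illustration only), the set S = {2^1/n | n ∈ℕ} already satisfies 𝒩(S) = 0: indeed h(2^1/n) = (log 2)/n → 0, so every sublevel set {α∈ S ; h(α) < t} with t > 0 is infinite. Together with the roots of unity, which all have height 0, this explains 𝒩(ℚ̅) = 0.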
We now state and prove the Northcott inequality.
[Northcott inequality] Let F be a field with Northcott number 𝒩(F) = C. Then the set of algebraic numbers X of degree ≤ d over F satisfies 𝒩(X)≥ (C-dlog 2)/(d2^d).
Let ϵ > 0. Let Y_ϵ be the set of algebraic numbers x of height ≤ (C-dlog 2)/(d2^d)-ϵ = B_ϵ satisfying [F(x):F]≤ d. It is enough to show that the set Y_ϵ is finite for any ϵ > 0. Let x∈ Y_ϵ. Then the at most d conjugates of x over F are also elements of Y_ϵ. The coefficients of the minimal polynomial of x over F are elementary symmetric functions in these conjugates. We can bound the height of the coefficients by
d2^dB_ϵ+ dlog2 = C - ϵ d2^d
using the properties of the height (see <cit.>). Let x, x_1, …, x_r ∈ℚ̅ and σ∈Gal(ℚ̅/ℚ); then
h(σ(x)) = h(x)
h(x_1+ … + x_r) ≤ h(x_1) + … + h(x_r) + log r
h(x_1 … x_r) ≤ h(x_1) + … + h(x_r).
However, by assumption on F, there are only finitely many such coefficients, thus showing the finiteness of Y_ϵ.
The optimal bound we may obtain with these methods is min_0 ≤ j ≤ d (C-log binom(d,j))/(j·binom(d,j)).
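As an illustration of the refinement (a short computation, not used later): for d = 2 the term j = 0 imposes no condition, j = 1 gives (C - log 2)/2 and j = 2 gives C/2, so the refined bound equals (C - log 2)/2, which exceeds the bound (C - 2log 2)/8 from the Northcott inequality above as soon as C > (2/3)log 2.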
In <cit.> they notice that the house shares the crucial properties necessary to perform the proof of Theorem <ref>. By combining the ideas of <cit.> and Theorem <ref> we obtain the following.
Let f:ℚ̅→ [0,∞) be a function. Denote by 𝒩_f(S) the Northcott number of a subset S ⊆ℚ̅ with respect to f. Suppose that f satisfies
f(σ(x)) = f(x)
f(x_1+x_2) ≤ F(f(x_1),f(x_2))
f(x_1 x_2) ≤ F(f(x_1),f(x_2))
for some continuous function F:ℝ^2 → [0,∞) and all x_1,x_2 ∈ℚ̅ and σ∈Gal(ℚ̅/ℚ). Then there exists a continuous function G:[0,∞] → [0,∞] with G(∞) = ∞ depending only on F and an auxiliary natural number d such that the following holds. Let U ⊆ℚ̅ and let S⊆ℚ̅ be the subset of numbers satisfying monic polynomials with coefficients in U of degree bounded by d. Then
𝒩_f(S) ≥ G(𝒩_f(U)).
Let us be more explicit in the case of the house. The house is defined as follows:
house: ℚ̅→ [0,∞)
α↦max_σ:ℚ(α)↪ℂ|σ(α)|.
Let F be a field such that 𝒩_house(Ø_F) = C. Then the set of algebraic integers X of degree ≤ d over F satisfies 𝒩_house(X)≥ C^1/d/2^d.
The proof is analogous to that of Theorem <ref> using the properties
house(σ(x)) = house(x)
house(x_1+x_2)≤ house(x_1)+ house(x_2)
house(x_1 x_2)≤ house(x_1)·house(x_2)
for x_1, x_2 ∈ℚ̅ and σ∈Gal(ℚ̅/ℚ).
We may improve the constant to min_0≤ j≤ d C^1/j/binom(d,j).
This approach, of course, can be used to upper bound Northcott numbers, as well.
Suppose a field K has a field extension F of degree d satisfying 𝒩(F)=C. Then 𝒩(K)≤ Cd2^d + dlog 2.
Again we may improve the bound. Here the best possible bound is min_0≤ j≤ d( j·binom(d,j)·C + log binom(d,j)).
We may apply this to the field extension ℚ^tr(i)/ℚ^tr of the totally real numbers. In <cit.> it is shown that
α_k = ((2-i)/(2+i))^1/k
is a sequence of points with height tending to zero in ℚ^tr(i). In particular, 𝒩_h(ℚ^tr(i)) = 0. Hence 𝒩(ℚ^tr) ≤log 2 ≈ 0.693. The best known bound is the one in <cit.> (𝒩(ℚ^tr) ≤ 0.2732…).
The bound in the specific case of the totally real numbers is not sharp and may be improved. Using that the conjugates of α_k equidistribute around the unit circle, we may see that h(α_k + ᾱ_k) →∫_0^1 max{log|2cos(π x)|, 0} dx ≈ 0.323.[This constant also appears as the Mahler measure of the polynomial 1+x+y, computed by Smyth in <cit.>, and as the Arakelov-Zhang pairing ⟨ x^2, 1-(1-x)^2⟩ in <cit.>. It equals 3√(3)/4πL(2,χ), where χ is the nontrivial quadratic character modulo 3.]
We can now prove Lemma <ref>.
When the ground field is ℚ, this follows immediately by the work of <cit.> or <cit.> quoted at the beginning of the introduction.
Consider now the case of an arbitrary number field K and write d = [K:ℚ]. We may use Theorem <ref> to obtain that for fields F satisfying 𝒩(F) > d2^dD+dlog 2 the composite field KF satisfies 𝒩(KF) > D. Over ℚ, there are uncountably many fields satisfying 𝒩(F) > d2^dD+dlog 2. Hence it suffices to show that the fields KF are distinct for distinct F.
For this let us consider fields of the form F = ℚ(p_i^1/d_i| i ∈ℕ), where all p_i and d_i are distinct primes. We can find an extension F of the above form that further satisfies that p_i^1/d_i tends to exp(2t) for some t > d2^dD+dlog 2. This satisfies the conditions of Theorem <ref> and hence 𝒩(F) ≥ t. Let t^'≠ t and F^' be an extension ℚ(p^'_i^1/d^'_i| i ∈ℕ) with the same conditions as F, but with p^'_i^1/d^'_i going to exp(2t^'). We need to show that KF cannot contain F^'. Now F^' contains infinitely many p^'_i^1/d^'_i that are not contained in F. When d^'_i > [K:ℚ], then also p^'_i^1/d^'_i∉ KF.
Let C >0 be a constant and K a number field. Then there exist uncountably many algebraic extensions F of K such that 𝒩_house(Ø_F) > C.
Fields F with prescribed value for 𝒩_house(Ø_F) are constructed in <cit.>. The same argument as above applies since the fields are of similar form.
§.§ Relative Northcott numbers
In <cit.>, Northcott numbers are considered in a relative setting. The following simplified statement of their result suffices for our needs.
[<cit.> Thm. 1.7.] There exists a field L satisfying 𝒩(L)=0 such that, for every t∈(0,∞], there exist sequences of prime numbers (p_i)_i ∈ℕ, (q_i)_i ∈ℕ, and (d_i)_i ∈ℕ such that the field F = L((p_i/q_i)^1/d_i| i ∈ℕ) satisfies 𝒩(F∖ L) = t.
Let L ⊆ F ⊆ℚ̅ be fields satisfying 𝒩(L)=c and 𝒩(F∖ L) = t. Then there exists no x∈ F∖ L satisfying h(x) < t-c.
We notice that the set F∖ L is closed under multiplication by elements in L^×. Suppose x ∈ F∖ L satisfies h(x) < t-c. Let ϵ > 0 be such that h(x)+2ϵ < t-c. Then for any of the infinitely many y∈ L^× satisfying h(y) ≤ c+ϵ, yx lies in F∖ L and satisfies h(yx) ≤ h(y) + h(x) < t-c-2ϵ+c+ϵ = t-ϵ. This contradicts the assumption 𝒩(F∖ L) = t.
Using the lemma above we can state and prove our results in a relative setting. Theorem <ref>, for instance, would take the following form.
Consider ℙ^n over an algebraic extension L/ℚ endowed with the canonical toric height ĥ. Let d ∈ℕ and C > 0 be constants. Suppose that 𝒩(L) = c. Let F be an extension of L, such that its relative Northcott number satisfies
𝒩(F∖ L) > d(C + 7/2 nlog2 + ∑_i = 1^n 1/2i +log2) + c.
Then all F-rational cycles V on ℙ^n_L such that D(V) ≤ d and ĥ(V) < CD(V) are already defined over L.
§ HEIGHTS
This section will contain an overview of some different notions of heights and the bounds on their differences. The two notions of heights we will consider are Arakelov heights, which are defined using arithmetic intersection theory, and Philippon heights, whose definition relies on Chow forms of subvarieties of projective space. While Arakelov heights have conceptual advantages, the Philippon height will be crucial to obtain information on the height of a subvariety from the arithmetic of its field of definition.
As a link between these two notions we use canonical heights. Canonical heights may be considered as Arakelov heights, but can at the same time be obtained from Philippon heights by a limit procedure. We will lastly apply this study to prove Theorems <ref> and <ref>.
§.§ Arakelov heights and adelic metrics
We now introduce the notions in Arakelov geometry needed in this text. For a more comprehensive survey, we refer to <cit.>.
Let X be a proper scheme over ℚ. For all places v ≤∞ we may associate an analytic space X^an_v. For v = ∞ we set X^an_∞ = X(ℂ)/F_∞, where F_∞ denotes complex conjugation. For v < ∞ the definition of the analytification is due to Berkovich in <cit.>. For all v this is a compact metrizable, locally contractible topological space containing X(ℂ_v)/Gal(ℂ_v/ℚ_v) as a dense subspace. Further, it's equipped with the structure of a locally ringed space with a valued structure sheaf Ø_X^an_v, i.e. to each f ∈Ø_X^an_v(U) we can associate an absolute value function |f|: U →ℝ_+ that is continuous in a way that is compatible with restrictions. We define X_𝔸 = ∐_v≤∞ X^an_v.
We now define the structure of an adelic metric on a line bundle L on X. An adelic metric is a collection of compatible v-adic metrics. A v-adic metric on a line bundle L^an_v on X^an_v is the association of a norm function ||s||_v:U →ℝ_+ to every section s ∈ L^an_v(U) compatible with restriction. Being a norm function means compatibility with multiplication by holomorphic functions and that ||s||_v only vanishes when s does. Tensor products and inverses of line bundles with v-adic metrics are canonically endowed with v-adic metrics. The absolute value endows the trivial bundle with a v-adic metric at all places.
The compatibility conditions for adelic metrics reflect the global nature of X. A model (𝒳,ℒ) of (X,L) over ℤ induces v-adic metrics at all finite places. For a collection of v-adic metrics to form an adelic metric we demand it agrees with the metrics induced by (𝒳,ℒ) at all but finitely many places. If for some power the metrics agree at all places with model metrics we say that the adelic metrics are algebraic.
Not all adelically metrized line bundles can be studied equally well. It is often helpful to impose algebraicity and positivity conditions. A notion fulfilling these requirements is semipositivity. Semipositive metrics are limits of algebraic metrics with a positivity condition. Important examples of semipositive metrics are the canonical metrics obtained from polarized dynamical systems. An adelic line bundle is called admissible if it can be represented as the difference of semipositive adelic line bundles.
We can easily define the height of a point P ∈ X(ℚ̅) in terms of adelic metrics. Let L̅ be an adelically metrized line bundle on X with underlying line bundle L and P ∈ X(ℚ̅). This point defines a point P_v in the Berkovich space X^an_v for all v. The height of a point P ∈ X(ℚ̅) with respect to an adelically metrized line bundle L̅ on X is defined as h_L̅(P) = -∑_v ≤∞log||s(P_v)||_v, where s is a meromorphic section of L with no poles or zeroes at P.
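As a basic consistency check (a standard computation, recorded only as an illustration): take X = ℙ^n and L̅ = Ø(1) with its canonical toric metric, for which ||x_0(P_v)||_v = |p_0|_v/max_i |p_i|_v at every place. For a point P = [p_0 : … : p_n] ∈ℙ^n(ℚ) with coprime integer coordinates and p_0 ≠ 0 the definition gives
h_L̅(P) = -∑_v ≤∞log(|p_0|_v/max_i |p_i|_v) = ∑_v ≤∞log max_i |p_i|_v = log max_i |p_i|,
the usual logarithmic Weil height; the second equality uses the product formula and the last one the coprimality of the coordinates.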
More generally, the height of irreducible closed subsets of X_ℚ̅ is defined using arithmetic intersection theory. Given an irreducible closed subset Z ⊆ X_ℚ̅ of dimension d, we define
h_L̅(Z) = _L̅(Z)= (ĉ_1(L̅)^d+1|Z).
We do not follow the convention of <cit.> since we would like a notion which is additive in cycles. Our convention differs from that of <cit.> by the factor D(V)= (dim(V) + 1)deg(V).
§.§ Heights under the variations of metrics
We will now introduce a lemma comparing the heights with respect to two admissible metrics.
Let X be a proper scheme over ℚ endowed with a line bundle L. Let L̅ and L̅^' be admissible adelic metrics on L. Then there exists a constant C ∈ℝ such that for all closed integral subschemes V ⊆ X_ℚ̅ we have
|h_L̅(V)-h_L̅^'(V)| ≤CD(V).
If L is ample and the metrics are algebraic, the admissibility assumption can be omitted.
This follows from <cit.>, a limit argument and linearity. The second case is Prop. 3.7 of loc. cit. In order to follow our convention we multiply the bounds by D(V).
§.§ Philippon height
There is an alternative definition of heights of subvarieties of projective space introduced by Philippon in his papers <cit.>, <cit.> and <cit.>. The Philippon height is obtained from the coefficients of the Chow form of the variety. This viewpoint is important in order to obtain information on the height of a subvariety from the arithmetic of its field of definition. We do not consider the case of weighted projective spaces. For more details we refer to Philippon's original papers. The heights in his different papers differ in the contribution of the infinite places. We will follow <cit.>.
In order to define the Philippon height of a subvariety of projective space we need to first define its Chow form. This is done using projective duality. Let K be a field and V be a closed geometrically irreducible subvariety of ℙ_K^n of dimension r. Denote the variety parametrizing linear hyperplanes in ℙ^n, i.e. the projective dual of ℙ^n, by ℙ^n,∨. The subvariety X of (ℙ^n,∨)^r+1 consisting of the tuples of hyperplanes (H_0, …, H_r) such that H_0 ∩…∩ H_r ∩ V ≠∅ is a hypersurface. In fact, it is the vanishing locus of a multihomogeneous polynomial over K of degree deg V in the coordinates of each factor. This polynomial f, defined up to multiplication by a scalar, is called the Chow form of V. If K is a number field we may now proceed to define the Philippon height of V. Given the Chow form we define
h_Ph(V) := 1/[K:ℚ] ∑_v [K_v:ℚ_v] log M_v(f).
Here M_v(f) is defined as the maximum v-adic absolute value of the coefficients of f when v is a finite place. For the archimedean places we define
log M_v(f) = ∫_(S^n+1)^r+1 log|σ_v(f)| dσ^∧(r+1)_n+1 + D(V)∑^n_i=1 1/2i.
Here σ_v denotes a choice of complex embedding for the place v. S^n+1 denotes the unit sphere in ℂ^n+1, while σ_n+1 denotes the invariant probability measure on S^n+1. We define a variant of the Philippon height h̃_Ph by taking the contribution at an archimedean place to be the maximum modulus of the coefficients instead.
We need to compare the Philippon height with this variant in order to deduce from the Northcott number of a field something about the height of projective varieties defined over said field. Philippon attributes such a comparison to Lelong <cit.>. We state it now.
Let V ⊆ℙ^n_ℚ̅ be an integral closed subvariety; then we have the inequalities
0 ≤ h_Ph(V)-h̃_Ph(V) ≤ D(V)∑_i=1^n 1/2i = D(V)c(n).
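For later reference, the first values of this constant are easy to record (a direct computation): c(1) = 1/2, c(2) = 3/4, c(3) = 11/12, and in general c(n) = (1/2)(1 + 1/2 + ⋯ + 1/n) ≤ (1 + log n)/2.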
Lastly we need to compare Philippon's heights with the toric canonical height on projective space. This allows us to relate Arakelov heights with Philippon heights. The following statement is taken from <cit.>.
Let V ⊆ℙ^n_ℚ̅ be a closed irreducible subset. Let ĥ denote the canonical toric height on ℙ^n. Then
|ĥ(V)-h_Ph(V)| ≤ D(V)·(7/2)·n·log2.
§.§ Cycles
It may be useful to consider the height of general homogeneous cycles defined over a field F⊆ℚ̅. Since the components of an F-rational cycle C are not necessarily defined over F, a further lemma is required to relate its height to the arithmetic of F.
Let C = ∑ n_i V_i, for geometrically irreducible V_i, be an F-rational cycle on ℙ^n. Its Chow form is defined to be f_C = ∏ f_V_i^n_i.
Up to scalar, f_C has coefficients in F. Let us define the Philippon height of a cycle C by applying Philippon's construction to f_C. We can define h̃_Ph in the analogous way.
The resulting height isn't linear with respect to addition of cycles. To address this issue we invoke an inequality on the height of products of polynomials.
[<cit.> Thm 1.6.13] Let f_1,…,f_m be polynomials in n variables, d the sum of partial degrees of f = f_1… f_m and let h denote the logarithmic Weil height of the coefficients of a polynomial considered as a projective tuple. Then
|h(f)-∑^m_j=1 h(f_j)| ≤dlog2.
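A small example may clarify the shape of this bound (illustrative only): for f_1 = f_2 = x + 1 one has h(f_1) = h(f_2) = 0, while f = f_1f_2 = x^2 + 2x + 1 has h(f) = log 2, so
|h(f) - (h(f_1)+h(f_2))| = log 2 ≤ 2log 2 = dlog 2;
the absolute value is needed because cancellation among coefficients may also make h(f) smaller than the sum.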
Let C=∑ n_i V_i be a homogeneous cycle of ℙ^n_ℚ̅. Then
|h̃_Ph(C) - ∑ n_i h̃_Ph(V_i)| ≤ D(C)log2.
We apply the theorem to f_C = ∏ f_V_i^n_i and obtain that d = (dim(C) +1)deg(C) = D(C).
§.§ Small subvarieties of projective space
In this section we prove Theorems <ref> and <ref> on small subvarieties.
Let V=∑ n_i V_i be an F-rational homogeneous cycle. Then its Chow form f_V has coefficients in F. As such, we know that h(f_V) ≤𝒩(F)-ϵ for only finitely many cycles. By Lemma <ref> there can only be finitely many cycles satisfying ∑ n_i h̃_Ph(V_i) ≤𝒩(F)- D(V)log 2 -ϵ. Consequently there are only finitely many V with ∑ n_i h_Ph(V_i)+ ϵ≤𝒩(F) - D(V)(c(n) + log 2 ) by Lemma <ref>. Moreover, there are only finitely many V such that
ĥ(V)+ ϵ= ∑ n_i ĥ(V_i)+ ϵ≤𝒩(F) - D(V)(7/2n log2 +c(n) + log2 ) by Proposition <ref>. Under the assumption that C < 𝒩(F)/d -7/2n log 2 - c(n)-log 2, every cycle V with D(V) ≤ d and ĥ(V) < CD(V) satisfies this last inequality for ϵ small enough, so there are only finitely many F-rational cycles V on ℙ^n_K such that D(V) ≤ d and ĥ(V) < CD(V). By rearranging the inequality, we conclude the theorem.
We easily obtain Theorem <ref> as a consequence.
We need to compare the heights on X with heights on projective varieties. For this we replace L̅ by a suitable tensor power such that the underlying line bundle is very ample. Let X ↪ℙ^k be an embedding associated to this very ample line bundle. Pulling back the canonical toric metric on Ø(1) induces a second adelic metric on it.
Then by Lemma <ref> the height associated to the pulled-back metric only differs from the one associated to the original metric by an amount bounded by R^' D(V) for some constant R^'. Now the result follows from Theorem <ref>.
As an alternative to admissibility one may require algebraicity in the above theorem.
§ APPLICATIONS TO DYNAMICAL SYSTEMS
Specializations of our main theorem can be obtained by applying more specific height bounds. The arguments required to obtain these specializations are adaptations of the proof of Theorem <ref> and will only be sketched.
The dynamical systems to be considered in greater detail are the ones given by multiplication on abelian varieties and selfmaps of projective space. We start out with a more general situation considered in the foundational paper of Call and Silverman (<cit.>).
In their setup, X is a smooth projective variety over a number field K endowed with a selfmap ϕ and a divisor class η∈Pic(X) ⊗ℝ satisfying ϕ^* η = αη for some α > 1. Suppose h is a Weil function associated with η. Then there is a constant R such that |h ∘ϕ - α h| ≤ R. Let ĥ denote the canonical height for η and ϕ. Then the following holds.
[<cit.> Proposition 1.2] For every P ∈ X(K̅), the following inequality holds:
|ĥ(P) - h(P)| ≤ R/(α- 1).
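A degenerate but instructive instance (an illustration, not a statement from <cit.>): for the power map ϕ([x:y]) = [x^2:y^2] on ℙ^1, with η the hyperplane class and h the standard Weil height, one has h(ϕ(P)) = 2h(P) exactly, so one may take R = 0; the proposition then forces ĥ = h, as expected for the canonical height of the squaring map.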
Note that we can't expect to have finitely many small points for arbitrary η, as an associated Weil function might not even be bounded below. We may, however, by adapting the proof of Theorem <ref> obtain the following statement.
In the current setting, suppose that η is very ample and h is induced by the canonical toric height under some embedding into projective space. Let F be an algebraic extension of K satisfying 𝒩(F) > C + R/(α - 1). Then there are only finitely many points P ∈ X(F) such that ĥ(P) ≤ C.
We adapt the proof of Theorem <ref>. We bound the height of a point in projective space from below by the height of one of its coordinates and use the bound in Proposition <ref>.
§.§ Small subvarieties of abelian varieties
In order to study small points on abelian varieties, we embed them into projective space using a variant of the theta embedding, first introduced in <cit.>. For a more detailed overview of its properties, see <cit.>. We will then apply a bound on the difference of the canonical height to the Philippon height from loc.cit. to deduce a result on small points of abelian varieties.
Let A be a g-dimensional abelian variety defined over a number field K. Let ℒ be an ample symmetric line bundle on A. Then ℒ^⊗ 16 is very ample. David and Philippon choose sections that yield the embedding Θ_ℒ^⊗ 16, or simply Θ, into ℙ^N. It is inspired by the embedding of Mumford in <cit.>, but differs from it. As such, it is not defined over K itself, but over the field generated by
( A A A^∨),
where p_ℒ denotes the polarization morphism associated to ℒ.
In this setting, we have the following comparison of heights.
[<cit.> Proposition 3.9] Let V be an integral closed subvariety of A_K̅ and let h_2 denote the l^2-logarithmic Weil height. Then
|ĥ_ℒ^⊗16(V) - h_Ph(Θ(V))| ≤ c_0(Θ)D(V).
Here, c_0(Θ) = 4^g+1h_2(Θ(0_A)) + 3g log 2.
We adapt the proof of Theorem <ref>. The main differences are that Proposition <ref> applies to ĥ_ℒ^⊗ 16 = 16 ĥ_ℒ instead of directly to ĥ_ℒ, and that the Θ-embedding of A is not defined over its field of definition K, but only over the field L defined above.
The l^2-logarithmic Weil height h_2(Θ_ℒ^⊗ 16(0_A)) in the theorem is compared to the Faltings height of the abelian variety in <cit.>. This allows for a phrasing of the theorem that does not reference the theta embedding. In <cit.> the quantity h(Θ_ℒ^⊗ 16(0_A)) is denoted by h(A), which may lead to confusion with the Philippon height of A, see <cit.>.
§.§ Small subvarieties with respect to dynamical systems on ℙ^n
Another case in which explicit bounds on differences of heights exist is that of divisors on ℙ^n with a canonical height coming from a selfmap. In fact, <cit.> proves the following statement.
Let f: ℙ^n →ℙ^n be a morphism of degree d ≥ 2 defined over ℚ̅. Let V be an effective divisor on ℙ^n; then
|ĥ_f(V) - h_Ph(V)| ≤ (C_1(n,d)h(f) + C_2(n,d))D(V),
where h(f) is the height of the coefficients of f as a projective tuple. Moreover, one may choose
C_1(n,d)=5nd^n+1, C_2(n,d)=3^n n^n+1(2d)^n2^n+4d^n.
For simplicity he states the theorem only for hypersurfaces, but claims there to be no conceptual obstruction to its generalization.
This leads to Theorem <ref>.
We adapt the proof of Theorem <ref>. Note that Theorem <ref> applies directly to cycles, so the results in Section <ref> are not needed.